| paper | authors | abstract | link | track | award | paper_id |
|---|---|---|---|---|---|---|
Scaling Unlocks Broader Generation and Deeper Functional Understanding of Proteins
|
[
"Aadyot Bhatnagar",
"Sarthak Jain",
"Joel Beazer",
"Samuel C. Curran",
"Alexander M. Hoffnagle",
"Kyle Shan Ching",
"Michael Martyn",
"Stephen Nayfach",
"Jeffrey A. Ruffolo",
"Ali Madani"
] |
Generative protein language models (PLMs) are powerful tools for designing proteins purpose-built to solve problems in medicine, agriculture, and industrial processes.
Recent work has trained ever larger language models, but there has been little systematic study of the optimal training distributions and the influence of model scale on the sequences generated by PLMs.
We introduce the ProGen3 family of sparse generative PLMs, and we develop compute-optimal scaling laws to scale up to a 46B-parameter model pre-trained on 1.5T amino acid tokens.
ProGen3's pre-training data is sampled from an optimized data distribution over the PPA v1, a carefully curated dataset of 3.4B full-length proteins.
We present the first wet-lab evaluation of how model scale influences the sequences generated by PLMs, and we find that larger models generate viable proteins for a much wider diversity of protein families.
Finally, we find both computationally and experimentally that larger models are more responsive to alignment with laboratory data, resulting in improved protein fitness prediction and sequence generation capabilities.
These results indicate that larger PLMs like ProGen3-46B trained on larger, well-curated datasets are powerful foundation models that push the frontier of protein design.
|
https://openreview.net/forum?id=yvGL2HP7pU
|
Main
|
Spotlight
|
yvGL2HP7pU
|
Precise Information Control in Long-Form Text Generation
|
[
"Jacqueline He",
"Howard Yen",
"Margaret Li",
"Shuyue Stella Li",
"Zhiyuan Zeng",
"Weijia Shi",
"Yulia Tsvetkov",
"Danqi Chen",
"Pang Wei Koh",
"Luke Zettlemoyer"
] |
A central challenge in language models (LMs) is faithfulness hallucination: the generation of information unsubstantiated by input context. To study this problem, we propose Precise Information Control (PIC), a new task formulation that requires models to generate long-form outputs grounded in a provided set of short self-contained statements, without adding any unsupported ones. PIC includes a full setting that tests a model’s ability to include exactly all input claims, and a partial setting that requires the model to selectively incorporate only relevant claims. We present PIC-Bench, a benchmark of eight long-form generation tasks (e.g., summarization, biography generation) adapted to the PIC setting, where LMs are supplied with well-formed, verifiable input claims. Our evaluation of a range of open and proprietary LMs on PIC-Bench reveals that, surprisingly, state-of-the-art LMs still hallucinate against user-provided input in over 70% of generations. To alleviate this lack of faithfulness, we introduce a post-training framework that uses a weakly supervised preference data construction method to train an 8B PIC-LM with stronger PIC ability—improving from 69.1% to 91.0% F1 in the full PIC setting. When integrated into end-to-end factual generation pipelines, PIC-LM improves exact match recall by 17.1% on ambiguous QA with retrieval, and factual precision by 30.5% on a birthplace fact-checking task, underscoring the potential of precisely grounded generation.
|
https://openreview.net/forum?id=yv7zKaptjo
|
Main
|
Poster
|
yv7zKaptjo
|
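As a rough illustration of the claim-level metric used in the full PIC setting above, the sketch below computes precision, recall, and F1 over input claims, assuming Python and a placeholder `supports` entailment check (the paper uses trained verifiers, not substring matching):

```python
# Minimal sketch of claim-level F1 for the "full" PIC setting.
# `supports` is a hypothetical stand-in for a learned entailment verifier.

def supports(claim: str, text: str) -> bool:
    # Placeholder check; a real verifier would test entailment.
    return claim.lower() in text.lower()

def pic_f1(input_claims: list[str], output_claims: list[str], output: str) -> float:
    # Recall: fraction of input claims the output actually covers.
    recall = (sum(supports(c, output) for c in input_claims) / len(input_claims)
              if input_claims else 0.0)
    # Precision: fraction of output claims supported by the input claims.
    source = " ".join(input_claims)
    precision = (sum(supports(c, source) for c in output_claims) / len(output_claims)
                 if output_claims else 0.0)
    return (2 * precision * recall / (precision + recall)
            if precision + recall > 0 else 0.0)
```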
A Minimalist Example of Edge-of-Stability and Progressive Sharpening
|
[
"Liming Liu",
"Zixuan Zhang",
"Simon Shaolei Du",
"Tuo Zhao"
] |
Recent advances in deep learning optimization have unveiled two intriguing phenomena under large learning rates: Edge of Stability (EoS) and Progressive Sharpening (PS), challenging classical Gradient Descent (GD) analyses. Current research approaches, using either generalist frameworks or minimalist examples, face significant limitations in explaining these phenomena. This paper advances the minimalist approach by introducing a two-layer network with a two-dimensional input, where one dimension is relevant to the response and the other is irrelevant. Through this model, we rigorously prove the existence of progressive sharpening and self-stabilization under large learning rates, and establish non-asymptotic analysis of the training dynamics and sharpness along the entire GD trajectory. Besides, we connect our minimalist example to existing works by reconciling the existence of a well-behaved "stable set" between minimalist and generalist analyses, and extending the analysis of Gradient Flow Solution sharpness to our two-dimensional input scenario. These findings provide new insights into the EoS phenomenon from both parameter and input data distribution perspectives, potentially informing more effective optimization strategies in deep learning practice.
|
https://openreview.net/forum?id=yst8MHfcgP
|
Main
|
Poster
|
yst8MHfcgP
|
Sparc3D: Sparse Representation and Construction for High-Resolution 3D Shapes Modeling
|
[
"Zhihao Li",
"Yufei Wang",
"Heliang Zheng",
"Yihao Luo",
"Bihan Wen"
] |
High-fidelity 3D object synthesis remains significantly more challenging than 2D image generation due to the unstructured nature of mesh data and the cubic complexity of dense volumetric grids. Existing two-stage pipelines—compressing meshes with a VAE (using either 2D or 3D supervision), followed by latent diffusion sampling—often suffer from severe detail loss caused by inefficient representations and modality mismatches introduced by the VAE.
We introduce Sparc3D, a unified framework that combines a sparse deformable marching cubes representation Sparcubes with a novel encoder Sparconv-VAE. Sparcubes converts raw meshes into high-resolution ($1024^3$) surfaces with arbitrary topology by scattering signed distance and deformation fields onto a sparse cube, allowing differentiable optimization.
Sparconv-VAE is the first modality-consistent variational autoencoder built entirely upon sparse convolutional networks, enabling efficient and near-lossless 3D reconstruction suitable for high-resolution generative modeling through latent diffusion. Sparc3D achieves state-of-the-art reconstruction fidelity on challenging inputs, including open surfaces, disconnected components, and intricate geometry. It preserves fine-grained shape details, reduces training and inference cost, and integrates naturally with latent diffusion models for scalable, high-resolution 3D generation.
|
https://openreview.net/forum?id=yslRXs9gcJ
|
Main
|
Poster
|
yslRXs9gcJ
|
Transformers Learn Faster with Semantic Focus
|
[
"Parikshit Ram",
"Kenneth L. Clarkson",
"Tim Klinger",
"Shashanka Ubaru",
"Alexander G. Gray"
] |
Various forms of sparse attention have been explored to mitigate the quadratic computational and memory cost of the attention mechanism in transformers. We study sparse transformers not through a lens of efficiency but rather in terms of learnability and generalization. Empirically studying a range of attention mechanisms, we find that input-dependent sparse attention models appear to converge faster and generalize better than standard attention models, while input-agnostic sparse attention models show no such benefits -- a phenomenon that is robust across architectural and optimization hyperparameter choices. This can be interpreted as demonstrating that concentrating a model's "semantic focus" with respect to the tokens currently being considered (in the form of input-dependent sparse attention) accelerates learning. We develop a theoretical characterization of the conditions that explain this behavior. We establish a connection between the stability of the standard softmax and the loss function's Lipschitz properties, then show how sparsity affects the stability of the softmax and the subsequent convergence and generalization guarantees resulting from the attention mechanism. This allows us to theoretically establish that input-agnostic sparse attention does not provide any benefits. We also characterize conditions when semantic focus (input-dependent sparse attention) can provide improved guarantees, and we validate that these conditions are in fact met in our empirical evaluations.
|
https://openreview.net/forum?id=ysgt21bQAM
|
Main
|
Poster
|
ysgt21bQAM
|
Model Inversion with Layer-Specific Modeling and Alignment for Data-Free Continual Learning
|
[
"Ruilin Tong",
"Haodong Lu",
"Yuhang Liu",
"Dong Gong"
] |
Continual learning (CL) aims to incrementally train a model on a sequence of tasks while maintaining performance on previously seen ones. Despite their effectiveness in mitigating forgetting, data storage and replay may be infeasible due to privacy or security constraints, and are impractical or unavailable for arbitrary pre-trained models. Data-free or exemplar-free CL aims to continually update models with new
tasks without storing previous data. In addition to regularizing updates, we employ model inversion to synthesize data from the trained model, anchoring learned knowledge through replay without retaining old data. However, model inversion in predictive models faces two key challenges. First, generating inputs (e.g., images) solely from highly compressed output labels (e.g., classes) often causes drift between synthetic and real data. Replaying on such synthetic data can contaminate and erode knowledge learned from real data, further degrading inversion quality over time. Second, performing inversion is usually computationally expensive, as each iteration requires backpropagation through the entire model and many steps are needed for convergence. These problems are more severe with large pre-trained models such as Contrastive Language-Image Pre-training (CLIP) models. To improve model inversion efficiency, we propose a Per-layer Model Inversion (PMI) approach inspired by the faster convergence of single-layer optimization. The inputs optimized from PMI provide strong initialization for full-model inversion, significantly reducing the number of iterations required for convergence. To address feature distribution shift, we model the class-wise feature distribution using a Gaussian distribution and preserve distributional information with a contrastive model. Sampling features for inversion ensures alignment between synthetic and real feature distributions. Combining PMI and feature modeling, we demonstrate the feasibility of incrementally training models on new classes by generating data from pseudo image features mapped through semantic-aware feature projection. Our method shows strong effectiveness and compatibility across multiple CL settings.
|
https://openreview.net/forum?id=yruGxKsZyH
|
Main
|
Poster
|
yruGxKsZyH
|
Vulnerable Data-Aware Adversarial Training
|
[
"Yuqi Feng",
"Jiahao Fan",
"Yanan Sun"
] |
Fast adversarial training (FAT) has been considered one of the most effective alternatives to computationally intensive adversarial training. Generally, FAT methods pay equal attention to every sample of the target task. However, samples differ in their distance to the decision boundary, and learning from samples far from the boundary (i.e., less important to adversarial robustness) adds training cost and leads to sub-optimal results. To tackle this issue, we present vulnerable data-aware adversarial training (VDAT) in this study. Specifically, we first propose a margin-based vulnerability calculation method to measure the vulnerability of data samples. Moreover, we propose a vulnerability-aware data filtering method that reduces the training data for adversarial training, thereby improving training efficiency. The experiments are conducted in terms of adversarial training and robust neural architecture search on CIFAR-10, CIFAR-100, and ImageNet-1K. The results demonstrate that VDAT is up to 76% more efficient than state-of-the-art FAT methods, while achieving improvements in natural accuracy and adversarial accuracy in both scenarios. Furthermore, the visualizations and ablation studies show the effectiveness of both core components designed in VDAT.
|
https://openreview.net/forum?id=yrrU5YChQr
|
Main
|
Poster
|
yrrU5YChQr
|
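A minimal sketch of the margin-based vulnerability scoring and data filtering described in the VDAT abstract above, assuming PyTorch and taking the top-1/top-2 logit gap as the margin (the paper's exact margin definition may differ):

```python
import torch

def margin_vulnerability(logits: torch.Tensor) -> torch.Tensor:
    # logits: (batch, num_classes). A smaller gap between the top two
    # logits means the sample sits closer to the decision boundary, i.e.,
    # it is more vulnerable.
    top2 = logits.topk(2, dim=1).values
    return -(top2[:, 0] - top2[:, 1])

def filter_vulnerable(logits: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    # Keep only the most vulnerable fraction of the batch for adversarial
    # training, mirroring the abstract's data-filtering step.
    scores = margin_vulnerability(logits)
    k = max(1, int(keep_ratio * scores.numel()))
    return scores.topk(k).indices
```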
Personalized Subgraph Federated Learning with Differentiable Auxiliary Projections
|
[
"Wei Zhuo",
"Zhaohuan Zhan",
"Han Yu"
] |
Federated Learning (FL) on graph-structured data typically faces non-IID challenges, particularly in scenarios where each client holds a distinct subgraph sampled from a global graph. In this paper, we introduce **Fed**erated learning with **Aux**iliary projections (FedAux), a personalized subgraph FL framework that learns to align, compare, and aggregate heterogeneously distributed local models without sharing raw data or node embeddings. In FedAux, each client jointly trains (i) a local GNN and (ii) a learnable auxiliary projection vector (APV) that differentiably projects node embeddings onto a 1D space. A soft-sorting operation followed by a lightweight 1D convolution refines these embeddings in the ordered space, enabling the APV to effectively capture client-specific information. After local training, these APVs serve as compact signatures that the server uses to compute inter‑client similarities and perform similarity‑weighted parameter mixing, yielding personalized models while preserving cross‑client knowledge transfer. Moreover, we provide rigorous theoretical analysis to establish the convergence and rationality of our design. Empirical evaluations across diverse graph benchmarks demonstrate that FedAux substantially outperforms existing baselines in both accuracy and personalization performance. The code is available at [https://github.com/JhuoW/FedAux](https://github.com/JhuoW/FedAux).
|
https://openreview.net/forum?id=yrNw1R8o2W
|
Main
|
Poster
|
yrNw1R8o2W
|
KTAE: A Model-Free Algorithm to Key-Tokens Advantage Estimation in Mathematical Reasoning
|
[
"Wei Sun",
"Wen Yang",
"Pu Jian",
"Qianlong Du",
"Fuwei Cui",
"Shuo Ren",
"Jiajun Zhang"
] |
Recent advances have demonstrated that integrating reinforcement learning with rule-based rewards can significantly enhance the reasoning capabilities of large language models (LLMs), even without supervised fine-tuning (SFT). However, prevalent reinforcement learning algorithms such as GRPO and its variants (e.g., DAPO) suffer from a coarse-granularity issue when computing the advantage. Specifically, they compute rollout-level advantages that assign identical values to every token within a sequence, failing to capture token-specific contributions. To address this limitation, we propose Key-token Advantage Estimation (KTAE)—a novel algorithm that estimates fine-grained, token-level advantages without introducing additional models. KTAE leverages the correctness of sampled rollouts and applies statistical analysis to quantify the importance of individual tokens within a sequence to the final outcome. This quantified token-level importance is then combined with the rollout-level advantage to obtain a more fine-grained token-level advantage estimation. Empirical results show that models trained with GRPO+KTAE and DAPO+KTAE outperform baseline methods across five mathematical reasoning benchmarks. Notably, they achieve higher accuracy with shorter responses and even surpass R1-Distill-Qwen-1.5B using the same base model.
|
https://openreview.net/forum?id=yqQVRNdmKJ
|
Main
|
Poster
|
yqQVRNdmKJ
|
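The core idea above (a token-specific offset, derived from statistics over correct and incorrect rollouts, added to the shared rollout-level advantage) can be sketched as follows; the frequency-contrast statistic is an illustrative stand-in for the paper's exact statistical analysis:

```python
from collections import Counter

def token_importance(correct_rollouts, incorrect_rollouts):
    # Contrast how often each token appears in correct vs. incorrect
    # rollouts; tokens over-represented in correct ones get weight > 0.
    pos, neg = Counter(), Counter()
    for r in correct_rollouts:
        pos.update(r)
    for r in incorrect_rollouts:
        neg.update(r)
    n_pos = max(1, sum(len(r) for r in correct_rollouts))
    n_neg = max(1, sum(len(r) for r in incorrect_rollouts))
    vocab = set(pos) | set(neg)
    return {t: pos[t] / n_pos - neg[t] / n_neg for t in vocab}

def ktae_advantage(rollout, rollout_advantage, importance, scale=1.0):
    # Token-level advantage = shared rollout-level advantage plus a
    # token-specific offset from the importance statistic.
    return [rollout_advantage + scale * importance.get(t, 0.0) for t in rollout]
```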
MotionRAG: Motion Retrieval-Augmented Image-to-Video Generation
|
[
"Chenhui Zhu",
"Yilu Wu",
"Shuai Wang",
"Gangshan Wu",
"Limin Wang"
] |
Image-to-video generation has made remarkable progress with the advancements in diffusion models, yet generating videos with realistic motion remains highly challenging. This difficulty arises from the complexity of accurately modeling motion, which involves capturing physical constraints, object interactions, and domain-specific dynamics that are not easily generalized across diverse scenarios. To address this, we propose MotionRAG, a retrieval-augmented framework that enhances motion realism by adapting motion priors from relevant reference videos through Context-Aware Motion Adaptation (CAMA). The key technical innovations include: (i) a retrieval-based pipeline extracting high-level motion features using a video encoder and specialized resamplers to distill semantic motion representations; (ii) an in-context learning approach for motion adaptation implemented through a causal transformer architecture; (iii) an attention-based motion injection adapter that seamlessly integrates transferred motion features into pretrained video diffusion models. Extensive experiments demonstrate that our method achieves significant improvements across multiple domains and various base models, all with negligible computational overhead during inference. Furthermore, our modular design enables zero-shot generalization to new domains by simply updating the retrieval database without retraining any components. This research enhances the core capability of video generation systems by enabling the effective retrieval and transfer of motion priors, facilitating the synthesis of realistic motion dynamics.
|
https://openreview.net/forum?id=yqBIKzTFT8
|
Main
|
Poster
|
yqBIKzTFT8
|
Hierarchical Semantic-Augmented Navigation: Optimal Transport and Graph-Driven Reasoning for Vision-Language Navigation
|
[
"Xiang Fang",
"Wanlong Fang",
"Changshuo Wang"
] |
Vision-Language Navigation in Continuous Environments (VLN-CE) poses a formidable challenge for autonomous agents, requiring seamless integration of natural language instructions and visual observations to navigate complex 3D indoor spaces. Existing approaches often falter in long-horizon tasks due to limited scene understanding, inefficient planning, and lack of robust decision-making frameworks. We introduce the \textbf{Hierarchical Semantic-Augmented Navigation (HSAN)} framework, a groundbreaking approach that redefines VLN-CE through three synergistic innovations. First, HSAN constructs a dynamic hierarchical semantic scene graph, leveraging vision-language models to capture multi-level environmental representations—from objects to regions to zones—enabling nuanced spatial reasoning. Second, it employs an optimal transport-based topological planner, grounded in Kantorovich's duality, to select long-term goals by balancing semantic relevance and spatial accessibility with theoretical guarantees of optimality. Third, a graph-aware reinforcement learning policy ensures precise low-level control, navigating subgoals while robustly avoiding obstacles. By integrating spectral graph theory, optimal transport, and advanced multi-modal learning, HSAN addresses the shortcomings of static maps and heuristic planners prevalent in prior work. Extensive experiments on multiple challenging VLN-CE datasets demonstrate that HSAN achieves state-of-the-art performance, with significant improvements in navigation success and generalization to unseen environments.
|
https://openreview.net/forum?id=ypVW5jvguX
|
Main
|
Poster
|
ypVW5jvguX
|
DCA: Graph-Guided Deep Embedding Clustering for Brain Atlases
|
[
"Mo wang",
"Kaining Peng",
"Jingsheng Tang",
"Hongkai Wen",
"Quanying Liu"
] |
Brain atlases are essential for reducing the dimensionality of neuroimaging data and enabling interpretable analysis. However, most existing atlases are predefined, group-level templates with limited flexibility and resolution. We present Deep Cluster Atlas (DCA), a graph-guided deep embedding clustering framework for generating individualized, voxel-wise brain parcellations. DCA combines a pretrained autoencoder with spatially regularized deep clustering to produce functionally coherent and spatially contiguous regions. Our method supports flexible control over resolution and anatomical scope, and generalizes to arbitrary brain structures. We further introduce a standardized benchmarking platform for atlas evaluation, using multiple large-scale fMRI datasets. Across multiple datasets and scales, DCA outperforms state-of-the-art atlases, improving functional homogeneity by 98.8% and silhouette coefficient by 29%, and achieves superior performance in downstream tasks such as autism diagnosis and cognitive decoding. We also observe that a fine-tuned pretrained model achieves superior results on the corresponding task. Codes and models are available at https://github.com/ncclab-sustech/DCA.
|
https://openreview.net/forum?id=ypPxYsmZPx
|
Main
|
Poster
|
ypPxYsmZPx
|
Learning Human-Object Interaction as Groups
|
[
"Jiajun Hong",
"Jianan Wei",
"Wenguan Wang"
] |
Human-Object Interaction Detection (HOI-DET) aims to localize human-object pairs and identify their interactive relationships. To aggregate contextual cues, existing methods typically propagate information across all detected entities via self‑attention mechanisms, or establish message passing between humans and objects with bipartite graphs. However, they primarily focus on pairwise relationships, overlooking that interactions in real-world scenarios often emerge from collective behaviors ($\textit{i}.\textit{e}.$, multiple humans and objects engaging in joint activities). In light of this, we revisit relation modeling from a $\textit{group}$ view and propose GroupHOI, a framework that propagates contextual information in terms of $\textit{geometric proximity}$ and $\textit{semantic similarity}$. To exploit the geometric proximity, humans and objects are grouped into distinct clusters using a learnable proximity estimator based on spatial features derived from bounding boxes. In each group, a soft correspondence is computed via self-attention to aggregate and dispatch contextual cues. To incorporate the semantic similarity, we enhance the vanilla transformer-based interaction decoder with local contextual cues from HO-pair features. Extensive experiments on HICO-DET and V-COCO benchmarks demonstrate the superiority of GroupHOI over the state-of-the-art methods. It also exhibits leading performance on the more challenging Nonverbal Interaction Detection (NVI-DET) task, which involves varied forms of higher-order interactions within groups.
|
https://openreview.net/forum?id=yoKpumjWXc
|
Main
|
Poster
|
yoKpumjWXc
|
Spectral Analysis of Diffusion Models with Application to Schedule Design
|
[
"Roi Benita",
"Michael Elad",
"Joseph Keshet"
] |
Diffusion models (DMs) have emerged as powerful tools for modeling complex data distributions and generating realistic new samples. Over the years, advanced architectures and sampling methods have been developed to make these models practically usable. However, certain synthesis process decisions still rely on heuristics without a solid theoretical foundation.
In our work, we offer a novel analysis of the DM's inference process, introducing a comprehensive frequency response perspective. Specifically, by relying on a Gaussianity assumption, we present the inference process as a closed-form spectral transfer function, capturing how the generated signal evolves in response to the initial noise. We demonstrate how the proposed analysis can be leveraged to design a noise schedule that aligns effectively with the characteristics of the data. The spectral perspective also provides insights into the underlying dynamics and sheds light on the relationship between spectral properties and noise schedule structure. Our results lead to scheduling curves that are dependent on the spectral content of the data, offering a theoretical justification for some of the heuristics taken by practitioners.
|
https://openreview.net/forum?id=ymmY3rrD1t
|
Main
|
Poster
|
ymmY3rrD1t
|
Interpretable and Parameter Efficient Graph Neural Additive Models with Random Fourier Features
|
[
"Thummaluru Siddartha Reddy",
"Vempalli Naga Sai Saketh",
"Mahesh Chandran"
] |
Graph Neural Networks \texttt{(GNNs)} excel at jointly modeling node features and topology, yet their \emph{black-box} nature limits their adoption in real-world applications where interpretability is desired. Inspired by the success of interpretable Neural Additive Models \texttt{(NAM)} for tabular data, the Graph Neural Additive Network \texttt{(GNAN)} extends the additive modeling approach to graph data to overcome limitations of GNNs. While interpretable, \texttt{GNAN} representation learning overlooks the importance of local aggregation and, more importantly, suffers from parameter complexity. To mitigate these challenges, we introduce the Graph Neural Additive Model with Random Fourier Features (\texttt{G-NAMRFF}), a lightweight, self-interpretable graph additive architecture. \texttt{G-NAMRFF} represents each node embedding as the sum of feature-wise contributions, where contributions are modeled via a \emph{Gaussian process} \texttt{(GP)} with a graph- and feature-aware kernel. Specifically, we construct a kernel using a Radial Basis Function (\texttt{RBF}) with graph structure induced by the Laplacian and a learnable Finite Impulse Response (\texttt{FIR}) filter. We approximate the kernel using Random Fourier Features (\texttt{RFFs}), which transforms the GP prior into a Bayesian formulation that is subsequently learned using a single-layer neural network whose size equals the number of \texttt{RFF} features. \texttt{G-NAMRFF} is lightweight, with $168\times$ fewer parameters than \texttt{GNAN}. Despite its compact size, \texttt{G-NAMRFF} matches or outperforms state-of-the-art \texttt{GNNs} and \texttt{GNAN} on node and graph classification tasks, delivering real-time interpretability without sacrificing accuracy.
|
https://openreview.net/forum?id=yl9LxRL5tj
|
Main
|
Poster
|
yl9LxRL5tj
|
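The Random Fourier Features building block named above is standard; below is a minimal NumPy sketch of approximating an RBF kernel with RFFs (the graph-Laplacian/FIR filtering of the paper's full kernel is omitted):

```python
import numpy as np

def rff_features(X: np.ndarray, n_features: int, lengthscale: float, seed: int = 0):
    # Random Fourier features approximating k(x, y) = exp(-||x-y||^2 / (2 l^2)):
    # z(x) = sqrt(2/D) * cos(W^T x + b), with W ~ N(0, I / l^2), b ~ U[0, 2*pi].
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(8, 4))
Z = rff_features(X, n_features=2048, lengthscale=1.0)
K_approx = Z @ Z.T   # approximates the exact RBF Gram matrix
```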
Predictive Coding Enhances Meta-RL To Achieve Interpretable Bayes-Optimal Belief Representation Under Partial Observability
|
[
"Po-Chen Kuo",
"Han Hou",
"Will Dabney",
"Edgar Y. Walker"
] |
Learning a compact representation of history is critical for planning and generalization in partially observable environments.
While meta-reinforcement learning (RL) agents can attain near Bayes-optimal policies, they often fail to learn the compact, interpretable Bayes-optimal belief states.
This representational inefficiency potentially limits the agent's adaptability and generalization capacity.
Inspired by predictive coding in neuroscience---which suggests that the brain predicts sensory inputs as a neural implementation of Bayesian inference---and by auxiliary predictive objectives in deep RL, we investigate whether integrating self-supervised predictive coding modules into meta-RL can facilitate learning of Bayes-optimal representations.
Through state machine simulation, we show that meta-RL with predictive modules consistently generates more interpretable representations that better approximate Bayes-optimal belief states compared to conventional meta-RL across a wide variety of tasks, even when both achieve optimal policies.
In challenging tasks requiring active information seeking, only meta-RL with predictive modules successfully learns optimal representations and policies, whereas conventional meta-RL struggles with inadequate representation learning.
Finally, we demonstrate that better representation learning leads to improved generalization.
Our results strongly suggest the role of predictive learning as a guiding principle for effective representation learning in agents navigating partial observability.
|
https://openreview.net/forum?id=ykDUVoelgj
|
Main
|
Poster
|
ykDUVoelgj
|
Point3R: Streaming 3D Reconstruction with Explicit Spatial Pointer Memory
|
[
"Yuqi Wu",
"Wenzhao Zheng",
"Jie Zhou",
"Jiwen Lu"
] |
Dense 3D scene reconstruction from an ordered sequence or unordered image collections is a critical step when bringing research in computer vision into practical scenarios. Following the paradigm introduced by DUSt3R, which unifies an image pair densely into a shared coordinate system, subsequent methods maintain an implicit memory to achieve dense 3D reconstruction from more images. However, such implicit memory is limited in capacity and may suffer from information loss of earlier frames. We propose Point3R, an online framework targeting dense streaming 3D reconstruction. To be specific, we maintain an explicit spatial pointer memory directly associated with the 3D structure of the current scene. Each pointer in this memory is assigned a specific 3D position and aggregates scene information nearby in the global coordinate system into a changing spatial feature. Information extracted from the latest frame interacts explicitly with this pointer memory, enabling dense integration of the current observation into the global coordinate system. We design a 3D hierarchical position embedding to promote this interaction and design a simple yet effective fusion mechanism to ensure that our pointer memory is uniform and efficient. Our method achieves competitive or state-of-the-art performance on various tasks with low training costs. Code: https://github.com/YkiWu/Point3R.
|
https://openreview.net/forum?id=yk1iqV9Etr
|
Main
|
Poster
|
yk1iqV9Etr
|
PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding
|
[
"Ansel Blume",
"Jeonghwan Kim",
"Hyeonjeong Ha",
"Elen Chatikyan",
"Xiaomeng Jin",
"Khanh Duy Nguyen",
"Nanyun Peng",
"Kai-Wei Chang",
"Derek Hoiem",
"Heng Ji"
] |
Real-world objects are composed of distinctive, object-specific parts. Identifying these parts is key to performing fine-grained, compositional reasoning—yet, large multimodal models (LMMs) struggle to perform this seemingly straightforward task. In this work, we introduce PARTONOMY, an LMM benchmark designed for pixel-level part grounding. We construct PARTONOMY from existing part datasets and our own rigorously annotated set of images, encompassing 862 parts and 5346
objects for evaluation. Unlike existing datasets that simply ask models to identify generic parts, PARTONOMY utilizes highly technical concepts and challenges models to compare objects’ parts, consider part-whole relationships, and justify textual predictions with visual segmentations. Our experiments demonstrate significant limitations in state-of-the-art LMMs (e.g., LISA-13B achieves only 5.9% gIoU), highlighting a critical gap in their part grounding abilities. We note that existing segmentation-enabled LMMs (segmenting LMMs) have two key architectural shortcomings: they use special [SEG] tokens not seen during pretraining which induce distribution shift, and they discard predicted segmentations instead of using past predictions to guide future ones. To address these deficiencies, we train several part-centric LMMs and propose PLUM, a novel segmenting LMM that utilizes span tagging instead of segmentation tokens and that conditions on prior predictions in a feedback loop. We find that pretrained PLUM dominates existing segmenting LMMs on reasoning segmentation, VQA, and visual hallucination benchmarks. In addition, PLUM finetuned on our proposed Explanatory Part Segmentation task is competitive with segmenting LMMs trained on significantly more segmentation data. Our work opens up new avenues towards enabling fine-grained, grounded visual understanding in LMMs.
|
https://openreview.net/forum?id=yjLew3Nd7z
|
Main
|
Spotlight
|
yjLew3Nd7z
|
CLAWS: Creativity detection for LLM-generated solutions using Attention Window of Sections
|
[
"Keuntae Kim",
"Eunhye Jeong",
"Sehyeon Lee",
"Seohee Yoon",
"Yong Suk Choi"
] |
Recent advances in enhancing the reasoning ability of Large Language Models (LLMs) have been remarkably successful. LLMs trained with Reinforcement Learning (RL) for reasoning demonstrate strong performance in challenging tasks such as mathematics and coding, even with relatively small model sizes. However, despite these impressive improvements in task accuracy, the assessment of creativity in LLM generations has been largely overlooked in reasoning tasks, in contrast to writing tasks. The lack of research on creativity assessment in reasoning primarily stems from two challenges: (1) the difficulty of defining the range of creativity, and (2) the necessity of human evaluation in the assessment process. To address these challenges, we propose CLAWS, a novel method that defines and classifies mathematical solutions into Typical, Creative, and Hallucinated categories without human evaluation, by leveraging attention weights across prompt sections and output. CLAWS outperforms five existing white-box detection methods—Perplexity, Logit Entropy, Window Entropy, Hidden Score, and Attention Score—on five 7–8B math RL models (DeepSeek, Qwen, Mathstral, OpenMath2, and Oreal). We validate CLAWS on 4,545 math problems collected from 181 math contests (A(J)HSME, AMC, AIME). Our code is available at https://github.com/kkt94/CLAWS.
|
https://openreview.net/forum?id=yiSoT2pHfk
|
Main
|
Poster
|
yiSoT2pHfk
|
Bridging Equivariant GNNs and Spherical CNNs for Structured Physical Domains
|
[
"Colin Kohler",
"Purvik Patel",
"Nathan Vaska",
"Justin Goodwin",
"Matthew C. Jones",
"Robert Platt",
"Rajmonda S. Caceres",
"Robin Walters"
] |
Many modeling tasks from disparate domains can be framed the same way, computing spherical signals from geometric inputs, for example, computing the radar response of different objects or navigating through an environment. This paper introduces G2Sphere, a general method for mapping object geometries to spherical signals. G2Sphere operates entirely in Fourier space, encoding geometric structure into latent Fourier features using equivariant neural networks and outputting the Fourier coefficients of the continuous target signal, which can be evaluated at any resolution. By utilizing a hybrid GNN-spherical CNN architecture, our method achieves a much higher-frequency output signal than comparable equivariant GNNs and avoids the hand-engineered geometry features used previously by purely spherical methods. We perform experiments on various challenging domains including radar response modeling, aerodynamic drag prediction, and policy learning for manipulation and navigation. We find that G2Sphere outperforms competitive baselines in terms of accuracy and inference time, and we demonstrate that equivariance and Fourier features lead to improved sample efficiency and generalization. The source code is available at: https://github.com/ColinKohler/geometry2sphere.
|
https://openreview.net/forum?id=yh4DPshiWZ
|
Main
|
Poster
|
yh4DPshiWZ
|
Prior-Guided Flow Matching for Target-Aware Molecule Design with Learnable Atom Number
|
[
"Jingyuan Zhou",
"Hao Qian",
"Shikui Tu",
"Lei Xu"
] |
Structure-based drug design (SBDD), aiming to generate 3D molecules with high binding affinity toward target proteins, is a vital approach in novel drug discovery. Although recent generative models have shown great potential, they suffer from unstable probability dynamics and a mismatch between generated molecule size and protein pocket geometry, resulting in inconsistent quality and off-target effects. We propose PAFlow, a novel target-aware molecular generation model featuring prior interaction guidance and a learnable atom number predictor. PAFlow adopts the efficient flow matching framework to model the generation process and constructs a new form of conditional flow matching for discrete atom types. A protein–ligand interaction predictor is incorporated to guide the vector field toward higher-affinity regions during generation, while an atom number predictor based on protein pocket information is designed to better align generated molecule size with target geometry. Extensive experiments on the CrossDocked2020 benchmark show that PAFlow achieves a new state-of-the-art in binding affinity (up to -8.31 Avg. Vina Score) while simultaneously maintaining favorable molecular properties.
|
https://openreview.net/forum?id=yh1t1yFtXG
|
Main
|
Poster
|
yh1t1yFtXG
|
IGD: Token Decisiveness Modeling via Information Gain in LLMs for Personalized Recommendation
|
[
"Zijie Lin",
"Yang Zhang",
"Xiaoyan Zhao",
"Fengbin ZHU",
"Fuli Feng",
"Tat-Seng Chua"
] |
Large Language Models (LLMs) have shown strong potential for recommendation by framing item prediction as a token-by-token language generation task. However, existing methods treat all item tokens equally, simply pursuing likelihood maximization during both optimization and decoding. This overlooks crucial token-level differences in decisiveness—many tokens contribute little to item discrimination yet can dominate optimization or decoding.
To quantify token decisiveness, we propose a novel perspective that models item generation as a decision process, measuring token decisiveness by the Information Gain (IG) each token provides in reducing uncertainty about the generated item. Our empirical analysis reveals that most tokens have low IG but often correspond to high logits, disproportionately influencing training loss and decoding, which may impair model performance.
Building on these insights, we introduce an Information Gain-based Decisiveness-aware Token handling (IGD) strategy that integrates token decisiveness into both tuning and decoding. Specifically, IGD downweights low-IG tokens during tuning and rebalances decoding to emphasize tokens with high IG. In this way, IGD moves beyond pure likelihood maximization, effectively prioritizing high-decisiveness tokens. Extensive experiments on four benchmark datasets with two LLM backbones demonstrate that IGD consistently improves recommendation accuracy, achieving significant gains on widely used ranking metrics compared to strong baselines. Our codes are available at \url{https://github.com/ZJLin2oo1/IGD}.
|
https://openreview.net/forum?id=ygNaCTGUwJ
|
Main
|
Poster
|
ygNaCTGUwJ
|
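A toy sketch of measuring token decisiveness as Information Gain, following the abstract's framing of item generation as a decision process: the IG of a token is the drop in entropy over which item is being generated once that token is observed (the item vocabulary and tokenization here are illustrative):

```python
import math
from collections import Counter

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def token_information_gain(items: list, prefix: tuple, token) -> float:
    # IG of `token` at `prefix`: reduction in entropy over which item is
    # being generated, before vs. after observing the token.
    compat = [it for it in items if it[:len(prefix)] == prefix]
    before = entropy(Counter(compat))
    after_items = [it for it in compat
                   if len(it) > len(prefix) and it[len(prefix)] == token]
    after = entropy(Counter(after_items)) if after_items else 0.0
    return before - after

# Toy item vocabulary as token tuples: one decisive first token ('a' vs 'b'),
# then a shared filler token that carries no information.
items = [("a", "x"), ("b", "x")]
print(token_information_gain(items, (), "a"))      # high IG: identifies the item
print(token_information_gain(items, ("a",), "x"))  # zero IG: filler token
```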
CMoB: Modality Valuation via Causal Effect for Balanced Multimodal Learning
|
[
"Jun Wang",
"Fuyuan CAO",
"ZhixinXue",
"Xingwang Zhao",
"Jiye Liang"
] |
Existing early and late fusion frameworks in multimodal learning are confronted with the fundamental challenge of modality imbalance, wherein disparities in representational capacities induce inter-modal competition during training. Current research methodologies primarily rely on modality-level contribution assessments to measure gaps in representational capabilities and enhance poorly learned modalities, overlooking the dynamic variations of modality contributions across individual samples. To address this, we propose a Causal-aware Modality valuation approach for Balanced multimodal learning (CMoB). We define a benefit function based on Shannon's theory of informational uncertainty to evaluate the changes in the importance of samples across different stages of multimodal training. Inspired by human cognitive science, we propose a causal-aware modality contribution quantification method from a causal perspective to capture fine-grained changes in modality contribution degrees within samples. In the iterative training of multimodal learning, we develop targeted modal enhancement strategies that dynamically select and optimize modalities based on real-time evaluation of their contribution variations across training samples. Our method enhances the discriminative ability of key modalities and the learning capacity of weak modalities while achieving fine-grained balance in multimodal learning. Extensive experiments on benchmark multimodal datasets and multimodal frameworks demonstrate the superiority of our CMoB approach for balanced multimodal learning.
|
https://openreview.net/forum?id=ygHWfrwFmO
|
Main
|
Poster
|
ygHWfrwFmO
|
JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models
|
[
"Jiaxin Song",
"Yixu Wang",
"Jie Li",
"Xuan Tong",
"rui yu",
"Yan Teng",
"Xingjun Ma",
"Yingchun Wang"
] |
Vision-Language Models (VLMs) exhibit impressive performance, yet the integration of powerful vision encoders has significantly broadened their attack surface, rendering them increasingly susceptible to jailbreak attacks. However, existing jailbreak methods lack well-defined attack objectives: their gradient-based strategies are prone to local optima and lack precise directional guidance, and they typically decouple the visual and textual modalities, limiting their effectiveness by neglecting crucial cross-modal interactions. Inspired by the Eliciting Latent Knowledge (ELK) framework, we posit that VLMs encode safety-relevant information within their internal fusion-layer representations, revealing an implicit safety decision boundary in the latent space. This motivates exploiting this boundary to steer model behavior. Accordingly, we propose \textbf{JailBound}, a novel latent space jailbreak framework comprising two stages: (1) \textbf{Safety Boundary Probing}, which addresses the guidance issue by approximating the decision boundary within the fusion layer's latent space, thereby identifying optimal perturbation directions towards the target region; and (2) \textbf{Safety Boundary Crossing}, which overcomes the limitations of decoupled approaches by jointly optimizing adversarial perturbations across both image and text inputs. This latter stage employs an innovative mechanism to steer the model's internal state towards policy-violating outputs while maintaining cross-modal semantic consistency. Extensive experiments on six diverse VLMs demonstrate JailBound's efficacy, achieving average attack success rates of 94.32\% in the white-box setting and 67.28\% in the black-box setting, which are 6.17\% and 21.13\% higher than SOTA methods, respectively. Our findings expose an overlooked safety risk in VLMs and highlight the urgent need for more robust defenses. \textcolor{red}{Warning: This paper contains potentially sensitive, harmful and offensive content.}
|
https://openreview.net/forum?id=yg1yfaKolw
|
Main
|
Poster
|
yg1yfaKolw
|
Confusion-Driven Self-Supervised Progressively Weighted Ensemble Learning for Non-Exemplar Class Incremental Learning
|
[
"Kai Hu",
"Zhang Yu",
"Yuan Zhang",
"Zhineng Chen",
"Xieping Gao"
] |
Non-exemplar class incremental learning (NECIL) aims to continuously assimilate new knowledge while retaining previously acquired knowledge in scenarios where prior examples are unavailable. A prevalent strategy within NECIL mitigates knowledge forgetting by freezing the feature extractor after training on the initial task. However, this freezing mechanism does not provide explicit training to differentiate between new and old classes, resulting in overlapping feature representations. To address this challenge, we propose a **C**onfusion-driven se**L**f-supervised pr**O**gressi**V**ely weighted **E**nsemble lea**R**ning (*CLOVER*) framework for NECIL. Firstly, we introduce a confusion-driven self-supervised learning approach that enhances representation extraction by guiding the model to distinguish between highly confusable classes, thereby reducing class representation overlap. Secondly, we develop a progressively weighted ensemble learning method that gradually adjusts weights to integrate diverse knowledge more effectively, further minimizing representation overlap. Finally, extensive experiments demonstrate that our proposed method achieves state-of-the-art results on the CIFAR100, TinyImageNet, and ImageNet-Subset NECIL benchmarks.
|
https://openreview.net/forum?id=yflq8Bhjrw
|
Main
|
Poster
|
yflq8Bhjrw
|
Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning
|
[
"Shenzhi Wang",
"Le Yu",
"Chang Gao",
"Chujie Zheng",
"Shixuan Liu",
"Rui Lu",
"Kai Dang",
"Xiong-Hui Chen",
"Jianxin Yang",
"Zhenru Zhang",
"Yuqiong Liu",
"An Yang",
"Andrew Zhao",
"Yang Yue",
"Shiji Song",
"Bowen Yu",
"Gao Huang",
"Junyang Lin"
] |
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful approach to enhancing the reasoning capabilities of Large Language Models (LLMs), yet its underlying mechanisms remain insufficiently understood. In this work, we undertake a pioneering exploration of RLVR through the novel perspective of token entropy patterns, comprehensively analyzing how different tokens influence reasoning performance. By examining token entropy patterns in Chain-of-Thought (CoT) reasoning, we observe that only a small fraction (approximately 20\%) of tokens exhibit high entropy, and these tokens semantically act as critical forks that steer the model toward diverse reasoning pathways. We further demonstrate that moderately increasing the entropy of these high-entropy tokens via decoding temperature adjustments leads to improved performance, quantitatively confirming their role as decision points in reasoning. We ultimately refine RLVR by restricting policy gradient updates to these forking tokens. Despite utilizing only 20\% of tokens, our approach achieves comparable performance to full-gradient updates on the Qwen3-8B base model. Moreover, it demonstrates remarkable improvements on the larger Qwen3-32B base model, boosting AIME'25 scores by 11.04 and AIME'24 scores by 7.71. In contrast, training exclusively on the 80\% lowest-entropy tokens leads to a marked decline in performance. These findings indicate that the efficacy of RLVR primarily arises from optimizing the high-entropy tokens that dictate key reasoning directions. Collectively, our results suggest promising avenues for optimizing RLVR algorithms by strategically leveraging the potential of these high-entropy minority tokens to further enhance the reasoning abilities of LLMs.
|
https://openreview.net/forum?id=yfcpdY4gMP
|
Main
|
Poster
|
yfcpdY4gMP
|
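A minimal PyTorch sketch of the training recipe described above: keep only the top 20% highest-entropy ("forking") tokens in the policy-gradient update (the thresholding granularity and exact RLVR objective are simplified):

```python
import torch

def forking_token_mask(logits: torch.Tensor, top_fraction: float = 0.2) -> torch.Tensor:
    # logits: (seq_len, vocab_size). Entropy of each token's sampling
    # distribution; the top `top_fraction` highest-entropy positions are
    # treated as "forking" tokens.
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    threshold = torch.quantile(entropy, 1.0 - top_fraction)
    return entropy >= threshold

def masked_pg_loss(logp_actions: torch.Tensor, advantages: torch.Tensor,
                   mask: torch.Tensor) -> torch.Tensor:
    # Policy-gradient loss restricted to forking tokens only.
    m = mask.to(logp_actions.dtype)
    return -(m * logp_actions * advantages).sum() / m.sum().clamp_min(1.0)
```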
Towards a Golden Classifier-Free Guidance Path via Foresight Fixed Point Iterations
|
[
"Kaibo Wang",
"Jianda Mao",
"Tong Wu",
"Yang Xiang"
] |
Classifier-Free Guidance (CFG) is an essential component of text-to-image diffusion models, and understanding and advancing its operational mechanisms remains a central focus of research. Existing approaches stem from divergent theoretical interpretations, thereby limiting the design space and obscuring key design choices. To address this, we propose a unified perspective that reframes conditional guidance as fixed point iterations, seeking to identify a golden path where latents produce consistent outputs under both conditional and unconditional generation. We demonstrate that CFG and its variants constitute a special case of single-step short-interval iteration, which is theoretically proven to exhibit inefficiency. To this end, we introduce Foresight Guidance (FSG), which prioritizes solving longer-interval subproblems in early diffusion stages with increased iterations. Extensive experiments across diverse datasets and model architectures validate the superiority of FSG over state-of-the-art methods in both image quality and computational efficiency. Our work offers novel perspectives for conditional guidance and unlocks the potential of adaptive design.
|
https://openreview.net/forum?id=yf8O4xEB4T
|
Main
|
Spotlight
|
yf8O4xEB4T
|
Enforcing convex constraints in Graph Neural Networks
|
[
"Ahmed Rashwan",
"Keith Briggs",
"Chris Budd",
"Lisa Maria Kreusser"
] |
Many machine learning applications require outputs that satisfy complex, dynamic constraints. This task is particularly challenging in Graph Neural Network models due to the variable output sizes of graph-structured data. In this paper, we introduce ProjNet, a Graph Neural Network framework which satisfies input-dependent constraints. ProjNet combines a sparse vector clipping method with the Component-Averaged Dykstra (CAD) algorithm, an iterative scheme for solving the best-approximation problem. We establish a convergence result for CAD and develop a GPU-accelerated implementation capable of handling large-scale inputs efficiently. To enable end-to-end training, we introduce a surrogate gradient for CAD that is both computationally efficient and better suited for optimization than the exact gradient. We validate ProjNet on four classes of constrained optimization problems: linear programming, two classes of non-convex quadratic programs, and radio transmit power optimization, demonstrating its effectiveness across diverse problem settings.
|
https://openreview.net/forum?id=yeyaKpaufr
|
Main
|
Poster
|
yeyaKpaufr
|
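ProjNet's CAD component builds on Dykstra's algorithm for the best-approximation problem; a minimal NumPy sketch of plain (non-component-averaged) Dykstra is below, projecting onto an intersection of convex sets given their Euclidean projectors:

```python
import numpy as np

def dykstra(x0: np.ndarray, projections, n_iter: int = 200) -> np.ndarray:
    # Dykstra's algorithm: finds the point in the intersection of convex
    # sets closest to x0, cycling through the projectors with correction
    # terms (unlike plain alternating projections, which need not return
    # the closest point).
    x = x0.copy()
    corrections = [np.zeros_like(x0) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            y = proj(x + corrections[i])
            corrections[i] = x + corrections[i] - y
            x = y
    return x

# Example: project onto {x >= 0} intersected with {sum(x) = 1} (a simplex).
proj_nonneg = lambda v: np.maximum(v, 0.0)
proj_affine = lambda v: v + (1.0 - v.sum()) / v.size
x = dykstra(np.array([0.9, -0.3, 0.8]), [proj_nonneg, proj_affine])
print(x, x.sum())
```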
Probing Neural Combinatorial Optimization Models
|
[
"Zhiqin Zhang",
"Yining Ma",
"Zhiguang Cao",
"Hoong Chuin Lau"
] |
Neural combinatorial optimization (NCO) has achieved remarkable performance, yet its learned model representations and decision rationale remain a black box. This impedes both academic research and practical deployment, since researchers and stakeholders require deeper insights into NCO models. In this paper, we take the first critical step towards interpreting NCO models by investigating their representations through various probing tasks. Moreover, we introduce a novel probing tool named Coefficient Significance Probing (CS-Probing) to enable deeper analysis of NCO representations by examining the coefficients and statistical significance during probing. Extensive experiments and analysis reveal that NCO models encode low-level information essential for solution construction, while capturing high-level knowledge to facilitate better decisions. Using CS-Probing, we find that prevalent NCO models impose varying inductive biases on their learned representations, uncover direct evidence related to model generalization, and identify key embedding dimensions associated with specific knowledge. These insights can be potentially translated into practice, for example, with minor code modifications, we improve the generalization of the analyzed model. Our work represents a first systematic attempt to interpret black-box NCO models, showcasing probing as a promising tool for analyzing their internal mechanisms and revealing insights for the NCO community. The source code is publicly available.
|
https://openreview.net/forum?id=ycnc9aLnQu
|
Main
|
Spotlight
|
ycnc9aLnQu
|
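The coefficient-and-significance style of probing described above can be illustrated with an ordinary linear probe; the sketch below uses statsmodels OLS on synthetic stand-in embeddings, flagging dimensions whose coefficients are statistically significant for the probed property:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 16))   # frozen NCO embeddings (synthetic stand-in)
# Probed property: here driven by embedding dimension 3 plus noise.
target = emb[:, 3] * 2.0 + rng.normal(scale=0.1, size=500)

X = sm.add_constant(emb)
probe = sm.OLS(target, X).fit()

# Coefficient magnitude plus p-value identify embedding dimensions that
# significantly encode the probed property (dimension 3 in this toy case).
significant = [i - 1 for i, p in enumerate(probe.pvalues) if i > 0 and p < 0.01]
print("significant dims:", significant)
```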
Segment then Splat: Unified 3D Open-Vocabulary Segmentation via Gaussian Splatting
|
[
"Yiren Lu",
"Yunlai Zhou",
"Yiran Qiao",
"Chaoda Song",
"Tuo Liang",
"Jing Ma",
"Huan Wang",
"Yu Yin"
] |
Open-vocabulary querying in 3D space is crucial for enabling more intelligent perception in applications such as robotics, autonomous systems, and augmented reality. However, most existing methods rely on 2D pixel-level parsing, leading to multi-view inconsistencies and poor 3D object retrieval. Moreover, they are limited to static scenes and struggle with dynamic scenes due to the complexities of motion modeling.
In this paper, we propose Segment then Splat, a 3D-aware open vocabulary segmentation approach for both static and dynamic scenes based on Gaussian Splatting.
Segment then Splat reverses the long-established approach of "segmentation after reconstruction" by dividing Gaussians into distinct object sets before reconstruction. Once reconstruction is complete, the scene is naturally segmented into individual objects, achieving true 3D segmentation. This design eliminates both geometric and semantic ambiguities, as well as Gaussian–object misalignment issues in dynamic scenes. It also accelerates the optimization process, as it eliminates the need to learn a separate language field.
After optimization, a CLIP embedding is assigned to each object to enable open-vocabulary querying. Extensive experiments on various datasets demonstrate the effectiveness of our proposed method in both static and dynamic scenarios.
|
https://openreview.net/forum?id=ycPVp0577R
|
Main
|
Poster
|
ycPVp0577R
|
MetaDefense: Defending Fine-tuning based Jailbreak Attack Before and During Generation
|
[
"Weisen Jiang",
"Sinno Jialin Pan"
] |
This paper introduces MetaDefense, a novel framework for defending against finetuning-based jailbreak attacks in large language models (LLMs).
We observe that existing defense mechanisms fail to generalize to harmful queries disguised by unseen attack templates, despite LLMs being capable of distinguishing disguised harmful queries in the embedding space.
Based on these insights, we propose a two-stage defense approach:
(i) pre-generation defense that detects harmful queries before response generation begins, and (ii) mid-generation defense that monitors partial responses during generation to prevent outputting more harmful content.
Our MetaDefense trains the LLM to predict the harmfulness of both queries and partial responses using specialized prompts, enabling early termination of potentially harmful interactions.
Extensive experiments across multiple LLM architectures (LLaMA-2-7B, Qwen-2.5-3B-Instruct, and LLaMA-3.2-3B-Instruct) demonstrate that MetaDefense significantly outperforms existing defense mechanisms, achieving robust defense against harmful queries with seen and unseen attack templates while maintaining competitive performance on benign tasks.
Code is available at [https://github.com/ws-jiang/MetaDefense](https://github.com/ws-jiang/MetaDefense).
|
https://openreview.net/forum?id=ycMpNwzUAA
|
Main
|
Poster
|
ycMpNwzUAA
|
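A schematic of the two-stage defense loop described above, with a trivial keyword heuristic standing in for the paper's specialized harmfulness-prediction prompts (function names, the threshold, and the check frequency are all assumptions):

```python
# Schematic of MetaDefense's two checkpoints. `harmful_prob` is a toy
# stand-in for the paper's harmfulness-prediction prompts; `generate_chunk`
# is any callable that decodes a few tokens given (query, partial response).

REFUSAL = "I cannot help with that."

def harmful_prob(text: str) -> float:
    # Hypothetical stand-in; the real defense queries the tuned LLM itself.
    return 1.0 if "attack" in text.lower() else 0.0

def guarded_generate(generate_chunk, query: str, max_chunks: int = 16,
                     tau: float = 0.5) -> str:
    # (i) Pre-generation defense: screen the query before decoding starts.
    if harmful_prob(query) > tau:
        return REFUSAL
    response = ""
    for _ in range(max_chunks):
        chunk = generate_chunk(query, response)
        if not chunk:
            break
        response += chunk
        # (ii) Mid-generation defense: screen the growing partial response.
        if harmful_prob(query + " " + response) > tau:
            return REFUSAL
    return response
```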
Coresets for Clustering Under Stochastic Noise
|
[
"Lingxiao Huang",
"Zhize Li",
"Nisheeth K. Vishnoi",
"Runkai Yang",
"Haoyu Zhao"
] |
We study the problem of constructing coresets for $(k, z)$-clustering when the input dataset is corrupted by stochastic noise drawn from a known distribution. In this setting, evaluating the quality of a coreset is inherently challenging, as the true underlying dataset is unobserved. To address this, we investigate coreset construction using surrogate error metrics that are tractable and provably related to the true clustering cost. We analyze a traditional metric from prior work and introduce a new error metric that more closely aligns with the true cost. Although our metric is defined independently of the noise distribution, it enables approximation guarantees that scale with the noise level. We design a coreset construction algorithm based on this metric and show that, under mild assumptions on the data and noise, enforcing an $\varepsilon$-bound under our metric yields smaller coresets and tighter guarantees on the true clustering cost than those obtained via classical metrics. In particular, we prove that the coreset size can improve by a factor of up to $\mathrm{poly}(k)$. Experiments on real-world datasets support our theoretical findings and demonstrate the practical advantages of our approach.
|
https://openreview.net/forum?id=ycCi4SkzPH
|
Main
|
Poster
|
ycCi4SkzPH
|
Beyond Pairwise Connections: Extracting High-Order Functional Brain Network Structures under Global Constraints
|
[
"Ling Zhan",
"Junjie Huang",
"Xiaoyao Yu",
"Wenyu Chen",
"Tao Jia"
] |
Functional brain network (FBN) modeling often relies on local pairwise interactions, whose limitation in capturing high-order dependencies is theoretically analyzed in this paper. Meanwhile, the computational burden and heuristic nature of current hypergraph modeling approaches hinder end-to-end learning of FBN structures directly from data distributions. To address this, we propose to extract high-order FBN structures under global constraints, and implement this as a Global Constraints oriented Multi-resolution (GCM) FBN structure learning framework. It incorporates 4 types of global constraint (signal synchronization, subject identity, expected edge numbers, and data labels) to enable learning FBN structures for 4 distinct levels (sample/subject/group/project) of modeling resolution. Experimental results demonstrate that GCM achieves up to a 30.6% improvement in relative accuracy and a 96.3% reduction in computational time across 5 datasets and 2 task settings, compared to 9 baselines and 10 state-of-the-art methods. Extensive experiments validate the contributions of individual components and highlight the interpretability of GCM. This work offers a novel perspective on FBN structure learning and provides a foundation for interdisciplinary applications in cognitive neuroscience. Code is publicly available on https://github.com/lzhan94swu/GCM.
|
https://openreview.net/forum?id=ybH0avRV4n
|
Main
|
Poster
|
ybH0avRV4n
|
Transferable Black-Box One-Shot Forging of Watermarks via Image Preference Models
|
[
"Tomas Soucek",
"Sylvestre-Alvise Rebuffi",
"Pierre Fernandez",
"Nikola Jovanović",
"Hady Elsahar",
"Valeriu Lacatusu",
"Tuan A. Tran",
"Alexandre Mourachko"
] |
Recent years have seen a surge in interest in digital content watermarking techniques, driven by the proliferation of generative models and increased legal pressure. With an ever-growing percentage of AI-generated content available online, watermarking plays an increasingly important role in ensuring content authenticity and attribution at scale. Many works have assessed the robustness of watermarking to removal attacks; yet watermark forging, the scenario in which a watermark is stolen from genuine content and applied to malicious content, remains underexplored. In this work, we investigate watermark forging in the context of widely used post-hoc image watermarking. Our contributions are as follows. First, we introduce a preference model to assess whether an image is watermarked. The model is trained using a ranking loss on purely procedurally generated images without any need for real watermarks. Second, we demonstrate the model's capability to remove and forge watermarks by optimizing the input image through backpropagation. This technique requires only a single watermarked image and works without knowledge of the watermarking model, making our attack much simpler and more practical than attacks introduced in related work. Third, we evaluate our proposed method on a variety of post-hoc image watermarking models, demonstrating that our approach can effectively forge watermarks, questioning the security of current watermarking approaches. Our code and further resources are publicly available.
|
https://openreview.net/forum?id=yb5JOOmfxA
|
Main
|
Spotlight
|
yb5JOOmfxA
|
Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following
|
[
"Vivek Myers",
"Bill Zheng",
"Anca Dragan",
"Kuan Fang",
"Sergey Levine"
] |
Effective task representations should facilitate compositionality, such that after learning a variety of basic tasks, an agent can perform compound tasks consisting of multiple steps simply by composing the representations of the constituent steps together. While this is conceptually simple and appealing, it is not clear how to automatically learn representations that enable this sort of compositionality. We show that learning to associate the representations of current and future states with a temporal alignment loss can improve compositional generalization, even in the absence of any explicit subtask planning or reinforcement learning. We evaluate our approach across diverse robotic manipulation tasks as well as in simulation, showing substantial improvements for tasks specified with either language or goal images.
|
https://openreview.net/forum?id=yaS3JWQRQ6
|
Main
|
Poster
|
yaS3JWQRQ6
|
Optimizing Retrieval for RAG via Reinforced Contrastive Learning
|
[
"Jiawei Zhou",
"Lei Chen"
] |
As retrieval-augmented generation (RAG) becomes increasingly widespread, the role of information retrieval (IR) is shifting from retrieving information for human users to retrieving contextual knowledge for artificial intelligence (AI) systems, where relevance becomes difficult to define or annotate beforehand. To address this challenge, we propose R3, a Retrieval framework optimized for RAG through trial-and-feedback Reinforced contrastive learning. Unlike prior approaches that rely on annotated or synthetic data for supervised fine-tuning, R3 enables the retriever to dynamically explore and optimize relevance within the RAG environment. During training, the retrieved results interact with the environment to produce contrastive signals that automatically guide the retriever’s self-improvement. Extensive experiments across diverse tasks demonstrate that R3 improves RAG performance by 5.2% over the original retriever and surpasses state-of-the-art retrievers by 4.9%, while achieving comparable results to LLM-augmented retrieval and RAG systems built on post-trained or instruction-tuned LLMs. It is both efficient and practical, requiring only 4 GPUs and completing training within a single day.
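The abstract specifies trial-and-feedback contrastive signals but not the loss itself. Below is a minimal sketch of one plausible instantiation, where downstream RAG feedback (e.g., answer correctness) reweights an InfoNCE-style objective over the retrieved passages; the names and the reward-weighting scheme are assumptions for illustration, not R3's actual formulation.

```python
# Hedged sketch: reward-weighted contrastive update for a retriever.
import torch
import torch.nn.functional as F

def reinforced_contrastive_loss(q_emb, passage_embs, rewards, tau=0.05):
    """q_emb: (d,) query embedding; passage_embs: (n, d) retrieved passages;
    rewards: (n,) feedback from the RAG environment in [0, 1]. Passages that
    helped the generator act as soft positives, the rest as negatives."""
    sims = passage_embs @ q_emb / tau                 # (n,) similarities
    log_p = F.log_softmax(sims, dim=0)
    w = rewards / rewards.sum().clamp(min=1e-8)       # normalize the feedback
    return -(w * log_p).sum()                         # reward-weighted InfoNCE
```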
|
https://openreview.net/forum?id=yZzhaHygWW
|
Main
|
Poster
|
yZzhaHygWW
|
TRiCo: Triadic Game-Theoretic Co-Training for Robust Semi-Supervised Learning
|
[
"Hongyang He",
"Xinyuan Song",
"Yangfan He",
"Zeyu Zhang",
"Yanshu Li",
"Haochen You",
"Lifan Sun",
"Wenqiao Zhang"
] |
We introduce TRiCo, a novel triadic game-theoretic co-training framework that rethinks the structure of semi-supervised learning by incorporating a teacher, two students, and an adversarial generator into a unified training paradigm. Unlike existing co-training or teacher-student approaches, TRiCo formulates SSL as a structured interaction among three roles: (i) two student classifiers trained on frozen, complementary representations, (ii) a meta-learned teacher that adaptively regulates pseudo-label selection and loss balancing via validation-based feedback, and (iii) a non-parametric generator that perturbs embeddings to uncover decision boundary weaknesses. Pseudo-labels are selected based on mutual information rather than confidence, providing a more robust measure of epistemic uncertainty. This triadic interaction is formalized as a Stackelberg game, where the teacher leads strategy optimization and students follow under adversarial perturbations. By addressing key limitations in existing SSL frameworks—such as static view interactions, unreliable pseudo-labels, and lack of hard sample modeling—TRiCo provides a principled and generalizable solution. Extensive experiments on CIFAR-10, SVHN, STL-10, and ImageNet demonstrate that TRiCo consistently achieves state-of-the-art performance in low-label regimes, while remaining architecture-agnostic and compatible with frozen vision backbones.
|
https://openreview.net/forum?id=yZy6f3icew
|
Main
|
Poster
|
yZy6f3icew
|
FuncGenFoil: Airfoil Generation and Editing Model in Function Space
|
[
"Jinouwen Zhang",
"Junjie Ren",
"Qianhong Ma",
"Jianyu Wu",
"Aobo Yang",
"Yan Lu",
"Lu Chen",
"Hairun Xie",
"Jing Wang",
"Miao Zhang",
"Wanli Ouyang",
"SHIXIANG TANG"
] |
Aircraft manufacturing is the jewel in the crown of modern industry, yet generating high-fidelity airfoil geometries with controllable and editable representations remains a fundamental challenge. Existing deep learning methods, which typically rely on predefined parametric representations (e.g., Bézier curves) or discrete point sets, face an inherent trade-off between expressive power and resolution adaptability. To tackle this challenge, we introduce FuncGenFoil, a novel function-space generative model that directly reconstructs airfoil geometries as function curves. Our method inherits the advantages of arbitrary-resolution sampling and smoothness from parametric functions, as well as the strong expressiveness of discrete point-based representations. Empirical evaluations demonstrate that FuncGenFoil improves upon state-of-the-art methods in airfoil generation, achieving a relative 74.4% reduction in label error and a 23.2% increase in diversity on the AF-200K dataset. Our results highlight the advantages of function-space modeling for aerodynamic shape optimization, offering a powerful and flexible framework for high-fidelity airfoil design.
|
https://openreview.net/forum?id=yXitkQJmpj
|
Main
|
Poster
|
yXitkQJmpj
|
Reviving DSP for Advanced Theorem Proving in the Era of Reasoning Models
|
[
"Chenrui Cao",
"Liangcheng Song",
"Zenan Li",
"Xinyi Le",
"Xian Zhang",
"HUI XUE",
"Fan Yang"
] |
Recent advancements, such as DeepSeek-Prover-V2-671B and Kimina-Prover-Preview-72B, demonstrate a prevailing trend in leveraging reinforcement learning (RL)-based large-scale training for automated theorem proving. Surprisingly, we discover that even without any training, careful neuro-symbolic coordination of existing off-the-shelf reasoning models and tactic step provers can achieve comparable performance. This paper introduces DSP+, an improved version of the Draft, Sketch, and Prove framework, featuring a fine-grained and integrated neuro-symbolic enhancement for each phase: (1) In the draft phase, we prompt reasoning models to generate concise natural-language subgoals to benefit the sketch phase, removing thinking tokens and references to human-written proofs; (2) In the sketch phase, subgoals are autoformalized with hypotheses to benefit the proving phase, and sketch lines containing syntactic errors are masked according to predefined rules; (3) In the proving phase, we tightly integrate symbolic search methods like Aesop with step provers to establish proofs for the sketch subgoals. Experimental results show that, without any additional model training or fine-tuning, DSP+ solves 80.7%, 32.8%, and 24 out of 644 problems from miniF2F, ProofNet, and PutnamBench, respectively, while requiring a smaller budget than state-of-the-art systems. DSP+ proves imo_2019_p1, an IMO problem in miniF2F that is not solved by any prior work. Additionally, DSP+ generates proof patterns comprehensible to human experts, facilitating the identification of formalization errors; for example, eight wrongly formalized statements in miniF2F are discovered. Our results highlight the potential of classical reasoning patterns beyond RL-based training. All components will be open-sourced.
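As a rough illustration of the three-phase control flow described above, here is a hedged Python sketch; the four callables are hypothetical stand-ins for the reasoning model, the autoformalizer, the rule-based syntax check, and the Aesop-integrated step prover, and none of this reproduces DSP+'s actual prompts or rules.

```python
def dsp_plus(problem, draft_fn, formalize_fn, syntax_ok_fn, prove_fn):
    """Sketch of a Draft-Sketch-Prove loop in the spirit of DSP+.
    draft_fn(problem) -> list of concise natural-language subgoals
    formalize_fn(subgoal, context) -> candidate formal sketch line (str)
    syntax_ok_fn(line) -> bool, a predefined rule-based check
    prove_fn(line) -> proof object or None (symbolic search + step prover)
    """
    subgoals = draft_fn(problem)                      # Draft phase
    sketch = []
    for g in subgoals:                                # Sketch phase
        line = formalize_fn(g, context=sketch)
        # Mask syntactically broken lines instead of discarding the sketch.
        sketch.append(line if syntax_ok_fn(line) else "sorry -- masked line")
    proofs = [prove_fn(line) for line in sketch       # Prove phase
              if not line.startswith("sorry")]
    return sketch, proofs
```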
|
https://openreview.net/forum?id=yTFJmGFsEy
|
Main
|
Poster
|
yTFJmGFsEy
|
Holistic Large-Scale Scene Reconstruction via Mixed Gaussian Splatting
|
[
"Chuandong Liu",
"Huijiao Wang",
"Lei YU",
"Gui-Song Xia"
] |
Recent advances in 3D Gaussian Splatting have shown remarkable potential for novel view synthesis. However, most existing large-scale scene reconstruction methods rely on the divide-and-conquer paradigm, which often leads to the loss of global scene information and requires complex parameter tuning due to scene partitioning and local optimization. To address these limitations, we propose MixGS, a novel holistic optimization framework for large-scale 3D scene reconstruction. MixGS models the entire scene holistically by integrating camera pose and Gaussian attributes into a view-aware representation, which is decoded into fine-detailed Gaussians. Furthermore, a novel mixing operation combines decoded and original Gaussians to jointly preserve global coherence and local fidelity. Extensive experiments on large-scale scenes demonstrate that MixGS achieves state-of-the-art rendering quality and competitive speed, while significantly reducing computational requirements, enabling large-scale scene reconstruction training on a single 24GB VRAM GPU.
|
https://openreview.net/forum?id=yT8v2QFv5w
|
Main
|
Poster
|
yT8v2QFv5w
|
Execution Guided Line-by-Line Code Generation
|
[
"Boaz Lavon",
"Shahar Katz",
"Lior Wolf"
] |
We present a novel approach to neural code generation that incorporates real-time execution signals into the language model generation process. While large language models (LLMs) have demonstrated impressive code generation capabilities, they typically do not utilize execution feedback during inference, a critical signal that human programmers regularly leverage. Our method, Execution-Guided Classifier-Free Guidance (EG-CFG), dynamically incorporates execution signals as the model generates code, providing line-by-line feedback that guides the generation process toward executable solutions.
EG-CFG employs a multi-stage process: first, we conduct beam search to sample candidate program completions for each line; second, we extract execution signals by executing these candidates against test cases; and finally, we incorporate these signals into the prompt during generation. By maintaining consistent signals across tokens within the same line and refreshing signals at line boundaries, our approach provides coherent guidance while preserving syntactic structure. Moreover, the method naturally supports native parallelism at the task level in which multiple agents operate in parallel, exploring diverse reasoning paths and collectively generating a broad set of candidate solutions.
Our experiments across diverse coding tasks demonstrate that EG-CFG significantly improves code generation performance compared to standard approaches, achieving state-of-the-art results across various levels of complexity, from foundational problems to challenging competitive programming and data science tasks.
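A minimal sketch of the guidance step itself, assuming a Hugging-Face-style causal LM and greedy decoding; the paper additionally runs beam search over line candidates and refreshes signals at line boundaries, and `gamma` and the exact combination rule here are illustrative.

```python
# Hedged sketch of one execution-guided CFG decoding step.
import torch

@torch.no_grad()
def eg_cfg_step(model, ids_plain, ids_with_exec, gamma=1.5):
    """ids_plain: prompt without execution feedback; ids_with_exec: the same
    prompt augmented with execution signals for the current line (both
    LongTensors of shape (1, seq)). Assumes an HF-style model with .logits."""
    logits_u = model(ids_plain).logits[:, -1, :]       # unguided distribution
    logits_c = model(ids_with_exec).logits[:, -1, :]   # execution-conditioned
    guided = logits_u + gamma * (logits_c - logits_u)  # CFG combination
    return guided.argmax(dim=-1)                       # next token (greedy)
```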
|
https://openreview.net/forum?id=ySFDPoiANu
|
Main
|
Poster
|
ySFDPoiANu
|
SWE-SQL: Illuminating LLM Pathways to Solve User SQL Issues in Real-World Applications
|
[
"Jinyang Li",
"Xiaolong Li",
"Ge Qu",
"Per Jacobsson",
"Bowen Qin",
"Binyuan Hui",
"Shuzheng Si",
"Nan Huo",
"Xiaohan Xu",
"Yue Zhang",
"Ziwei Tang",
"Yuanshuai Li",
"Florensia Widjaja",
"Xintong Zhu",
"Feige Zhou",
"Yongfeng Huang",
"Yannis Papakonstantinou",
"Fatma Ozcan",
"Chenhao Ma",
"Reynold Cheng"
] |
Resolution of complex SQL issues persists as a significant bottleneck in real-world database applications. Current Large Language Models (LLMs), while adept at text-to-SQL translation, have not been rigorously evaluated on the more challenging task of debugging SQL issues. To address this gap, we introduce **BIRD-CRITIC**, a new SQL issue debugging benchmark comprising 530 carefully curated PostgreSQL tasks (**BIRD-CRITIC-PG**) and 570 multi-dialect tasks (**BIRD-CRITIC-Multi**), which are distilled from authentic user issues and replayed within new environments to facilitate rigorous and contamination-free evaluation. Baseline evaluations on BIRD-CRITIC underscore the task's complexity, with the leading reasoning model **O3-Mini** achieving only a 38.87% success rate on **BIRD-CRITIC-PG** and 33.33% on **BIRD-CRITIC-Multi**. Meanwhile, developing capable open-source models for database tasks is crucial, as they can empower local development while safeguarding data privacy. Therefore, we present **Six-Gym** (**S**ql-f**IX**-Gym), a training environment for elevating the capabilities of open-source models specifically for SQL issue debugging. This environment leverages the **SQL-Rewind** strategy, which automatically generates executable issue-solution datasets by reverse-engineering issues from verified SQLs. However, popular trajectory-based fine-tuning methods do not fully exploit the available supervisory signals. We further propose *f*-Plan Boosting, which extracts high-level debugging plans automatically from SQL solutions, enabling the teacher LLMs to harvest and produce 73.7% more successful trajectories for training. We integrate these components into an open-source agent, **BIRD-Fixer**. Based on Qwen-2.5-Coder-14B, **BIRD-Fixer** raises its success rate to 38.11% on **BIRD-CRITIC-PG** and 29.65% on **BIRD-CRITIC-Multi**, surpassing many leading proprietary models such as Claude-3.7-Sonnet and GPT-4.1, marking a significant step toward democratizing sophisticated SQL-debugging capabilities for both research and industry.
|
https://openreview.net/forum?id=yRxXTdElLv
|
Main
|
Poster
|
yRxXTdElLv
|
Evaluating the Inductive Abilities of Large Language Models: Why Chain-of-Thought Reasoning Sometimes Hurts More Than Helps
|
[
"Haibo Jin",
"Peiyan Zhang",
"Man Luo",
"Haohan Wang"
] |
Large Language Models (LLMs) have shown remarkable progress across domains, yet their ability to perform inductive reasoning, inferring latent rules from sparse examples, remains limited. It is often assumed that chain-of-thought (CoT) prompting, as used in Large Reasoning Models (LRMs), enhances such reasoning. We investigate this assumption by creating four controlled, diagnostic game-based tasks (chess, Texas Hold’em, dice games, and blackjack) with hidden human-defined rules. We find that CoT reasoning can degrade inductive performance, with LRMs often underperforming their non-reasoning counterparts. To explain this, we present a theoretical framework that reveals how reasoning steps can amplify error through three failure modes: incorrect sub-task decomposition, incorrect sub-task solving, and incorrect final answer summarization. Based on our theoretical and empirical analysis, we introduce structured interventions that adapt CoT generation according to our identified failure types. These interventions improve inductive accuracy without retraining. Our findings suggest that effective CoT reasoning depends not only on taking more steps but also on ensuring those steps are well-structured.
|
https://openreview.net/forum?id=yRxX01oRIi
|
Main
|
Poster
|
yRxX01oRIi
|
DNA-DetectLLM: Unveiling AI-Generated Text via a DNA-Inspired Mutation-Repair Paradigm
|
[
"Xiaowei Zhu",
"Yubing Ren",
"Fang Fang",
"Qingfeng Tan",
"Shi Wang",
"Yanan Cao"
] |
The rapid advancement of large language models (LLMs) has blurred the line between AI-generated and human-written text. This progress brings societal risks such as misinformation, authorship ambiguity, and intellectual property concerns, highlighting the urgent need for reliable AI-generated text detection methods. However, recent advances in generative language modeling have resulted in significant overlap between the feature distributions of human-written and AI-generated text, blurring classification boundaries and making accurate detection increasingly challenging. To address the above challenges, we propose a DNA-inspired perspective, leveraging a repair-based process to directly and interpretably capture the intrinsic differences between human-written and AI-generated text. Building on this perspective, we introduce **DNA-DetectLLM**, a zero-shot detection method for distinguishing AI-generated and human-written text. The method constructs an ideal AI-generated sequence for each input, iteratively repairs non-optimal tokens, and quantifies the cumulative repair effort as an interpretable detection signal. Empirical evaluations demonstrate that our method achieves state-of-the-art detection performance and exhibits strong robustness against various adversarial attacks and input lengths. Specifically, DNA-DetectLLM achieves relative improvements of **5.55\%** in AUROC and **2.08\%** in F1 score across multiple public benchmark datasets. Code and data are available at https://github.com/Xiaoweizhu57/DNA-DetectLLM.
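A minimal sketch of a mutation-repair style score, assuming a Hugging-Face-style causal LM: it collapses the paper's iterative repair into a single pass that accumulates, per token, the gap between the observed token's log-probability and the locally optimal one, so lower totals point toward AI-generated text. Thresholds and the exact repair schedule are left to the caller.

```python
import torch

@torch.no_grad()
def repair_score(model, input_ids):
    """input_ids: LongTensor (B, T). Returns cumulative 'repair effort' per
    sequence; a single-pass approximation of iterative token repair."""
    logits = model(input_ids).logits[:, :-1, :]     # position t predicts t+1
    logp = logits.log_softmax(-1)
    targets = input_ids[:, 1:]
    tok_lp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    best_lp = logp.max(-1).values                   # locally optimal choice
    return (best_lp - tok_lp).sum(-1)               # total repair effort
```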
|
https://openreview.net/forum?id=yQoHUijSHx
|
Main
|
Spotlight
|
yQoHUijSHx
|
Fair Cooperation in Mixed-Motive Games via Conflict-Aware Gradient Adjustment
|
[
"Woojun Kim",
"Katia P. Sycara"
] |
Multi-agent reinforcement learning in mixed-motive settings presents a fundamental challenge: agents must balance individual interests with collective goals, which are neither fully aligned nor strictly opposed. To address this, reward restructuring methods such as gifting and intrinsic motivation have been proposed. However, these approaches primarily focus on promoting cooperation by managing the trade-off between individual and collective returns, without explicitly addressing fairness with respect to agents’ task-specific rewards. In this paper, we propose an adaptive conflict-aware gradient adjustment method that promotes cooperation while ensuring fairness in individual rewards. The proposed method dynamically balances policy gradients derived from individual and collective objectives in situations where the two objectives are in conflict. By explicitly resolving such conflicts, our method improves collective performance while preserving fairness across agents. We provide theoretical results that guarantee monotonic non-decreasing improvement in both the collective and individual objectives and ensure fairness. Empirical results in sequential social dilemma environments demonstrate that our approach outperforms baselines in terms of social welfare, while maintaining fairness.
|
https://openreview.net/forum?id=yPsJ1PKiAi
|
Main
|
Spotlight
|
yPsJ1PKiAi
|
Decomposing Interventional Causality into Synergistic, Redundant, and Unique Components
|
[
"Abel Jansma"
] |
We introduce a novel framework for decomposing interventional causal effects into synergistic, redundant, and unique components, building on the intuition of Partial Information Decomposition (PID) and the principle of Möbius inversion. While recent work has explored a similar decomposition of an observational measure, we argue that a proper causal decomposition must be interventional in nature. We develop a mathematical approach that systematically quantifies how causal power is distributed among variables in a system, using a recently derived closed-form expression for the Möbius function of the redundancy lattice. The formalism is then illustrated by decomposing the causal power in logic gates, cellular automata, and chemical reaction networks. Our results reveal how the distribution of causal power can be context- and parameter-dependent. The decomposition provides new insights into complex systems by revealing how causal influences are shared and combined among multiple variables, with potential applications ranging from attribution of responsibility in legal or AI systems, to the analysis of biological networks or climate models.
|
https://openreview.net/forum?id=yPnEvPq3kV
|
Main
|
Spotlight
|
yPnEvPq3kV
|
FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic
|
[
"Kanghyun Choi",
"Hyeyoon Lee",
"SunJong Park",
"Dain Kwon",
"Jinho Lee"
] |
Low-bit floating-point (FP) formats, such as FP8, provide significant acceleration and memory savings in model training thanks to native hardware support on modern GPUs and NPUs. However, our analysis shows that FP8 quantization offers speedups primarily for large-dimensional matrix multiplications, while inherent quantization overheads diminish these gains when applied to low-rank adaptation (LoRA), which uses small-dimensional matrices for efficient fine-tuning of large language models (LLMs). To address this limitation, we propose FALQON, a novel framework that eliminates the quantization overhead from separate LoRA computational paths by directly merging LoRA adapters into an FP8-quantized backbone during fine-tuning. Furthermore, we reformulate the forward and backward computations for merged adapters to significantly reduce quantization overhead, and introduce a row-wise proxy update mechanism that efficiently integrates substantial updates into the quantized backbone. Experimental evaluations demonstrate that FALQON achieves approximately a 3$\times$ training speedup over existing quantized LoRA methods with a similar level of accuracy, providing a practical solution for efficient large-scale model fine-tuning. Moreover, FALQON’s end-to-end FP8 workflow removes the need for post-training quantization, facilitating efficient deployment. Code is available at https://github.com/iamkanghyunchoi/falqon.
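A minimal sketch of the basic merge-and-requantize step implied by the abstract, assuming a recent PyTorch build with native float8 tensors; FALQON's reformulated forward/backward computation and row-wise proxy updates are not reproduced here.

```python
import torch

def merge_lora_fp8(W_fp8, scale, A, B):
    """Fold a LoRA update into an FP8 backbone weight (illustrative only).
    W_fp8: float8_e4m3fn weight, `scale` its dequantization scale,
    A: (r, in), B: (out, r) adapter factors in higher precision."""
    W = W_fp8.to(torch.float32) * scale            # dequantize backbone
    W = W + B @ A                                  # merge adapter into backbone
    new_scale = W.abs().max() / 448.0              # e4m3 max representable ~448
    W_q = (W / new_scale).to(torch.float8_e4m3fn)  # requantize
    return W_q, new_scale
```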
|
https://openreview.net/forum?id=yPXOfBoQL7
|
Main
|
Poster
|
yPXOfBoQL7
|
PDPO: Parametric Density Path Optimization
|
[
"Sebastian Gutierrez Hernandez",
"Peng Chen",
"Hao-Min Zhou"
] |
We introduce Parametric Density Path Optimization (PDPO), a novel method for computing action-minimizing paths between probability densities. The core idea is to represent the target probability path as the pushforward of a reference density through a parametric map, transforming the original infinite-dimensional optimization over densities to a finite-dimensional one over the parameters of the map. We derive a static formulation of the dynamic problem of action minimization and propose cubic spline interpolation of the path in parameter space to solve the static problem. Theoretically, we establish an error bound of the action under proper assumptions on the regularity of the parameter path. Empirically, we find that using 3–5 control points of the spline interpolation suffices to accurately resolve both multimodal and high-dimensional problems. We demonstrate that PDPO can flexibly accommodate a wide range of potential terms, including those modeling obstacles, mean-field interactions, stochastic control, and higher-order dynamics. Our method outperforms existing state-of-the-art approaches in benchmark tasks, demonstrating superior computational efficiency and solution quality.
|
https://openreview.net/forum?id=yPDNQvuyYM
|
Main
|
Poster
|
yPDNQvuyYM
|
BioCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive Learning
|
[
"Jianyang Gu",
"Samuel Stevens",
"Elizabeth G Campolongo",
"Matthew J Thompson",
"Net Zhang",
"Jiaman Wu",
"Andrei Kopanev",
"Zheda Mai",
"Alexander E. White",
"James Balhoff",
"Wasla Dahdul",
"Daniel Rubenstein",
"Hilmar Lapp",
"Tanya Berger-Wolf",
"Wei-Lun Chao",
"Yu Su"
] |
Foundation models trained at scale exhibit remarkable emergent behaviors, learning new capabilities beyond their initial training objectives. We find such emergent behaviors in biological vision models via large-scale contrastive vision-language training. To achieve this, we first curate TreeOfLife-200M, comprising 214 million images of living organisms, the largest and most diverse biological organism image dataset to date. We then train BioCLIP 2 on TreeOfLife-200M to distinguish different species. Despite the narrow training objective, BioCLIP 2 yields extraordinary accuracy when applied to various biological visual tasks such as habitat classification and trait prediction. We identify emergent properties in the learned embedding space of BioCLIP 2. At the inter-species level, the embedding distribution of different species aligns closely with functional and ecological meanings (e.g., beak sizes and habitats). At the intra-species level, instead of being diminished, the intra-species variations (e.g., life stages and sexes) are preserved and better separated in subspaces orthogonal to inter-species distinctions. We provide formal proof and analyses to explain why hierarchical supervision and contrastive objectives encourage these emergent properties. Crucially, our results reveal that these properties become increasingly significant with larger-scale training data, leading to a biologically meaningful embedding space.
|
https://openreview.net/forum?id=yPC9zmkQgG
|
Main
|
Spotlight
|
yPC9zmkQgG
|
Efficient Prompt Compression with Evaluator Heads for Long-Context Transformer Inference
|
[
"Weizhi Fei",
"Xueyan Niu",
"XIE GUOQING",
"Yingqing Liu",
"Bo Bai",
"Wei Han"
] |
Although applications involving long-context inputs are crucial for the effective utilization of large language models (LLMs), they also result in increased computational costs and reduced performance. To address this challenge, we propose an efficient, training-free prompt compression method that retains key information within compressed prompts. We identify specific attention heads in transformer-based LLMs, which we designate as evaluator heads, that are capable of selecting tokens in long inputs that are most significant for inference. Building on this discovery, we develop EHPC, an Evaluator Head-based Prompt Compression method, which enables LLMs to rapidly "skim through" input prompts by leveraging only the first few layers with evaluator heads during the pre-filling stage, subsequently passing only the important tokens to the model for inference. EHPC achieves state-of-the-art results across two mainstream benchmarks: prompt compression and long-context inference acceleration. Consequently, it improves performance while reducing the cost of commercial API calls compared to existing prompt compression methods. We further demonstrate that EHPC attains competitive results compared to key-value cache-based acceleration methods, thereby highlighting its potential to enhance the efficiency of LLMs for long-context tasks.
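A minimal sketch of evaluator-head compression, assuming a Hugging-Face-style model that can return attention maps; the evaluator layer/head indices are identified offline in the paper, and the real method runs only the first few layers during pre-filling rather than the full forward pass used here.

```python
import torch

@torch.no_grad()
def ehpc_compress(model, input_ids, head_layer, head_idx, keep_ratio=0.3):
    """Keep the tokens that a designated evaluator head attends to most.
    input_ids: LongTensor (1, seq). Returns the compressed prompt ids."""
    out = model(input_ids, output_attentions=True)
    attn = out.attentions[head_layer][0, head_idx]   # (queries, keys)
    scores = attn.sum(dim=0)                         # attention received per token
    k = max(1, int(keep_ratio * scores.numel()))
    keep = scores.topk(k).indices.sort().values      # preserve original order
    return input_ids[:, keep]
```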
|
https://openreview.net/forum?id=yOs12gdsaL
|
Main
|
Spotlight
|
yOs12gdsaL
|
Logic-in-Frames: Dynamic Keyframe Search via Visual Semantic-Logical Verification for Long Video Understanding
|
[
"Weiyu Guo",
"Ziyang Chen",
"Shaoguang Wang",
"Jianxiang He",
"Yijie Xu",
"Jinhui Ye",
"Ying Sun",
"Hui Xiong"
] |
Understanding long video content is a complex endeavor that often relies on densely sampled frame captions or end-to-end feature selectors, yet these techniques commonly overlook the logical relationships between textual queries and visual elements. In practice, computational constraints necessitate coarse frame subsampling, a challenge analogous to “finding a needle in a haystack.” To address this issue, we introduce a semantics-driven search framework that reformulates keyframe selection under the paradigm of Visual Semantic-Logical Search (VSLS). Specifically, we systematically define four fundamental logical dependencies: 1) spatial co-occurrence, 2) temporal proximity, 3) attribute dependency, and 4) causal order. These relations dynamically update frame sampling distributions through an iterative refinement process, enabling context-aware identification of semantically critical frames tailored to specific query requirements. Our method establishes new state-of-the-art performance on the manually annotated benchmark in keyframe selection metrics. Furthermore, when applied to downstream video question-answering tasks, the proposed approach demonstrates the best performance gains over existing methods on LongVideoBench and Video-MME, validating its effectiveness in bridging the logical gap between textual queries and visual-temporal reasoning. The code will be publicly available.
|
https://openreview.net/forum?id=yONFNHGoeP
|
Main
|
Poster
|
yONFNHGoeP
|
The Future Unmarked: Watermark Removal in AI-Generated Images via Next-Frame Prediction
|
[
"Huming Qiu",
"Zhaoxiang Wang",
"Mi Zhang",
"Xiaohan Zhang",
"Xiaoyu You",
"Min Yang"
] |
Image watermarking embeds imperceptible signals into AI-generated images for deepfake detection and provenance verification. Although recent semantic-level watermarking methods demonstrate strong resistance against conventional pixel-level removal attacks, their robustness against more advanced removal strategies remains underexplored, raising concerns about their reliability in practical scenarios. Existing removal attacks primarily operate in the pixel domain without altering image semantics, which limits their effectiveness against semantic-level watermarks.
In this paper, we propose Next Frame Prediction Attack (NFPA), the first semantic-level removal attack. Unlike pixel-level attacks, NFPA formulates watermark removal as a video generation task: it treats the watermarked image as the initial frame and aims to subtly manipulate the image semantics to generate the next-frame image, i.e., the unwatermarked image.
We conduct a comprehensive evaluation on eight state-of-the-art image watermarking schemes, demonstrating that NFPA consistently outperforms thirteen removal attack baselines in terms of the trade-off between watermark removal and image quality. Our results reveal the vulnerabilities of current image watermarking methods and highlight the urgent need for more robust watermarks.
|
https://openreview.net/forum?id=yO2zE1yIYZ
|
Main
|
Poster
|
yO2zE1yIYZ
|
Deeper with Riemannian Geometry: Overcoming Oversmoothing and Oversquashing for Graph Foundation Models
|
[
"Li Sun",
"Zhenhao Huang",
"Ming Zhang",
"Philip S. Yu"
] |
Message Passing Neural Networks (MPNNs) are the building block of graph foundation models, but fundamentally suffer from oversmoothing and oversquashing. There has recently been a surge of interest in fixing both issues. Existing efforts primarily adopt global approaches, which may be beneficial in some regions but detrimental in others, ultimately leading to the suboptimal expressiveness. In this paper, we begin by revisiting oversquashing through a global measure -- spectral gap $\lambda$ -- and prove that the increase of $\lambda$ leads to gradient vanishing with respect to the input features, thereby undermining the effectiveness of message passing. Motivated by such theoretical insights, we propose a local approach that adaptively adjusts message passing based on local structures. To achieve this, we connect local Riemannian geometry with MPNNs, and establish a novel nonhomogeneous boundary condition to address both oversquashing and oversmoothing. Building on the Robin condition, we design a GBN network with local bottleneck adjustment, coupled with theoretical guarantees. Extensive experiments on homophilic and heterophilic graphs show the expressiveness of GBN. Furthermore, GBN does not exhibit performance degradation even when the network depth exceeds $256$ layers.
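For concreteness, the global measure the analysis starts from can be computed as below; this is illustration only, since GBN itself adjusts message passing through local Riemannian geometry rather than through this global quantity.

```python
import numpy as np
import networkx as nx

def spectral_gap(G):
    """Second-smallest eigenvalue of the normalized graph Laplacian,
    the global oversquashing measure discussed in the abstract."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.sort(np.linalg.eigvalsh(L))[1]

# A barbell graph (two cliques joined by a path) is a classic bottleneck:
G = nx.barbell_graph(10, 2)
print(spectral_gap(G))   # small gap -> severe structural bottleneck
```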
|
https://openreview.net/forum?id=yNej0aGtAZ
|
Main
|
Poster
|
yNej0aGtAZ
|
Semantic and Visual Crop-Guided Diffusion Models for Heterogeneous Tissue Synthesis in Histopathology
|
[
"Saghir Alfasly",
"Wataru Uegami",
"MD ENAMUL HOQ",
"Ghazal Alabtah",
"Hamid Tizhoosh"
] |
Synthetic data generation in histopathology faces unique challenges: preserving tissue heterogeneity, capturing subtle morphological features, and scaling to unannotated datasets. We present a latent diffusion model that generates realistic heterogeneous histopathology images through a novel dual-conditioning approach combining semantic segmentation maps with tissue-specific visual crops. Unlike existing methods that rely on text prompts or abstract visual embeddings, our approach preserves critical morphological details by directly incorporating raw tissue crops from corresponding semantic regions. For annotated datasets (i.e., Camelyon16, Panda), we extract patches ensuring 20-80% tissue heterogeneity. For unannotated data (i.e., TCGA), we introduce a self-supervised extension that clusters whole-slide images into 100 tissue types using foundation model embeddings, automatically generating pseudo-semantic maps for training. Our method synthesizes high-fidelity images with precise region-wise annotations, achieving superior performance on downstream segmentation tasks. When evaluated on annotated datasets, models trained on our synthetic data show competitive performance to those trained on real data, demonstrating the utility of controlled heterogeneous tissue generation. In quantitative evaluation, prompt-guided synthesis reduces Fréchet Distance by up to 6× on Camelyon16 (from 430.1 to 72.0) and yields 2–3× lower FD across Panda and TCGA. Downstream DeepLabv3+ models trained solely on synthetic data attain test IoU of 0.71 and 0.95 on Camelyon16 and Panda, within 1–2% of real-data baselines (0.72 and 0.96). By scaling to 11,765 TCGA whole-slide images without manual annotations, our framework offers a practical solution to the urgent need for diverse, annotated histopathology data, addressing a critical bottleneck in computational pathology.
|
https://openreview.net/forum?id=yNVDkAjGjw
|
Main
|
Poster
|
yNVDkAjGjw
|
Visual Jenga: Discovering Object Dependencies via Counterfactual Inpainting
|
[
"Anand Bhattad",
"Konpat Preechakul",
"Alexei A Efros"
] |
This paper proposes a novel scene understanding task called Visual Jenga. Drawing inspiration from the game Jenga, the proposed task involves progressively removing objects from a single image until only the background remains. Just as Jenga players must understand structural dependencies to maintain tower stability, our task reveals the intrinsic relationships between scene elements by systematically exploring which objects can be removed while preserving scene coherence in both the physical and geometric sense. As a starting point for tackling the Visual Jenga task, we propose a simple, data-driven, training-free approach that is surprisingly effective on a range of real-world images. The principle behind our approach is to utilize the asymmetry in the pairwise relationships between objects within a scene and employ a large inpainting model to generate a set of counterfactuals to quantify the asymmetry.
|
https://openreview.net/forum?id=yMXn86pzWx
|
Main
|
Poster
|
yMXn86pzWx
|
DIFFSSR: Stereo Image Super-resolution Using Differential Transformer
|
[
"Dafeng Zhang"
] |
In the field of computer vision, the task of stereo image super-resolution (StereoSR) has garnered significant attention due to its potential applications in augmented reality, virtual reality, and autonomous driving. Traditional Transformer-based models, while powerful, often suffer from attention noise, leading to suboptimal reconstructions in super-resolved images. This paper introduces DIFFSSR, a novel neural network architecture designed to address these challenges. We introduce the Diff Cross Attention Block (DCAB) and the Sliding Stereo Cross-Attention Module (SSCAM) to enhance feature integration and mitigate the impact of attention noise. The DCAB differentiates between relevant and irrelevant context, amplifying attention to important features and canceling out noise. The SSCAM, with its sliding window mechanism and disparity-based attention, adapts to local variations in stereo images, preserving details and addressing the performance degradation due to misalignment of horizontal epipolar lines in stereo images. Extensive experiments on benchmark datasets demonstrate that DIFFSSR outperforms state-of-the-art methods, including NAFSSR and SwinFIRSSR, in terms of both quantitative metrics and visual quality.
|
https://openreview.net/forum?id=yLApxMEja7
|
Main
|
Poster
|
yLApxMEja7
|
Iterative Tool Usage Exploration for Multimodal Agents via Step-wise Preference Tuning
|
[
"Pengxiang Li",
"Zhi Gao",
"Bofei Zhang",
"Yapeng Mi",
"Xiaojian Ma",
"Chenrui Shi",
"Tao Yuan",
"Yuwei Wu",
"Yunde Jia",
"Song-Chun Zhu",
"Qing Li"
] |
Multimodal agents, which integrate a controller (e.g., a vision language model) with external tools, have demonstrated remarkable capabilities in tackling complex multimodal tasks.
Existing approaches for training these agents, both supervised fine-tuning and reinforcement learning, depend on extensive human-annotated task-answer pairs and tool trajectories.
However, for complex multimodal tasks, such annotations are prohibitively expensive or impractical to obtain.
In this paper, we propose an iterative tool usage exploration method for multimodal agents without any pre-collected data, namely SPORT, via step-wise preference optimization to refine the trajectories of tool usage. Our method enables multimodal agents to autonomously discover effective tool usage strategies through self-exploration and optimization, eliminating the bottleneck of human annotation.
SPORT has four iterative components: task synthesis, step sampling, step verification, and preference tuning.
We first synthesize multimodal tasks using language models.
Then, we introduce a novel trajectory exploration scheme, where step sampling and step verification are executed alternately to solve synthesized tasks.
In step sampling, the agent tries different tools and obtains corresponding results.
In step verification, we employ a verifier to provide AI feedback to construct step-wise preference data.
The data is subsequently used to update the controller for tool usage through preference tuning, producing a SPORT agent.
By interacting with real environments, the SPORT agent gradually evolves into a more refined and capable system.
Evaluation on the GTA and GAIA benchmarks shows that the SPORT agent achieves 6.41% and 3.64% improvements, underscoring the generalization and effectiveness introduced by our method.
|
https://openreview.net/forum?id=yKUwkihcsi
|
Main
|
Poster
|
yKUwkihcsi
|
Beyond $\tilde{O}(\sqrt{T})$ Constraint Violation for Online Convex Optimization with Adversarial Constraints
|
[
"Abhishek Sinha",
"Rahul Vaze"
] |
We study Online Convex Optimization with adversarial constraints (COCO). At each round a learner selects an action from a convex decision set and then an adversary reveals a convex cost and a convex constraint function. The goal of the learner is to select a sequence of actions to minimize both regret and the cumulative constraint violation (CCV) over a horizon of length $T$. The best-known policy for this problem achieves $O(\sqrt{T})$ regret and $\tilde{O}(\sqrt{T})$ CCV. In this paper, we improve this by trading off regret to achieve substantially smaller CCV. This trade-off is especially important in safety-critical applications, where satisfying the safety constraints is non-negotiable. Specifically, for any bounded convex cost and constraint functions, we propose an online policy that achieves $\tilde{O}(\sqrt{dT}+ T^\beta)$ regret and $\tilde{O}(dT^{1-\beta})$ CCV, where $d$ is the dimension of the decision set and $\beta \in [0,1]$ is a tunable parameter. We begin with a special case, called the $\textsf{Constrained Expert}$ problem, where the decision set is a probability simplex and the cost and constraint functions are linear. Leveraging a new adaptive small-loss regret bound, we propose a computationally efficient policy for the $\textsf{Constrained Expert}$ problem, that attains $O(\sqrt{T\ln N}+T^{\beta})$ regret and $\tilde{O}(T^{1-\beta} \ln N)$ CCV for $N$ number of experts. The original problem is then reduced to the $\textsf{Constrained Expert}$ problem via a covering argument. Finally, with an additional $M$-smoothness assumption, we propose a computationally efficient first-order policy attaining $O(\sqrt{MT}+T^{\beta})$ regret and $\tilde{O}(MT^{1-\beta})$ CCV.
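For reference, the two quantities being traded off are, in standard COCO notation (the paper's exact comparator set may differ):

```latex
% x_t: learner's action; f_t: cost and g_t: constraint revealed at round t.
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t)
  \;-\; \min_{x^\star \in \mathcal{X}:\, g_t(x^\star)\le 0\ \forall t}\ \sum_{t=1}^{T} f_t(x^\star),
\qquad
\mathrm{CCV}_T \;=\; \sum_{t=1}^{T} \max\{g_t(x_t),\, 0\}.
```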
|
https://openreview.net/forum?id=yK4Xu7DDd6
|
Main
|
Poster
|
yK4Xu7DDd6
|
Mint: A Simple Test-Time Adaptation of Vision-Language Models against Common Corruptions
|
[
"Wenxuan Bao",
"Ruxi Deng",
"Jingrui He"
] |
Pretrained vision-language models such as CLIP achieve strong zero-shot generalization but remain vulnerable to distribution shifts caused by input corruptions. In this work, we investigate how corruptions affect CLIP’s image embeddings and uncover a consistent phenomenon we term embedding variance collapse, where both intra-class and inter-class variances shrink as corruption severity increases. We find that this collapse is closely tied to performance degradation, with inter-class variance strongly correlated with classification accuracy. To explain this phenomenon, we analyze how corruptions alter the structure of the embedding space. Our theoretical results suggest that the visual encoder tends to encode corruption-related signals, which dilute class-discriminative features and compress the representation geometry. We further show that maximizing inter-class variance, even when estimated from pseudo-labels, can provably enhance embedding quality. Based on this insight, we propose Mint, a simple test-time adaptation method that maximizes pseudo-label-based inter-class variance on the fly using a mean accumulator and a gradient accumulator. Mint operates effectively with small batch sizes and consistently improves performance across multiple corruption benchmarks and CLIP architectures. Our code is available at https://github.com/baowenxuan/Mint.
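A single-batch sketch of the objective the abstract describes, maximizing inter-class variance under zero-shot pseudo-labels; Mint's mean and gradient accumulators across batches are omitted, and all names are illustrative.

```python
import torch

def mint_loss(feats, text_feats):
    """feats: (B, d) image embeddings with gradients; text_feats: (C, d)
    frozen class text embeddings. Returns the negative pseudo-label-based
    inter-class variance (to be minimized)."""
    pseudo = (feats.detach() @ text_feats.T).argmax(-1)   # zero-shot labels
    means, weights = [], []
    for c in range(text_feats.shape[0]):
        mask = pseudo == c
        if mask.any():
            means.append(feats[mask].mean(0))
            weights.append(mask.float().mean())
    means, w = torch.stack(means), torch.stack(weights)
    mu = (w[:, None] * means).sum(0) / w.sum()            # global mean
    inter_var = (w[:, None] * (means - mu) ** 2).sum() / w.sum()
    return -inter_var
```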
|
https://openreview.net/forum?id=yJpBVE4vfo
|
Main
|
Poster
|
yJpBVE4vfo
|
Novel Exploration via Orthogonality
|
[
"Andreas Theophilou",
"Özgür Şimşek"
] |
Efficient exploration remains one of the key open problems in reinforcement learning. Discovering novel states or transitions efficiently requires policies that effectively direct the agent away from regions of the state space that are already well explored. We introduce Novel Exploration via Orthogonality (NEO), an approach that automatically uncovers not only which regions of the environment are novel but also how to reach them by leveraging Laplacian representations. NEO uses the eigenvectors of a modified graph Laplacian to induce gradient flows from states that are frequently visited (less novel) to states that are seldom visited (more novel). We show that NEO’s modified Laplacian yields eigenvectors whose extreme values align with the most novel regions of the state space. We provide bounds for the eigenvalues of the modified Laplacian, and we show that the smoothest eigenvectors with real eigenvalues below certain thresholds provide guaranteed gradients to novel states for both undirected and directed graphs. In an empirical evaluation in online, incremental settings, NEO outperformed related state-of-the-art approaches, including eigen-options and cover options, in a large collection of undirected and directed domains with varying structures.
|
https://openreview.net/forum?id=yJS1eZSNUv
|
Main
|
Poster
|
yJS1eZSNUv
|
MISA: Memory-Efficient LLMs Optimization with Module-wise Importance Sampling
|
[
"Yuxi Liu",
"Renjia Deng",
"Yutong He",
"Xue Wang",
"Tao Yao",
"Kun Yuan"
] |
The substantial memory demands of pre-training and fine-tuning large language models (LLMs) require memory-efficient optimization algorithms. One promising approach is layer-wise optimization, which treats each transformer block as a single layer and optimizes it sequentially, while freezing the other layers to save optimizer states and activations. Although effective, these methods ignore the varying importance of the modules within each layer, leading to suboptimal performance. Moreover, layer-wise sampling provides only limited memory savings, as at least one full layer must remain active during optimization. To overcome these limitations, we propose **M**odule-wise **I**mportance **SA**mpling (**MISA**), a novel method that divides each layer into smaller modules and assigns importance scores to each module.
MISA uses a weighted random sampling mechanism to activate modules, provably reducing gradient variance compared to layer-wise sampling. Additionally, we establish an $\mathcal{O}(1/\sqrt{K})$ convergence rate under non-convex and stochastic conditions, where $K$ is the total number of training steps, and provide a detailed memory analysis showcasing MISA's superiority over existing baseline methods. Experiments on diverse learning tasks validate the effectiveness of MISA.
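A minimal sketch of the sampling mechanism with `torch.nn` modules; how the importance scores are computed is the paper's contribution and is taken as an input here, and the `1/p` debiasing hinted at in the return value is an assumption about how such estimators are usually made unbiased.

```python
import torch

def sample_modules(modules, importance, n_active, seed=None):
    """Activate n_active modules with probability proportional to their
    importance scores; freeze the rest to save optimizer state/activations.
    modules: list of torch.nn.Module; importance: 1-D nonnegative tensor."""
    g = torch.Generator()
    if seed is not None:
        g.manual_seed(seed)
    probs = importance / importance.sum()
    idx = torch.multinomial(probs, n_active, replacement=False, generator=g)
    active = set(idx.tolist())
    for i, m in enumerate(modules):
        for p in m.parameters():
            p.requires_grad_(i in active)
    return idx, probs[idx]   # keep probs to debias gradients (1/p scaling)
```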
|
https://openreview.net/forum?id=yISJGSdzdd
|
Main
|
Poster
|
yISJGSdzdd
|
Prompt Tuning Decision Transformers with Structured and Scalable Bandits
|
[
"Finn Rietz",
"Oleg Smirnov",
"Sara Karimi",
"Lele Cao"
] |
Prompt tuning has emerged as a key technique for adapting large pre-trained Decision Transformers (DTs) in offline Reinforcement Learning (RL), particularly in multi-task and few-shot settings. The Prompting Decision Transformer (PDT) enables task generalization via trajectory prompts sampled uniformly from expert demonstrations -- without accounting for prompt informativeness. In this work, we propose a bandit-based prompt-tuning method that learns to construct optimal trajectory prompts from demonstration data at inference time. We devise a structured bandit architecture operating in the trajectory prompt space, achieving linear rather than combinatorial scaling with prompt size. Additionally, we show that the pre-trained PDT itself can serve as a powerful feature extractor for the bandit, enabling efficient reward modeling across various environments. We theoretically establish regret bounds and demonstrate empirically that our method consistently enhances performance across a wide range of tasks, high-dimensional environments, and out-of-distribution scenarios, outperforming existing baselines in prompt tuning.
|
https://openreview.net/forum?id=yI55mj6anU
|
Main
|
Poster
|
yI55mj6anU
|
MAPLE: Multi-scale Attribute-enhanced Prompt Learning for Few-shot Whole Slide Image Classification
|
[
"Junjie Zhou",
"WEI SHAO",
"Yagao Yue",
"Wei Mu",
"Peng Wan",
"Qi Zhu",
"Daoqiang Zhang"
] |
Prompt learning has emerged as a promising paradigm for adapting pre-trained vision-language models (VLMs) to few-shot whole slide image (WSI) classification by aligning visual features with textual representations, thereby reducing annotation cost and enhancing model generalization. Nevertheless, existing methods typically rely on slide-level prompts and fail to capture the subtype-specific phenotypic variations of histological entities (e.g., nuclei, glands) that are critical for cancer diagnosis. To address this gap, we propose Multi-scale Attribute-enhanced Prompt Learning (MAPLE), a hierarchical framework for few-shot WSI classification that jointly integrates multi-scale visual semantics and performs prediction at both the entity and slide levels. Specifically, we first leverage large language models (LLMs) to generate entity-level prompts that can help identify multi-scale histological entities and their phenotypic attributes, as well as slide-level prompts to capture global visual descriptions. Then, an entity-guided cross-attention module is proposed to generate entity-level features, followed by aligning with their corresponding subtype-specific attributes for fine-grained entity-level prediction. To enrich entity representations, we further develop a cross-scale entity graph learning module that can update these representations by capturing their semantic correlations within and across scales. The refined representations are then aggregated into a slide-level representation and aligned with the corresponding prompts for slide-level prediction. Finally, we combine both entity-level and slide-level outputs to produce the final prediction results. Results on three cancer cohorts confirm the effectiveness of our approach in addressing few-shot pathology diagnosis tasks.
|
https://openreview.net/forum?id=yHi8Ao6GAe
|
Main
|
Poster
|
yHi8Ao6GAe
|
Alligat0R: Pre-Training through Covisibility Segmentation for Relative Camera Pose Regression
|
[
"Thibaut Loiseau",
"Guillaume Bourmaud",
"Vincent Lepetit"
] |
Pre-training techniques have greatly advanced computer vision, with CroCo’s cross-view completion approach yielding impressive results in tasks like 3D reconstruction and pose regression. However, cross-view completion is ill-posed in non-covisible regions, limiting its effectiveness. We introduce Alligat0R, a novel pre-training approach that replaces cross-view learning with a covisibility segmentation task. Our method predicts whether each pixel in one image is covisible in the second image, occluded, or outside the field of view, making the pre-training effective in both covisible and non-covisible regions, and provides interpretable predictions. To support this, we present Cub3, a large-scale dataset with 5M image pairs and dense covisibility annotations derived from the nuScenes and ScanNet datasets. Cub3 includes diverse scenarios with varying degrees of overlap. The experiments show that our novel pre-training method Alligat0R significantly outperforms CroCo in relative pose regression. Alligat0R and Cub3 will be made publicly available.
|
https://openreview.net/forum?id=yHJRI6rzaA
|
Main
|
Spotlight
|
yHJRI6rzaA
|
Rebalancing Return Coverage for Conditional Sequence Modeling in Offline Reinforcement Learning
|
[
"Wensong Bai",
"Chufan Chen",
"Yichao Fu",
"Qihang Xu",
"Chao Zhang",
"Hui Qian"
] |
Recent advancements in offline reinforcement learning (RL) have underscored the capabilities of conditional sequence modeling (CSM), a paradigm that models the action distribution conditioned on both historical trajectories and target returns associated with each state. However, due to the imbalanced return distribution caused by suboptimal datasets, CSM grapples with a serious distributional shift problem when conditioning on high returns. While recent approaches attempt to empirically tackle this challenge through return rebalancing techniques such as weighted sampling and value-regularized supervision, the relationship between return rebalancing and the performance of CSM methods is not well understood. In this paper, we reveal that both expert-level and full-spectrum return-coverage critically influence the performance and sample efficiency of CSM policies. Building on this finding, we devise a simple yet effective return-coverage rebalancing mechanism that can be seamlessly integrated into common CSM frameworks, including the most widely used one, Decision Transformer (DT). The resulting CSM algorithm, referred to as Return-rebalanced Value-regularized Decision Transformer (RVDT), integrates both implicit and explicit return-coverage rebalancing mechanisms, and achieves state-of-the-art performance in the D4RL experiments.
|
https://openreview.net/forum?id=yGf8DSwR09
|
Main
|
Poster
|
yGf8DSwR09
|
DiffLiG: Diffusion-enhanced Liquid Graph with Attention Propagation for Grid-to-Station Precipitation Correction
|
[
"Yuxiang Li",
"Yang Zhang",
"Guowen Li",
"Mengxuan Chen",
"Meng Jin",
"Fang Wang",
"Haohuan Fu",
"Juepeng Zheng"
] |
Modern precipitation forecasting systems, including reanalysis datasets, numerical models, and AI-based approaches, typically produce coarse-resolution gridded outputs. The process of converting these outputs to station-level predictions often introduces substantial spatial biases relative to station-level observations, especially in complex terrains or under extreme conditions. These biases stem from two core challenges: (i) $\textbf{station-level heterogeneity}$, with site-specific temporal and spatial dynamics; and (ii) $\textbf{oversmoothing}$, which blurs fine-scale variability in graph-based models. To address these issues, we propose $\textbf{DiffLiG}$ ($\underline{Diff}$usion-enhanced $\underline{Li}$quid $\underline{G}$raph with Attention Propagation), a graph neural network designed for precise spatial correction from gridded forecasts to station observations. DiffLiG integrates a GeoLiquidNet that adapts temporal encoding via site-aware OU dynamics, a graph neural network with a dynamic edge modulator that learns spatially adaptive connectivity, and a Probabilistic Diffusion Selector that generates and refines ensemble forecasts to mitigate oversmoothing. Experiments across multiple datasets show that DiffLiG consistently outperforms other methods, delivering more accurate and robust corrections across diverse geographic and climatic settings. Moreover, it achieves notable gains on other key meteorological variables, underscoring its generalizability and practical utility.
|
https://openreview.net/forum?id=yGWLWjM4nq
|
Main
|
Poster
|
yGWLWjM4nq
|
KVCOMM: Online Cross-context KV-cache Communication for Efficient LLM-based Multi-agent Systems
|
[
"Hancheng Ye",
"Zhengqi Gao",
"Mingyuan Ma",
"Qinsi Wang",
"Yuzhe Fu",
"Ming-Yu Chung",
"Yueqian Lin",
"Zhijian Liu",
"Jianyi Zhang",
"Danyang Zhuo",
"Yiran Chen"
] |
Multi-agent large language model (LLM) systems are increasingly adopted for complex language processing tasks that require communication and coordination among agents. However, these systems often suffer substantial overhead from repeated reprocessing of overlapping contexts across agents. In typical pipelines, once an agent receives a message from its predecessor, the full context, including prior turns, must be reprocessed from scratch, leading to inefficient processing. While key-value (KV) caching is an effective solution for avoiding redundant computation in single-agent settings where prefixes remain unchanged, it cannot be directly reused in multi-agent scenarios due to diverging prefixes introduced by agent-specific context extensions. We identify that the core challenge lies in the offset variance of KV-caches across agents. To address this, we propose **KVCOMM**, a training-free framework that enables efficient prefilling in multi-agent inference by reusing KV-caches and aligning cache offsets of overlapping contexts under diverse prefix contexts. KVCOMM estimates and adjusts KV-caches for shared content by referencing a pool of cached examples, termed *anchors*, that store observed cache deviations under varying prefixes. The anchor pool is maintained and updated online, allowing dynamic adaptation to distinct user requests and context structures. KVCOMM achieves over 70% reuse rate across diverse multi-agent workloads, including retrieval-augmented generation, math reasoning, and collaborative coding tasks, all without quality degradation. Particularly, when each fully-connected agent receives 1K input tokens with 512 prefix tokens and 512 output tokens under a five-agent setting, KVCOMM achieves up to 7.8× speedup compared to the standard prefill pipeline, reducing TTFT from ∼430ms to ∼55ms. Code is available at https://github.com/FastMAS/KVCOMM.
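A deliberately simplified sketch of the anchor mechanism: pick the nearest cached anchor by prefix embedding and apply its observed KV deviation as an additive offset. The shapes, similarity measure, and additive-offset assumption are all illustrative rather than KVCOMM's actual estimator.

```python
import torch

def kvcomm_reuse(kv_cache, prefix_emb, anchors):
    """kv_cache: KV tensor computed under one prefix; prefix_emb: (d,)
    embedding of the new prefix; anchors: list of (anchor_prefix_emb, delta)
    pairs maintained online, where delta is the KV deviation observed for
    shared content under that prefix. Returns an offset-aligned cache."""
    sims = torch.stack([torch.cosine_similarity(prefix_emb, emb, dim=0)
                        for emb, _ in anchors])
    _, delta = anchors[sims.argmax().item()]     # nearest anchor
    return kv_cache + delta                      # reuse without re-prefilling
```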
|
https://openreview.net/forum?id=yGOytgjurF
|
Main
|
Poster
|
yGOytgjurF
|
Fisher meets Feynman: score-based variational inference with a product of experts
|
[
"Diana Cai",
"Robert M. Gower",
"David Blei",
"Lawrence K. Saul"
] |
We introduce a highly expressive yet distinctly tractable family for black-box variational inference (BBVI). Each member of this family is a weighted product of experts (PoE), and each weighted expert in the product is proportional to a multivariate $t$-distribution. These products of experts can model distributions with skew, heavy tails, and multiple modes, but to use them for BBVI, we must be able to sample from their densities. We show how to do this by reformulating these products of experts as latent variable models with auxiliary Dirichlet random variables. These Dirichlet variables emerge from a Feynman identity, originally developed for loop integrals in quantum field theory, that expresses the product of multiple fractions (or in our case, $t$-distributions) as an integral over the simplex. We leverage this simplicial latent space to draw weighted samples from these products of experts---samples which BBVI then uses to find the PoE that best approximates a target density. Given a collection of experts, we derive an iterative procedure to optimize the exponents that determine their geometric weighting in the PoE. At each iteration, this procedure minimizes a regularized Fisher divergence to match the scores of the variational and target densities at a batch of samples drawn from the current approximation. This minimization reduces to a convex quadratic program, and we prove under general conditions that these updates converge exponentially fast to a near-optimal weighting of experts. We conclude by evaluating this approach on a variety of synthetic and real-world target distributions.
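For reference, the Feynman identity the abstract invokes is, in its standard form (the paper applies it with multivariate $t$-distribution denominators); the $u$-dependence of the integrand is a Dirichlet$(\alpha)$ density over the simplex, which is how the auxiliary Dirichlet variables enter.

```latex
\frac{1}{\prod_{i=1}^{n} A_i^{\alpha_i}}
  \;=\;
\frac{\Gamma\!\big(\sum_{i} \alpha_i\big)}{\prod_{i} \Gamma(\alpha_i)}
\int_{\Delta_{n-1}}
  \frac{\prod_{i} u_i^{\alpha_i - 1}}
       {\big(\sum_{i} u_i A_i\big)^{\sum_i \alpha_i}}
  \, \mathrm{d}u
```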
|
https://openreview.net/forum?id=yG8vmj3EAU
|
Main
|
Spotlight
|
yG8vmj3EAU
|
4D-VLA: Spatiotemporal Vision-Language-Action Pretraining with Cross-Scene Calibration
|
[
"Jiahui Zhang",
"Yurui Chen",
"Yueming Xu",
"Ze Huang",
"Yanpeng Zhou",
"Yu-Jie Yuan",
"Xinyue Cai",
"Guowei Huang",
"Xingyue Quan",
"Hang Xu",
"Li Zhang"
] |
Leveraging diverse robotic data for pretraining remains a critical challenge. Existing methods typically model the dataset’s action distribution using simple observations as inputs. However, these inputs are often incomplete, resulting in a dispersed conditional action distribution, an issue we refer to as coordinate system chaos and state chaos. This inconsistency significantly hampers pretraining efficiency. To address this, we propose 4D-VLA, a novel approach that effectively integrates 4D information into the input to mitigate these sources of chaos. Our model introduces depth and temporal information into visual features with sequential RGB-D inputs, aligning the coordinate systems of the robot and the scene. This alignment endows the model with strong spatiotemporal reasoning capabilities while minimizing training overhead. Additionally, we introduce Memory bank sampling, a frame sampling strategy designed to extract informative frames from historical images, further improving effectiveness and efficiency. Experimental results demonstrate that our pretraining method and architectural components substantially enhance model performance. In both simulated and real-world experiments, our model achieves a significant increase in success rate over OpenVLA. To further assess spatial perception and generalization to novel views, we introduce MV-Bench, a multi-view simulation benchmark. Our model consistently outperforms existing methods, demonstrating stronger spatial understanding and adaptability.
|
https://openreview.net/forum?id=yFjgV3cJje
|
Main
|
Poster
|
yFjgV3cJje
|
Learning “Partner-Aware” Collaborators in Multi-Party Collaboration
|
[
"Abhijnan Nath",
"Nikhil Krishnaswamy"
] |
Large Language Models (LLMs) are increasingly being deployed in agentic settings where they act as collaborators with humans. Therefore, it is increasingly important to be able to evaluate their abilities to collaborate effectively in multi-turn, multi-party tasks. In this paper, we build on the AI alignment and “safe interruptability” literature to offer novel theoretical insights on collaborative behavior between LLM-driven *collaborator agents* and an *intervention agent*. Our goal is to learn an ideal “partner-aware” collaborator that increases the group’s common ground (CG)—alignment on task-relevant propositions—by intelligently collecting information provided in *interventions* by a partner agent. We show how LLM agents trained using standard RLHF and related approaches are naturally inclined to ignore possibly well-meaning interventions, which makes increasing group common ground non-trivial in this setting. We employ a two-player Modified-Action MDP to examine this suboptimal behavior of standard AI agents, and propose **Interruptible Collaborative Roleplayer (ICR)**—a novel “partner-aware” learning algorithm to train CG-optimal collaborators. Experiments on multiple collaborative task environments show that ICR, on average, is more capable of promoting successful CG convergence and exploring more diverse solutions in such tasks.
|
https://openreview.net/forum?id=yFfWVr2TmZ
|
Main
|
Poster
|
yFfWVr2TmZ
|
Distil-E2D: Distilling Image-to-Depth Priors for Event-Based Monocular Depth Estimation
|
[
"Jie Long Lee",
"Gim Hee Lee"
] |
Event cameras are neuromorphic vision sensors that asynchronously capture pixel-level intensity changes with high temporal resolution and dynamic range. These properties make them well suited for monocular depth estimation under challenging lighting conditions. However, progress in event-based monocular depth estimation remains constrained by the quality of supervision: LiDAR-based depth labels are inherently sparse, spatially incomplete, and prone to artifacts. Consequently, these signals are suboptimal for learning dense depth from sparse events. To address this problem, we propose Distil-E2D, a framework that distills depth priors from the image domain into the event domain by generating dense synthetic pseudolabels from co-recorded APS or RGB frames using foundational depth models. These pseudolabels complement sparse LiDAR depths with dense, semantically rich supervision informed by large-scale image-depth datasets. To reconcile discrepancies between synthetic and real depths, we introduce a Confidence-Guided Calibrated Depth Loss that learns nonlinear depth alignment and adaptively weights supervision by alignment confidence. Additionally, our architecture integrates past predictions via a Context Transformer and employs a Dual-Decoder Training scheme that enhances encoder representations by jointly learning metric and relative depth abstractions. Experiments on benchmark datasets show that Distil-E2D achieves state-of-the-art performance in event-based monocular depth estimation across both event-only and event+APS settings.
|
https://openreview.net/forum?id=yFerzf9v1b
|
Main
|
Poster
|
yFerzf9v1b
|
Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning
|
[
"Julian Minder",
"Clément Dumas",
"Caden Juang",
"Bilal Chughtai",
"Neel Nanda"
] |
Model diffing is the study of how fine-tuning changes a model's representations and internal algorithms.
Many behaviors of interest are introduced during fine-tuning, and model diffing offers a promising lens to interpret such behaviors.
Crosscoders are a recent model diffing method that learns a shared dictionary of interpretable concepts represented as latent directions in both the base and fine-tuned models, allowing us to track how concepts shift or emerge during fine-tuning.
Notably, prior work has observed concepts with no direction in the base model, and it was hypothesized that these model-specific latents were concepts introduced during fine-tuning.
However, we identify two issues which stem from the crosscoder's L1 training loss that can misattribute concepts as unique to the fine-tuned model when they really exist in both models.
We develop Latent Scaling to flag these issues by more accurately measuring each latent's presence across models.
In experiments comparing Gemma 2 2B base and chat models, we observe that the standard crosscoder suffers heavily from these issues. Building on these insights, we train a crosscoder with BatchTopK loss and show that it substantially mitigates these issues, finding more genuinely chat-specific and highly interpretable concepts. We recommend practitioners adopt similar techniques.
Using the BatchTopK crosscoder, we successfully identify a set of chat-specific latents that are both interpretable and causally effective, representing concepts such as false information and personal questions, along with multiple refusal-related latents that show nuanced preferences for different refusal triggers.
Overall, our work advances best practices for the crosscoder-based methodology for model diffing and demonstrates that it can provide concrete insights into how chat-tuning modifies model behavior.
|
https://openreview.net/forum?id=yFdNygEryH
|
Main
|
Poster
|
yFdNygEryH
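As a rough illustration of the Latent Scaling idea mentioned in the entry above, one can fit, per latent, the least-squares scalar that best rescales the latent's decoder direction to explain each model's activations; a latent that is genuinely chat-only should receive a near-zero scale on the base model. This is a hedged sketch of the flavor of the method, not the paper's exact estimator, and all names are illustrative.

```python
import numpy as np

def latent_scale(f, d, acts):
    """Least-squares scale beta minimizing ||beta * f_n * d - a_n||^2 over
    samples n. f: (n,) latent activations; d: (dm,) decoder direction;
    acts: (n, dm) model activations. Closed form of a 1-D least squares."""
    num = np.einsum('n,nd,d->', f, acts, d)   # sum_n f_n <a_n, d>
    den = np.sum(f ** 2) * np.dot(d, d)       # sum_n f_n^2 ||d||^2
    return num / max(den, 1e-12)

# A latent flagged as "chat-only" by the L1 crosscoder can be sanity-checked
# by comparing its fitted scale on base vs. chat activations:
#   beta_base = latent_scale(f, d_base, base_acts)
#   beta_chat = latent_scale(f, d_chat, chat_acts)
# A ratio beta_base / beta_chat well above zero suggests the concept also
# lives in the base model, i.e., a sparsity artifact rather than a truly
# chat-specific latent.
```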
|
SceneDesigner: Controllable Multi-Object Image Generation with 9-DoF Pose Manipulation
|
[
"Zhenyuan Qin",
"Xincheng Shuai",
"Henghui Ding"
] |
Controllable image generation has attracted increasing attention in recent years, enabling users to manipulate visual content such as identity and style. However, achieving simultaneous control over the 9D poses (location, size, and orientation) of multiple objects remains an open challenge. Despite recent progress, existing methods often suffer from limited controllability and degraded quality, falling short of comprehensive multi-object 9D pose control. To address these limitations, we propose ***SceneDesigner***, a method for accurate and flexible multi-object 9-DoF pose manipulation. ***SceneDesigner*** incorporates a branched network into the pre-trained base model and leverages a new representation, the ***CNOCS map***, which encodes 9D pose information from the camera view. This representation exhibits strong geometric interpretation properties, leading to more efficient and stable training. To support training, we construct a new dataset, ***ObjectPose9D***, which aggregates images from diverse sources along with 9D pose annotations. To further address data imbalance issues, particularly performance degradation on low-frequency poses, we introduce a two-stage training strategy with reinforcement learning, where the second stage fine-tunes the model using a reward-based objective on rebalanced data. At inference time, we propose ***Disentangled Object Sampling***, a technique that mitigates insufficient object generation and concept confusion in complex multi-object scenes. Moreover, by integrating user-specific personalization weights, ***SceneDesigner*** enables customized pose control for reference subjects. Extensive qualitative and quantitative experiments demonstrate that ***SceneDesigner*** significantly outperforms existing approaches in both controllability and quality.
|
https://openreview.net/forum?id=yFasd68NyI
|
Main
|
Spotlight
|
yFasd68NyI
|
Policy Optimized Text-to-Image Pipeline Design
|
[
"Uri Gadot",
"Rinon Gal",
"Yftah Ziser",
"Gal Chechik",
"Shie Mannor"
] |
Text-to-image generation has evolved beyond single monolithic models to complex multi-component pipelines that combine various enhancement tools. While these pipelines significantly improve image quality, their effective design requires substantial expertise. Recent approaches automating this process through large language models (LLMs) have shown promise but suffer from two critical limitations: extensive computational requirements from generating images with hundreds of predefined pipelines, and poor generalization beyond memorized training examples.
We introduce a novel reinforcement learning-based framework that addresses these inefficiencies. Our approach first trains an ensemble of reward models capable of predicting image quality scores directly from prompt-workflow combinations, eliminating the need for costly image generation during training. We then implement a two-phase training strategy: initial workflow prediction training followed by GRPO-based optimization that guides the model toward higher-performing regions of the workflow space. Additionally, we incorporate a classifier-free guidance based enhancement technique that extrapolates along the path between the initial and GRPO-tuned models, further improving output quality.
We validate our approach through a set of comparisons, showing that it can successfully create new flows with greater diversity and lead to superior image quality compared to existing baselines.
|
https://openreview.net/forum?id=yEq201U9AM
|
Main
|
Poster
|
yEq201U9AM
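The entry above describes a classifier-free-guidance-style enhancement that extrapolates along the path between the initial and GRPO-tuned models. The abstract does not say whether this happens at the weight or output level; the sketch below shows the output (logit)-level variant under that assumption, with all names illustrative.

```python
import torch

def cfg_style_extrapolate(logits_init: torch.Tensor,
                          logits_tuned: torch.Tensor,
                          scale: float = 1.5) -> torch.Tensor:
    """Extrapolate along the path from the initial model to the tuned model.
    scale = 0.0 recovers the initial model, scale = 1.0 the tuned model,
    and scale > 1.0 pushes further in the direction the tuning moved the
    distribution (the CFG-style extrapolation the abstract alludes to)."""
    return logits_init + scale * (logits_tuned - logits_init)

# Hypothetical usage for next-token sampling over workflow tokens:
# probs = torch.softmax(cfg_style_extrapolate(l_init, l_grpo, 1.5), dim=-1)
```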
|
Enhancing Infrared Vision: Progressive Prompt Fusion Network and Benchmark
|
[
"Jinyuan Liu",
"Zihang Chen",
"Zhu Liu",
"Zhiying Jiang",
"Long Ma",
"Xin Fan",
"Risheng Liu"
] |
We address the relatively underexplored task of thermal infrared image enhancement. Existing infrared image enhancement methods primarily focus on tackling individual degradations, such as noise, contrast, and blurring, making it difficult to handle coupled degradations. Meanwhile, all-in-one enhancement methods, commonly applied to RGB sensors, often demonstrate limited effectiveness due to the significant differences in imaging models. In light of this, we first revisit the imaging mechanism and introduce a Recurrent Prompt Fusion Network (RPFN). Specifically, the RPFN initially establishes prompt pairs based on the thermal imaging process. For each type of degradation, we fuse the corresponding prompt pairs to modulate the model's features, providing adaptive guidance that enables the model to better address specific degradations under single or multiple conditions. In addition, a selective recurrent training mechanism is introduced to gradually refine the model's handling of composite cases and align the enhancement process; this not only allows the model to remove camera noise and retain key structural details, but also enhances the overall contrast of the thermal image. Furthermore, we introduce the most comprehensive high-quality infrared benchmark to date, covering a wide range of scenarios. Extensive experiments substantiate that our approach not only delivers promising visual results under specific degradations but also significantly improves performance on complex degradation scenes, achieving a notable 8.76% improvement.
|
https://openreview.net/forum?id=yEddfz9SgJ
|
Main
|
Poster
|
yEddfz9SgJ
|
Revisiting Semi-Supervised Learning in the Era of Foundation Models
|
[
"Ping Zhang",
"Zheda Mai",
"Quang-Huy Nguyen",
"Wei-Lun Chao"
] |
Semi-supervised learning (SSL) enhances model performance by leveraging abundant unlabeled data alongside limited labeled data. As vision foundation models (VFMs) become central to modern vision applications, this paper revisits SSL in the context of these powerful pre-trained models. We conduct a systematic study on tasks where frozen VFMs underperform and reveal several key insights when fine-tuning them. First, parameter-efficient fine-tuning (PEFT) using only labeled data often surpasses traditional SSL methods---even without access to unlabeled data. Second, pseudo-labels generated by PEFT models offer valuable supervisory signals for unlabeled data, and different PEFT techniques yield complementary pseudo-labels. These findings motivate a simple yet effective SSL baseline for the VFM era: *ensemble pseudo-labeling across diverse PEFT methods and VFM backbones*. Extensive experiments validate the effectiveness of this approach, offering actionable insights into SSL with VFMs and paving the way for more scalable and robust semi-supervised learning in the foundation model era.
|
https://openreview.net/forum?id=yDh9s1qPzB
|
Main
|
Poster
|
yDh9s1qPzB
|
Lie Detector: Unified Backdoor Detection via Cross-Examination Framework
|
[
"Xuan Wang",
"Siyuan Liang",
"Dongping Liao",
"Han Fang",
"Aishan Liu",
"Xiaochun Cao",
"Yu-liang Lu",
"Ee-Chien Chang",
"Xitong Gao"
] |
Institutions with limited data and computing resources often outsource model training to third-party providers in a semi-honest setting, assuming adherence to prescribed training protocols with a pre-defined learning paradigm (e.g., supervised or semi-supervised learning). However, this practice can introduce severe security risks, as adversaries may poison the training data to embed backdoors into the resulting model. Existing detection approaches predominantly rely on statistical analyses, which often fail to maintain accurate detection across different learning paradigms. To address this challenge, we propose a unified backdoor detection framework in the semi-honest setting that exploits cross-examination of model inconsistencies between two independent service providers. Specifically, we integrate centered kernel alignment to enable robust feature similarity measurements across different model architectures and learning paradigms, thereby facilitating precise recovery and identification of backdoor triggers. We further introduce backdoor fine-tuning sensitivity analysis to distinguish backdoor triggers from adversarial perturbations, substantially reducing false positives. Extensive experiments demonstrate that our method achieves superior detection performance, improving accuracy by 4.4%, 1.7%, and 10.6% over SoTA baselines across supervised, semi-supervised, and autoregressive learning tasks, respectively. Notably, it is the first to effectively detect backdoors in multimodal large language models, further highlighting its broad applicability and advancing secure deep learning.
|
https://openreview.net/forum?id=yBWRrqPwyN
|
Main
|
Poster
|
yBWRrqPwyN
|
FNOPE: Simulation-based inference on function spaces with Fourier Neural Operators
|
[
"Guy Moss",
"Leah Sophie Muhle",
"Reinhard Drews",
"Jakob H. Macke",
"Cornelius Schröder"
] |
Simulation-based inference (SBI) is an established approach for performing Bayesian inference on scientific simulators. SBI so far works best on low-dimensional parametric models. However, it is difficult to infer function-valued parameters, which frequently occur in disciplines that model spatiotemporal processes such as the climate and earth sciences. Here, we introduce an approach for efficient posterior estimation, using a Fourier Neural Operator (FNO) architecture with a flow matching objective. We show that our approach, FNOPE, can perform inference of function-valued parameters at a fraction of the simulation budget of state-of-the-art methods. In addition, FNOPE supports posterior evaluation at arbitrary discretizations of the domain, as well as simultaneous estimation of vector-valued parameters. We demonstrate the effectiveness of our approach on several benchmark tasks and a challenging spatial inference task from glaciology. FNOPE extends the applicability of SBI methods to new scientific domains by enabling the inference of function-valued parameters.
|
https://openreview.net/forum?id=yB5L6ryIkb
|
Main
|
Poster
|
yB5L6ryIkb
|
Coarse-to-Fine 3D Part Assembly via Semantic Super-Parts and Symmetry-Aware Pose Estimation
|
[
"Xinyi Zhang",
"Bingyang Wei",
"Ruixuan Yu",
"Jian Sun"
] |
We propose a novel two-stage framework, Coarse-to-Fine Part Assembly (CFPA), for 3D shape assembly from basic parts. Effective part assembly demands precise local geometric reasoning for accurate component assembly, as well as global structural understanding to ensure semantic coherence and plausible configurations. CFPA addresses this challenge by integrating semantic abstraction and symmetry-aware reasoning into a unified pose prediction process. In the first stage, semantic super-parts are constructed via an optimal transport formulation to capture high-level object structure, which is then propagated to individual parts through a dual-range feature propagation mechanism. The second stage refines part poses via cross-stage feature interaction and instance-level geometric encoding, improving spatial precision and coherence. To enable diverse yet valid assemblies, we introduce a symmetry-aware loss that jointly models both self-symmetry and inter-part geometric similarity, allowing for diverse but structurally consistent assemblies. Extensive experiments on the PartNet benchmark demonstrate that CFPA achieves state-of-the-art performance in assembly accuracy, structural consistency, and diversity across multiple categories.
|
https://openreview.net/forum?id=yAf2Akj1Wm
|
Main
|
Poster
|
yAf2Akj1Wm
|
AlphaBeta is not as good as you think: a simple random games model for a better analysis of deterministic game-solving algorithms
|
[
"Raphael Boige",
"Amine Boumaza",
"Bruno Scherrer"
] |
Deterministic game-solving algorithms are conventionally analyzed in the light of their average-case complexity against a distribution of random game-trees, where leaf values are independently sampled from a fixed distribution. This simplified model enables uncluttered mathematical analysis, revealing two key properties: root value distributions asymptotically collapse to a single fixed value for finite-valued trees, and all reasonable algorithms achieve global optimality. However, these findings are artifacts of the model’s design: its long-criticized independence assumption strips games of structural complexity, producing trivial instances where no algorithm faces meaningful challenges. To address this limitation, we introduce a simple probabilistic model that incrementally constructs game-trees using a fixed level-wise conditional distribution. By enforcing ancestor dependencies, a critical structural feature of real-world games, our framework generates problems with adjustable difficulty while retaining some form of analytical tractability. For several algorithms, including AlphaBeta and Scout, we derive recursive formulas characterizing their average-case complexities under this model. These allow us to rigorously compare algorithms on deep game-trees, where Monte-Carlo simulations are no longer feasible. While all algorithms asymptotically seem to converge to an identical branching factor (a result analogous to that of independence-based models), deep finite trees reveal stark differences: AlphaBeta incurs a significantly larger constant multiplicative factor compared to algorithms like Scout, leading to a substantial practical slowdown. Our framework sheds new light on classical game-solving algorithms, offering rigorous evidence and analytical tools to advance the understanding of these methods under a richer, more challenging, and yet tractable model.
|
https://openreview.net/forum?id=yADOHvnzXr
|
Main
|
Poster
|
yADOHvnzXr
|
Provably Efficient Multi-Task Meta Bandit Learning via Shared Representations
|
[
"Jiabin Lin",
"Shana Moothedath"
] |
Learning-to-learn or meta-learning focuses on developing algorithms that leverage prior experience to quickly acquire new skills or adapt to novel environments. A crucial component of meta-learning is representation learning, which aims to construct data representations capable of transferring knowledge across multiple tasks—a critical advantage in data-scarce settings. We study how representation learning can improve the efficiency of bandit problems. We consider $T$ $d$-dimensional linear bandits that share a common low-dimensional linear representation. We provide provably fast, sample-efficient algorithms to address the two key problems in meta-learning: (1) learning a common set of features from multiple related bandit tasks and (2) transferring this knowledge to new, unseen bandit tasks. We validate the theoretical results through numerical experiments using real-world and synthetic datasets, comparing them against benchmark algorithms.
|
https://openreview.net/forum?id=y9zhXirhCa
|
Main
|
Poster
|
y9zhXirhCa
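One common recipe for the first problem described in the entry above (learning a shared low-dimensional representation across $T$ linear bandit tasks) is to stack per-task ridge estimates and read the subspace off their top singular vectors. The sketch below follows that standard recipe under illustrative assumptions; it is not necessarily the estimator analyzed in the paper.

```python
import numpy as np

def estimate_shared_subspace(task_data, k, lam=1.0):
    """Hedged sketch: ridge-estimate each task's d-dimensional parameter,
    stack the estimates into a (d, T) matrix, and take the top-k left
    singular vectors as the shared feature subspace."""
    thetas = []
    for X, y in task_data:                      # X: (n_t, d), y: (n_t,)
        d = X.shape[1]
        theta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
        thetas.append(theta)
    Theta = np.stack(thetas, axis=1)            # (d, T)
    U, _, _ = np.linalg.svd(Theta, full_matrices=False)
    return U[:, :k]                             # (d, k) orthonormal basis

# Transfer to a new task: regress only k coefficients on projected
# features X_new @ B, where B = estimate_shared_subspace(...), instead of
# learning all d parameters from scratch.
```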
|
PID-controlled Langevin Dynamics for Faster Sampling on Generative Models
|
[
"Hongyi Chen",
"Jianhai Shu",
"Jingtao Ding",
"Yong Li",
"Xiao-Ping Zhang"
] |
Langevin dynamics sampling suffers from extremely low generation speed, fundamentally limited by the numerous fine-grained iterations required to converge to the target distribution. We introduce PID-controlled Langevin Dynamics (PIDLD), a novel sampling acceleration algorithm that reinterprets the sampling process using control-theoretic principles. By treating energy gradients as feedback signals, PIDLD combines historical gradients (the integral term) and gradient trends (the derivative term) to efficiently traverse energy landscapes and adaptively stabilize, thereby significantly reducing the number of iterations required to produce high-quality samples. Our approach requires no additional training, datasets, or prior information, making it immediately integrable with any Langevin-based method. Extensive experiments across image generation and reasoning tasks demonstrate that PIDLD achieves higher quality with fewer steps, making Langevin-based generative models more practical for efficiency-critical applications. The implementation can be found at https://github.com/tsinghua-fib-lab/PIDLD.
|
https://openreview.net/forum?id=y9LHDCKeeN
|
Main
|
Poster
|
y9LHDCKeeN
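A minimal sketch of the control-theoretic update described in the entry above: the energy gradient plays the role of the feedback signal, and the plain Langevin drift is replaced by a PID combination of the current gradient (P), its running sum (I), and its trend (D). Gains and step sizes here are illustrative, not the paper's settings.

```python
import numpy as np

def pid_langevin_sample(grad_energy, x0, steps=200, eta=1e-2,
                        kp=1.0, ki=0.1, kd=0.1):
    """Hedged sketch of PID-controlled Langevin dynamics.
    grad_energy(x) returns the gradient of the energy at x; kp=1, ki=kd=0
    recovers standard unadjusted Langevin dynamics."""
    x = x0.copy()
    integral = np.zeros_like(x0)   # accumulated (historical) gradients
    prev_g = np.zeros_like(x0)
    for _ in range(steps):
        g = grad_energy(x)
        integral += g              # I term: history of gradients
        deriv = g - prev_g         # D term: gradient trend
        control = kp * g + ki * integral + kd * deriv
        x = x - eta * control + np.sqrt(2 * eta) * np.random.randn(*x.shape)
        prev_g = g
    return x
```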
|
Differentiable Hierarchical Visual Tokenization
|
[
"Marius Aasan",
"Martine Hjelkrem-Tan",
"Nico Catalano",
"Changkyu Choi",
"Adín Ramírez Rivera"
] |
Vision Transformers rely on fixed patch tokens that ignore the spatial and semantic structure of images. In this work, we introduce an end-to-end differentiable tokenizer that adapts to image content with pixel-level granularity while remaining backward-compatible with existing architectures for retrofitting pretrained models. Our method uses hierarchical model selection with information criteria to provide competitive performance in both image-level classification and dense-prediction tasks, and even supports out-of-the-box raster-to-vector conversion.
|
https://openreview.net/forum?id=y8VWYf5cVI
|
Main
|
Spotlight
|
y8VWYf5cVI
|
OVS Meets Continual Learning: Towards Sustainable Open-Vocabulary Segmentation
|
[
"Dongjun Hwang",
"Yejin Kim",
"Minyoung Lee",
"Seong Joon Oh",
"Junsuk Choe"
] |
Open-Vocabulary Segmentation (OVS) aims to segment classes that are not present in the training dataset. However, most existing studies assume that the training data is fixed in advance, overlooking more practical scenarios where new datasets are continuously collected over time. To address this, we first analyze how existing OVS models perform under such conditions. In this context, we explore several approaches such as retraining, fine-tuning, and continual learning but find that each of them has clear limitations. To address these issues, we propose ConOVS, a novel continual learning method based on a Mixture-of-Experts framework. ConOVS dynamically combines expert decoders based on the probability that an input sample belongs to the distribution of each incremental dataset. Through extensive experiments, we show that ConOVS consistently outperforms existing methods across pre-training, incremental, and zero-shot test datasets, effectively expanding the recognition capabilities of OVS models when data is collected sequentially.
|
https://openreview.net/forum?id=y8Hv7EdcRF
|
Main
|
Poster
|
y8Hv7EdcRF
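A hedged sketch of the Mixture-of-Experts combination described in the entry above: each incremental dataset contributes an expert decoder, and outputs are blended by the probability that the input belongs to that dataset's distribution. How ConOVS actually estimates those probabilities is not specified in the abstract, so the per-dataset `density_models` below are an assumption of this sketch.

```python
import torch

def conovs_combine(feat, experts, density_models):
    """Blend expert decoders by estimated dataset-membership probability.
    feat: a single input's feature tensor. Each expert returns segmentation
    logits of identical shape; each density_models[i](feat) is assumed to
    return a scalar log-likelihood under increment i's distribution."""
    log_p = torch.stack([dm(feat) for dm in density_models])  # (E,)
    w = torch.softmax(log_p, dim=0)                           # (E,) weights
    outs = torch.stack([e(feat) for e in experts])            # (E, ...)
    w = w.view(-1, *([1] * (outs.dim() - 1)))                 # broadcast
    return (w * outs).sum(dim=0)
```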
|
ORIGAMISPACE: Benchmarking Multimodal LLMs in Multi-Step Spatial Reasoning with Mathematical Constraints
|
[
"Rui Xu",
"Dakuan Lu",
"Zicheng Zhao",
"Xiaoyu Tan",
"Xintao Wang",
"Siyu Yuan",
"Jiangjie Chen",
"Xu Yinghui"
] |
Spatial reasoning is a key capability in the field of artificial intelligence, especially crucial in areas such as robotics, computer vision, and natural language understanding. However, evaluating the ability of multimodal large language models (MLLMs) in complex spatial reasoning still faces challenges, particularly in scenarios requiring multi-step reasoning and precise mathematical constraints. This paper introduces ORIGAMISPACE, a new dataset and benchmark designed to evaluate the multi-step spatial reasoning ability of MLLMs and their capacity to handle mathematical constraints through origami tasks. The dataset contains 350 data instances, each comprising a strictly formatted crease pattern (CP diagram), the Compiled Flat Pattern, the complete Folding Process, and the final Folded Shape Image. We propose four evaluation tasks: Pattern Prediction, Multi-step Spatial Reasoning, Spatial Relationship Prediction, and End-to-End CP Code Generation. For the CP code generation task, we design an interactive environment and explore the possibility of using reinforcement learning methods to train MLLMs. Through experiments on existing MLLMs, we present an initial analysis of the strengths and weaknesses of these models in handling complex spatial reasoning tasks.
|
https://openreview.net/forum?id=y7ahj9RoXQ
|
Main
|
Spotlight
|
y7ahj9RoXQ
|
DSAS: A Universal Plug-and-Play Framework for Attention Optimization in Multi-Document Question Answering
|
[
"Jiakai Li",
"Rongzheng Wang",
"Yizhuo Ma",
"Shuang Liang",
"Guangchun Luo",
"Ke Qin"
] |
While large language models (LLMs) show considerable promise across various fields, they have notable limitations in handling multi-document question answering (Multi-doc QA) tasks. The first challenge is long-range dependency modeling, where LLMs struggle to focus on key information in long texts, which weakens important semantic connections. Second, most LLMs suffer from the “lost-in-the-middle” issue, where they have difficulty processing information in the middle of long inputs. Current solutions either truncate global dependencies or demand costly finetuning, ultimately lacking a universal and simple solution for these challenges. To resolve these limitations, we propose Dual-Stage Adaptive Sharpening (DSAS) containing two modules. (i) The Contextual Gate Weighting (CGW) module alleviates “lost-in-the-middle” by assessing paragraph relevance through layer-wise attention tracking and position-aware weighting. (ii) The Reciprocal Attention Suppression (RAS) module enhances focus on critical paragraphs by suppressing information exchange between key and irrelevant texts, thus mitigating the limitations in long-range dependency modeling. Extensive experiments on four benchmarks demonstrate DSAS's efficacy across mainstream LLMs (Llama, Qwen, Mistral, and Deepseek), with an average F1-score improvement of 4.2% in Multi-doc QA tasks on Llama-3.1-8B-Instruct and Qwen2.5-14B-Instruct. Ablation studies confirm the essential contributions of both the CGW and RAS modules. In addition, detailed discussions in the Appendix further validate the robustness and scalability of DSAS.
|
https://openreview.net/forum?id=y68Q09Vc4K
|
Main
|
Poster
|
y68Q09Vc4K
|
Multi-step Visual Reasoning with Visual Tokens Scaling and Verification
|
[
"Tianyi Bai",
"Zengjie Hu",
"Fupeng Sun",
"Qiu Jiantao",
"Yizhen Jiang",
"Guangxin He",
"Bohan Zeng",
"Conghui He",
"Binhang Yuan",
"Wentao Zhang"
] |
Multi-modal large language models (MLLMs) have achieved remarkable capabilities by integrating visual perception with language understanding, enabling applications such as image-grounded dialogue, visual question answering, and scientific analysis. However, most MLLMs adopt a static inference paradigm, encoding the entire image into fixed visual tokens upfront, which limits their ability to iteratively refine understanding or adapt to context during inference. This contrasts sharply with human perception, which is dynamic, selective, and feedback-driven.
In this work, we introduce a novel framework for inference-time visual token scaling that enables MLLMs to perform iterative, verifier-guided reasoning over visual content. We formulate the problem as a Markov Decision Process, involving a reasoner that proposes visual actions and a verifier—trained via multi-step Direct Preference Optimization (DPO)—that evaluates these actions and determines when reasoning should terminate. To support this, we present a new dataset, VTS, comprising supervised reasoning trajectories (VTS-SFT) and preference-labeled reasoning comparisons (VTS-DPO).
Our method significantly outperforms existing approaches across diverse visual reasoning benchmarks, offering not only improved accuracy but also more interpretable and grounded reasoning processes. These results demonstrate the promise of dynamic inference mechanisms for enabling fine-grained, context-aware visual reasoning in next-generation MLLMs. Code and datasets are publicly released at https://vts-v.github.io/.
|
https://openreview.net/forum?id=y60FhgO07j
|
Main
|
Poster
|
y60FhgO07j
|
Attention with Trained Embeddings Provably Selects Important Tokens
|
[
"Diyuan Wu",
"Aleksandr Shevchenko",
"Samet Oymak",
"Marco Mondelli"
] |
Token embeddings play a crucial role in language modeling but, despite this practical relevance, their theoretical understanding is limited. Our paper addresses the gap by characterizing the structure of embeddings obtained via gradient descent. Specifically, we consider a one-layer softmax attention model with a linear head for binary classification, i.e., $\mathrm{Softmax}( p^\top E_X^\top ) E_X v = \frac{ \sum_{i=1}^T \exp(p^\top E_{x_i}) E_{x_i}^\top v}{\sum_{j=1}^T \exp(p^\top E_{x_{j}}) }$, where $E_X = [ E_{x_1} , \dots, E_{x_T} ]^\top$ contains the embeddings of the input sequence, $p$ is the embedding of the $\mathrm{\langle cls \rangle}$ token and $v$ the output vector. First, we show that, already after a single step of gradient training with the standard logistic loss, the embeddings $E_X$ capture the importance of tokens in the dataset by aligning with the output vector $v$ proportionally to the corresponding average signed frequency that captures the relevance of tokens to the labels. Then, after training $p$ via gradient flow until convergence, the softmax selects the important tokens in the sentence (i.e., those that are predictive of the label), and the resulting $\mathrm{\langle cls \rangle}$ embedding maximizes the margin for such a selection. Experiments on real-world datasets (IMDB, Yelp) exhibit a phenomenology close to that unveiled by our theory.
|
https://openreview.net/forum?id=y5IUGnpDJ8
|
Main
|
Poster
|
y5IUGnpDJ8
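For concreteness, the one-layer model analyzed in the entry above can be written out directly. The sketch below evaluates $\mathrm{Softmax}(p^\top E_X^\top) E_X v$ for a single sequence, using a numerically stable softmax; it simply restates the abstract's formula in code.

```python
import numpy as np

def attn_classifier(E_x, p, v):
    """One-layer softmax attention with a linear head, as in the abstract.
    E_x: (T, d) token embeddings of the sequence; p: (d,) <cls> embedding;
    v: (d,) output vector. Returns the scalar whose sign gives the class."""
    logits = E_x @ p                    # (T,) scores p^T E_{x_i}
    w = np.exp(logits - logits.max())   # stable softmax over tokens
    w = w / w.sum()
    return w @ (E_x @ v)                # weighted sum of E_{x_i}^T v
```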
|
Asymptotic theory of SGD with a general learning-rate
|
[
"Or Goldreich",
"Ziyang Wei",
"SOHAM BONNERJEE",
"Jiaqi Li",
"Wei Biao Wu"
] |
Stochastic gradient descent (SGD) with polynomially decaying step‐sizes has long underpinned theoretical analyses, yielding a broad spectrum of statistically attractive guarantees. Yet in practice, such schedules find rare use due to their prohibitively slow convergence, revealing a persistent gap between theory and empirical performance. In this paper, we introduce a unified framework that quantifies the uncertainty of online SGD under arbitrary learning‐rate choices. In particular, we provide the first comprehensive convergence characterizations for two widely used but theoretically under-examined schemes—cyclical learning rates and linear decay to zero. Our results not only explain the observed behavior of these schedules but also facilitate principled tools for statistical inference and algorithm design. All theoretical findings are corroborated by extensive simulations across diverse settings.
|
https://openreview.net/forum?id=y5Diyh9XEQ
|
Main
|
Poster
|
y5Diyh9XEQ
|
Optimal Best Arm Identification under Differential Privacy
|
[
"Marc Jourdan",
"Achraf Azize"
] |
Best Arm Identification (BAI) algorithms are deployed in data-sensitive applications, such as adaptive clinical trials or user studies. Driven by the privacy concerns of these applications, we study the problem of fixed-confidence BAI under global Differential Privacy (DP) for Bernoulli distributions. While numerous asymptotically optimal BAI algorithms exist in the non-private setting, a significant gap remains between the best lower and upper bounds in the global DP setting. This work reduces this gap to a small multiplicative constant, for any privacy budget $\epsilon$. First, we provide a tighter lower bound on the expected sample complexity of any $\delta$-correct and $\epsilon$-global DP strategy. Our lower bound replaces the Kullback–Leibler (KL) divergence in the transportation cost used by the non-private characteristic time with a new information-theoretic quantity that optimally trades off between the KL divergence and the Total Variation distance scaled by $\epsilon$. Second, we introduce a stopping rule based on these transportation costs and a private estimator of the means computed using an arm-dependent geometric batching. En route to proving the correctness of our stopping rule, we derive concentration results of independent interest for the Laplace distribution and for the sum of Bernoulli and Laplace distributions. Third, we propose a Top Two sampling rule based on these transportation costs. For any budget $\epsilon$, we show an asymptotic upper bound on its expected sample complexity that matches our lower bound to a multiplicative constant smaller than $8$. Our algorithm outperforms existing $\delta$-correct and $\epsilon$-global DP BAI algorithms for different values of $\epsilon$.
|
https://openreview.net/forum?id=y4AXO2pFAh
|
Main
|
Poster
|
y4AXO2pFAh
|
Differentially Private Quantiles with Smaller Error
|
[
"Jacob Imola",
"Fabrizio Boninsegna",
"Hannah Keller",
"Anders Aamand",
"Amrita Roy Chowdhury",
"Rasmus Pagh"
] |
In the approximate quantiles problem, the goal is to output $m$ quantile estimates, the ranks of which are as close as possible to $m$ given quantiles $0 \leq q_1 \leq\dots \leq q_m \leq 1$.
We present a mechanism for approximate quantiles that satisfies $\varepsilon$-differential privacy for a dataset of $n$ real numbers where the ratio between the distance between the closest pair of points and the size of the domain is bounded by $\psi$.
As long as the minimum gap between quantiles is sufficiently large, $|q_i-q_{i-1}|\geq \Omega\left(\frac{m\log(m)\log(\psi)}{n\varepsilon}\right)$ for all $i$, the maximum rank error of our mechanism is $O\left(\frac{\log(\psi) + \log^2(m)}{\varepsilon}\right)$ with high probability.
Previously, the best known algorithm under pure DP was due to Kaplan, Schnapp, and Stemmer (ICML '22), who achieved a bound of $O\left(\frac{\log(\psi)\log^2(m) + \log^3(m)}{\varepsilon}\right)$.
Our improvement stems from the use of continual counting techniques which allows the quantiles to be randomized in a correlated manner.
We also present an $(\varepsilon,\delta)$-differentially private mechanism that relaxes the gap assumption without affecting the error bound, improving on existing methods when $\delta$ is sufficiently close to zero.
We provide experimental evaluation which confirms that our mechanism performs favorably compared to prior work in practice, in particular when the number of quantiles $m$ is large.
|
https://openreview.net/forum?id=y3Q3nod80m
|
Main
|
Poster
|
y3Q3nod80m
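The entry above credits the improvement to continual counting, which lets the noise across quantile queries be correlated. Below is a hedged sketch of the standard binary-tree continual-counting mechanism for $\varepsilon$-DP prefix sums over a discretized domain; it illustrates the tool being invoked, not the paper's full quantile mechanism.

```python
import numpy as np

def dp_prefix_sums(counts, eps):
    """Binary-tree ("continual counting") mechanism: add Laplace noise to
    every dyadic interval count, then assemble each prefix sum from
    O(log T) noisy nodes. Each item appears in at most levels+1 nodes, so
    Laplace scale (levels+1)/eps gives eps-DP."""
    counts = np.asarray(counts, dtype=float)
    T = len(counts)
    levels = int(np.ceil(np.log2(max(T, 2))))
    scale = (levels + 1) / eps
    tree = {}
    for lvl in range(levels + 1):
        width = 2 ** lvl
        for start in range(0, T, width):
            c = counts[start:start + width].sum()
            tree[(lvl, start)] = c + np.random.laplace(0.0, scale)
    prefix = np.zeros(T)
    for t in range(1, T + 1):          # greedy dyadic decomposition of [0, t)
        total, pos = 0.0, 0
        for lvl in range(levels, -1, -1):
            width = 2 ** lvl
            if pos + width <= t:
                total += tree[(lvl, pos)]
                pos += width
        prefix[t - 1] = total
    return prefix

# Quantile estimates then follow by inverting the noisy empirical CDF
# prefix / n at the desired ranks; the correlated noise is what keeps the
# maximum rank error across all m quantiles small.
```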
|
DreamLight: Towards Harmonious and Consistent Image Relighting
|
[
"Yong Liu",
"Wenpeng Xiao",
"Qianqian Wang",
"Junlin Chen",
"Shiyin Wang",
"Yitong Wang",
"Xinglong Wu",
"Yansong Tang"
] |
We introduce a model named DreamLight for universal image relighting in this work, which can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone. The background can be specified by natural images (image-based relighting) or generated from unlimited text prompts (text-based relighting). Existing studies primarily focus on image-based relighting, with scant exploration of text-based scenarios. Some works employ intricate disentanglement pipeline designs relying on environment maps to provide relevant information, which grapple with the expensive data cost required for intrinsic decomposition and light source estimation. Other methods take this task as an image translation problem and perform pixel-level transformation with an autoencoder architecture. While these methods have achieved decent harmonization effects, they struggle to generate realistic and natural light interaction effects between the foreground and background. To alleviate these challenges, we reorganize the input data into a unified format and leverage the semantic prior provided by the pretrained diffusion model to facilitate the generation of natural results. Moreover, we propose a Position-Guided Light Adapter (PGLA) that condenses light information from different directions in the background into designed light query embeddings, and modulates the foreground with direction-biased masked attention. In addition, we present a post-processing module named Spectral Foreground Fixer (SFF) to adaptively reorganize different frequency components of subject and relighted background, which helps enhance the consistency of foreground appearance. Extensive comparisons and a user study demonstrate that our DreamLight achieves remarkable relighting performance.
|
https://openreview.net/forum?id=y2wt5c1Uhu
|
Main
|
Poster
|
y2wt5c1Uhu
|
Speculative Jacobi-Denoising Decoding for Accelerating Autoregressive Text-to-image Generation
|
[
"Yao Teng",
"Fu-Yun Wang",
"Xian Liu",
"Zhekai Chen",
"Han Shi",
"Yu Wang",
"Zhenguo Li",
"Weiyang Liu",
"Difan Zou",
"Xihui Liu"
] |
As a new paradigm of visual content generation, autoregressive text-to-image models suffer from slow inference due to their sequential token-by-token decoding process, often requiring thousands of model forward passes to generate a single image. To address this inefficiency, we propose Speculative Jacobi-Denoising Decoding (SJD2), a framework that incorporates the denoising process into Jacobi iterations to enable parallel token generation in autoregressive models. Our method introduces a next-clean-token prediction paradigm that enables the pre-trained autoregressive models to accept noise-perturbed token embeddings and predict the next clean tokens through low-cost fine-tuning. This denoising paradigm guides the model towards more stable Jacobi trajectories. During inference, our method initializes token sequences with Gaussian noise and performs iterative next-clean-token-prediction in the embedding space. We employ a probabilistic criterion to verify and accept multiple tokens in parallel, and refine the unaccepted tokens for the next iteration with the denoising trajectory. Experiments show that our method can accelerate generation by reducing model forward passes while maintaining the visual quality of generated images.
|
https://openreview.net/forum?id=y2eWc6jrlu
|
Main
|
Poster
|
y2eWc6jrlu
|
Accurate and Efficient Low-Rank Model Merging in Core Space
|
[
"Aniello Panariello",
"Daniel Marczak",
"Simone Magistri",
"Angelo Porrello",
"Bartłomiej Twardowski",
"Andrew D. Bagdanov",
"Simone Calderara",
"Joost van de Weijer"
] |
In this paper, we address the challenges associated with merging low-rank adaptations of large neural networks. With the rise of parameter-efficient adaptation techniques, such as Low-Rank Adaptation (LoRA), model fine-tuning has become more accessible. While fine-tuning models with LoRA is highly efficient, existing merging methods often sacrifice this efficiency by merging fully-sized weight matrices. We propose the Core Space merging framework, which enables the merging of LoRA-adapted models within a common alignment basis, thereby preserving the efficiency of low-rank adaptation while substantially improving accuracy across tasks. We further provide a formal proof that projection into Core Space ensures no loss of information and provide a complexity analysis showing the efficiency gains. Extensive empirical results demonstrate that Core Space significantly improves existing merging techniques and achieves state-of-the-art results on both vision and language tasks while utilizing a fraction of the computational resources. Codebase is available at https://github.com/apanariello4/core-space-merging.
|
https://openreview.net/forum?id=y1z7SAS8q8
|
Main
|
Poster
|
y1z7SAS8q8
|
Continuous Thought Machines
|
[
"Luke Nicholas Darlow",
"Ciaran Regan",
"Sebastian Risi",
"Jeffrey Seely",
"Llion Jones"
] |
Biological brains demonstrate complex neural activity, where neural dynamics are critical to how brains process information. Most artificial neural networks ignore the complexity of individual neurons. We challenge that paradigm. By incorporating neuron-level processing and synchronization, we reintroduce neural timing as a foundational element. We present the Continuous Thought Machine (CTM), a model designed to leverage neural dynamics as its core representation. The CTM has two innovations: (1) neuron-level temporal processing, where each neuron uses unique weight parameters to process incoming histories; and (2) neural synchronization as a latent representation. The CTM aims to strike a balance between neuron abstractions and biological realism. It operates at a level of abstraction that effectively captures essential temporal dynamics while remaining computationally tractable. We demonstrate the CTM's performance and versatility across a range of tasks, including solving 2D mazes, ImageNet-1K classification, parity computation, and more. Beyond displaying rich internal representations and offering a natural avenue for interpretation owing to its internal process, the CTM is able to perform tasks that require complex sequential reasoning. The CTM can also leverage adaptive compute, where it can stop earlier for simpler tasks, or keep computing when faced with more challenging instances. The goal of this work is to share the CTM and its associated innovations, rather than pushing for new state-of-the-art results. To that end, we believe the CTM represents a significant step toward developing more biologically plausible and powerful artificial intelligence systems. We provide an accompanying [interactive online demonstration](https://pub.sakana.ai/ctm/) and an [extended technical report](https://pub.sakana.ai/ctm/paper).
|
https://openreview.net/forum?id=y0wDflmpLk
|
Main
|
Spotlight
|
y0wDflmpLk
|
Hierarchical Fine-grained Preference Optimization for Physically Plausible Video Generation
|
[
"Harold Haodong Chen",
"Haojian Huang",
"Qifeng Chen",
"Harry Yang",
"Ser-Nam Lim"
] |
Recent advancements in video generation have enabled the creation of high-quality, visually compelling videos. However, generating videos that adhere to the laws of physics remains a critical challenge for applications requiring realism and accuracy. In this work, we propose **PhysHPO**, a novel framework for Hierarchical Cross-Modal Direct Preference Optimization, to tackle this challenge by enabling fine-grained preference alignment for physically plausible video generation. PhysHPO optimizes video alignment across four hierarchical granularities: a) ***Instance Level***, aligning the overall video content with the input prompt; b) ***State Level***, ensuring temporal consistency using boundary frames as anchors; c) ***Motion Level***, modeling motion trajectories for realistic dynamics; and d) ***Semantic Level***, maintaining logical consistency between narrative and visuals. Recognizing that real-world videos are the best reflections of physical phenomena, we further introduce an automated data selection pipeline to efficiently identify and utilize *"good data"* from existing large-scale text-video datasets, thereby eliminating the need for costly and time-intensive dataset construction. Extensive experiments on both physics-focused and general capability benchmarks demonstrate that PhysHPO significantly improves physical plausibility and overall video generation quality of advanced models. To the best of our knowledge, this is the first work to explore fine-grained preference alignment and data selection for video generation, paving the way for more realistic and human-preferred video generation paradigms.
|
https://openreview.net/forum?id=y0SRR9XGlZ
|
Main
|
Poster
|
y0SRR9XGlZ
|
Accelerating Diffusion LLMs via Adaptive Parallel Decoding
|
[
"Daniel Mingyi Israel",
"Guy Van den Broeck",
"Aditya Grover"
] |
The generation speed of LLMs is bottlenecked by autoregressive decoding, where tokens are predicted sequentially one by one. Alternatively, diffusion large language models (dLLMs) theoretically allow for parallel token generation, but in practice struggle to achieve the speed of autoregressive models without significantly sacrificing quality. We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel. We achieve this by defining a multiplicative mixture between the dLLM marginal probabilities and the joint probability of sequences under a small auxiliary autoregressive model. This inverts the standard setup of speculative decoding, where the goal is to sample from a large autoregressive verifier by drafting from a smaller model. We further optimize APD by enabling KV caching and limiting the size of the masked input. Altogether, our method puts forward three tunable parameters to flexibly trade off throughput and quality. We show that APD provides markedly higher throughput with minimal quality degradations on downstream benchmarks.
|
https://openreview.net/forum?id=xwqTt26NJf
|
Main
|
Spotlight
|
xwqTt26NJf
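A hedged sketch of the acceptance idea described in the entry above: tokens proposed in parallel by the dLLM are scored by a multiplicative (geometric) mixture of the dLLM's per-token marginals and a small autoregressive model's probabilities, and the longest prefix whose score clears a threshold is kept. The mixture weight, threshold rule, and function names are assumptions of this sketch, not the paper's exact procedure.

```python
import torch

def adaptive_accept(marginals, ar_logprob_fn, proposal, mix=0.5, thresh=0.2):
    """Keep the longest prefix of a parallel-proposed token block whose
    mixture score stays above a threshold.
    marginals: (B,) dLLM marginal prob of each proposed token;
    ar_logprob_fn(proposal): (B,) per-token log-probs under the small AR
    model; proposal: (B,) proposed token ids."""
    ar_logp = ar_logprob_fn(proposal)
    # Multiplicative mixture: p_dLLM^mix * p_AR^(1-mix), computed in log space.
    score = (mix * marginals.log() + (1.0 - mix) * ar_logp).exp()
    accepted = 0
    for s in score.tolist():               # longest valid prefix
        if s < thresh:
            break
        accepted += 1
    return proposal[: max(accepted, 1)]    # always emit at least one token
```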
|
DyMU: Dynamic Merging and Virtual Unmerging for Efficient Variable-Length VLMs
|
[
"Zhenhailong Wang",
"Senthil Purushwalkam",
"Caiming Xiong",
"Silvio Savarese",
"Heng Ji",
"Ran Xu"
] |
We present DyMU, an efficient, training-free framework that dynamically reduces the computational burden of vision-language models (VLMs) while maintaining high task performance. Our approach comprises two key components. First, Dynamic Token Merging (DToMe) reduces the number of visual token embeddings by merging similar tokens based on image complexity, addressing the inherent inefficiency of fixed-length outputs in vision transformers. Second, Virtual Token Unmerging (VTU) simulates the expected token sequence for large language models (LLMs) by efficiently reconstructing the attention dynamics of a full sequence, thus preserving the downstream performance without additional fine-tuning.
Unlike previous approaches, our method dynamically determines token length based on the *image content*—not just resolution—and operates completely training-free, making it readily applicable to most state-of-the-art VLM architectures. Extensive experiments on image and video understanding tasks demonstrate that DyMU can reduce the average visual token count by 32%-85% while achieving comparable performance to full-length models across diverse VLM architectures. Furthermore, qualitative analyses show that the adaptive token reduction from DToMe aligns well with human perception and enables users to better control computational costs through flexible integration with additional vision tools and models.
|
https://openreview.net/forum?id=xvxgG668th
|
Main
|
Poster
|
xvxgG668th
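A rough sketch of similarity-threshold token merging in the spirit of the DToMe component described in the entry above: tokens are split into two alternating sets, each token in one set is matched to its most similar counterpart in the other, and pairs above a cosine-similarity threshold are averaged, so more redundant images end up with fewer tokens. The bipartite scheme follows common token-merging practice; the paper's image-complexity rule for choosing the threshold is not reproduced here.

```python
import torch
import torch.nn.functional as F

def merge_similar_tokens(x, threshold=0.9):
    """Hedged sketch of one merging step for a single image.
    x: (N, d) visual token embeddings; returns a shorter (N', d) tensor.
    Duplicate matches onto the same B token are resolved naively here."""
    a, b = x[::2], x[1::2]                        # alternating sets A and B
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T
    best, idx = sim.max(dim=-1)                   # best B-match per A token
    merge = best > threshold                      # only merge similar pairs
    b = b.clone()
    b[idx[merge]] = 0.5 * (b[idx[merge]] + a[merge])
    return torch.cat([a[~merge], b], dim=0)
```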
|
A Novel General Framework for Sharp Lower Bounds in Succinct Stochastic Bandits
|
[
"Guo Zeng",
"Jean Honorio"
] |
Many online learning applications adopt the stochastic bandit problem with a linear reward model, where the unknown parameter exhibits a succinct structure. We study minimax regret lower bounds, which allow us to determine whether more efficient algorithms can be proposed. We introduce a general definition of succinctness and propose a novel framework for constructing minimax regret lower bounds based on an information-regret trade-off. When applied to entry-sparse vectors, our framework sharpens a recent lower bound of Hao et al. (NeurIPS 2020). We further apply our framework to derive novel results. To the best of our knowledge, we provide the first lower bounds for the group-sparse and low-rank matrix settings.
|
https://openreview.net/forum?id=xvsPQuUHef
|
Main
|
Poster
|
xvsPQuUHef
|
Unlocking SLM Potential for Data Analysis Code Generation via Non-Parametric Knowledge Distillation
|
[
"Jinyang Li",
"Jack Williams",
"Nick McKenna",
"Arian Askari",
"Nicholas Wilson",
"Reynold Cheng"
] |
Knowledge distillation from Large Language Models (LLMs) to locally hosted Small Language Models (SLMs) provides advantages for Data Analysis Code Generation (DACG), such as privacy protection. However, achieving effective distillation without resource-intensive training is challenging. This paper investigates whether LLMs can distill knowledge to SLMs through In-Context Learning (ICL), a training-free method for rapid task adaptation. We present DarGO, a Distillation and Adaptive Reasoning-Guided Orchestration framework that facilitates automatic knowledge distillation from LLMs to SLMs. DarGO consists of three phases: exploration through a Model Orchestration Interface (MOI), Memory Collection of successful trajectories, and Knowledge-driven Inference. We evaluate DarGO on three challenging DACG benchmarks (WikiTQ, TabMWP, and Bird-SQL), each with in-domain training sets that enable detailed analysis of knowledge distillation effectiveness. DarGO demonstrates a substantial relative performance improvement of 27.5% on average for the student SLMs. To further assess generalization capabilities, we evaluate DarGO across different teacher-student model combinations, knowledge transfer scenarios, and unified memory approaches for more advanced, test-only data analysis tasks. Our findings contribute a novel perspective on distillation methods that achieve high performance for SLMs while avoiding intensive fine-tuning.
|
https://openreview.net/forum?id=xud9JYzgSp
|
Main
|
Poster
|
xud9JYzgSp
|