Paper_ID: rEQ8OiBxbZ
Question: Could you elaborate on how the local structures are reconstructed? What serves as the input for this process: a single embedding from the TokenGT-3D output, or a collection of embeddings from local structure segmentations within a single molecule?
# 3D Molecular Pretraining via Localized Geometric Generation

Anonymous authors. Paper under double-blind review.

## Abstract

Self-supervised learning on 3D molecular structures has gained prominence in AI-driven drug discovery due to the high cost of annotating biochemical data. However, few studies have examined the selection of proper semantic modeling units within 3D molecular data, which is critical for an expressive pretrained model, as verified in natural language processing and computer vision. In this study, we introduce Localized Geometric Generation (LEGO), a novel approach that treats tetrahedrons within 3D molecular structures as fundamental modeling blocks, leveraging their simplicity in three dimensions and their prevalence in molecular structural patterns such as carbon skeletons and functional groups. Inspired by masked language/image modeling, LEGO perturbs a portion of the tetrahedrons and learns to reconstruct them during pretraining. The reconstruction of the perturbed local structures is a two-step process: spatial orientation prediction and internal arrangement generation. First, we predict the global orientation of each perturbed local structure within the whole molecule, equipping the model with positional information for these foundational components. Then, we geometrically reconstruct the internal arrangements of the perturbed local structures, revealing their functional semantics. To address the atom-bond inconsistency problem of previous denoising methods and to exploit the prior knowledge carried by chemical bonds, we propose to model the graph as a set of nodes and edges and to generate the edges explicitly during pretraining. In this way, LEGO both encodes structural geometry features and leverages the expressiveness of self-supervised learning. Extensive experiments on molecular quantum and biochemical property prediction tasks demonstrate the effectiveness of our approach.

## 1 Introduction

Understanding 3D molecular structures is crucial for various tasks in drug discovery, such as molecular property prediction (Wu et al., 2018; Hu et al., 2021; Chmiela et al., 2023), binding affinity prediction (Öztürk et al., 2018; Ru et al., 2022), and docking-based generation (Ma et al., 2021; Yang et al., 2021). In recent years, self-supervised learning on 3D molecular structures has been extensively explored to learn from large collections of unlabeled compounds, which helps overcome the costly and time-consuming process of annotating biochemical properties. As demonstrated in natural language processing and computer vision, a careful selection of minimal semantic building blocks is critical for developing an expressive and robust pretrained model. By providing well-structured units, the model can effectively identify underlying patterns and extract meaningful semantics from data compositions during pretraining. However, few existing 3D molecular pretraining methods have studied this aspect.

Existing 3D molecular pretraining methods fall into two categories: representation-level and structure-level. Representation-level methods aim to enhance 2D molecular representations by leveraging information from 3D molecular structures through contrastive learning (Liu et al., 2021a; Stärk et al., 2022). Such methods use 3D molecular structures only at the encoding stage and fail to model inherent structural features through self-supervised training.
Structure-level methods address this limitation by developing pre-training tasks based on coordinate denoising, where independent noise is added to the coordinates of all atoms in the graph and the model is trained to reconstruct the original atomic positions (Zaidi et al., 2022; Liu et al., 2022b; Zhou et al., 2023; Jiao et al., 2023; Feng et al., 2023). However, from a chemical perspective, an atom alone can hardly serve as a functional unit in molecules. Therefore, atom-wise denoising provides limited improvement in the model's understanding of functional substructures.

Figure 1: Local structures consisting of a central atom and its one-hop neighbors form a highly prevalent motif in molecules, which underlies (a) carbon backbones, (b) functional groups, and more.

In this paper, we focus on this open issue and propose a novel pretraining approach as an initial exploration. Our method, called Localized Geometric Generation (LEGO), treats tetrahedrons within 3D molecular structures as fundamental building blocks and tailors two pretraining tasks to learn their semantics. There are two key conceptual motivations behind this design. Geometrically, the tetrahedron is the simplest polyhedron that can be constructed in 3D Euclidean space, serving as the base case for more complex polyhedra. This structural simplicity and primitiveness align with the ubiquity of the tetrahedral motif in chemistry: a central atom along with its one-hop neighbors forms a highly prevalent local structure in molecules, which underlies carbon backbones, functional groups, and more (Figure 1). Therefore, tetrahedrons can be considered an excellent basic semantic unit for 3D molecular modeling from both geometric and chemical perspectives.

Inspired by masked language/image modeling techniques (Devlin et al., 2019; Dosovitskiy et al., 2020), LEGO introduces perturbations to a portion of the tetrahedrons in a 3D molecular structure and learns to reconstruct them during pretraining. In particular, we begin by segmenting a 3D molecular structure into a non-overlapping stack of one-hop local tetrahedral structures. Subsequently, we add noise or apply masks to part of the segmented local structures. The reconstruction of the perturbed local structures involves two steps: global orientation prediction and local structure generation. In the orientation prediction step, we predict the spherical coordinates of the center of mass (CoM) of each masked tetrahedron. This prediction provides positional information about the local structures and their relationships within the whole molecule. In the local generation step, we introduce a geometric generation task to accurately reconstruct the atom arrangements within each masked tetrahedron, which focuses on learning the patterns and semantics of the unit itself. By combining these steps, LEGO learns both global and local features of 3D molecular geometry in a self-supervised manner.

Although the design above allows explicit modeling of geometric features in 3D molecular data, most existing 3D molecular graph models are node-based: edges are represented as additional node features and are not explicitly modeled. Such backbones can lead to an atom-bond inconsistency problem during the denoising-generation process (Peng et al., 2023). Specifically, when generating 3D structures, atom-based networks first produce atom positions and then add the chemical bonds in a post-processing manner.
This sequential approach may result in intermediate atom positions that are infeasible for forming bonds, leading to unrealistic topologies such as extra-large rings or violations of atom valency constraints. This atom-bond inconsistency presents a challenge for our pretraining approach, which focuses on reconstructing local molecular structures. In fact, bonds are critical abstract concepts in molecules: they quantify distance-dependent interaction forces between atoms and encode key chemical semantics, and therefore play a critical role in modeling molecular local structures. To address the inconsistency, we propose modeling the molecular graph as a set of nodes and edges. During pretraining, LEGO generates the edges explicitly, allowing it to learn the significant chemical and geometric priors embedded in the bonding patterns.

The contributions of this work can be summarized as follows:

- We propose a novel self-supervised learning method for 3D molecular structures. Our approach treats tetrahedrons as the fundamental building blocks within 3D structures and introduces two pretraining tasks that enable the learning of local and global semantics in a geometric manner.
- We address the atom-bond inconsistency problem encountered in previous denoising methods by modeling the molecular graph as a set of nodes and edges. This representation leverages the prior knowledge of chemical bonds, facilitating the accurate representation of molecular structures.
- We demonstrate the effectiveness of our method through comprehensive experiments. We pretrain LEGO on a large-scale dataset and evaluate the pretrained model on biochemical and quantum property prediction tasks. The results show that our approach captures molecular functional semantics well and achieves results comparable to Transformer variants with sophisticated graph-specific inductive biases.

## 2 Related Works

**3D Molecular Structure Modeling.** 3D modeling of molecular structures has been extensively explored in recent years, enabled by advances in graph neural networks (GNNs) (Wu et al., 2020; Han et al., 2022). Early work by SchNet (Schütt et al., 2017) incorporates atomic distances into continuous-filter convolutional layers to capture local atomic correlations. DimeNet (Klicpera et al., 2020) pioneers the incorporation of bond angles and directionality into vanilla GNNs, demonstrating improved performance. SphereNet (Liu et al., 2021b) and ComENet (Wang et al., 2022) introduce spherical messages to build more informative representations. To encode 3D equivariance as an inductive bias grounded in group theory, Tensor Field Networks (Thomas et al., 2018), SE(3)-Transformers (Fuchs et al., 2020), and NequIP (Batzner et al., 2022) employ tensor products, while PaiNN (Schütt et al., 2021) and EGNN (Satorras et al., 2021) adopt equivariant message passing. Beyond message passing neural networks (MPNNs), the powerful Transformer architecture (Vaswani et al., 2017) has also been explored for graph-structured data. Dwivedi & Bresson (2020) first introduce a fully-connected transformer for graphs and use Laplacian eigenvectors as node positional encodings. GRPE (Park et al., 2022) and Graphormer (Ying et al., 2021) define structural positional encodings based on node topology, node-edge interactions, and 3D distances. Besides positional encodings, GraphTrans (Wu et al., 2021), EGT (Hussain et al., 2022), and GraphGPS (Rampášek et al., 2022) propose hybrid architectures with stacked MPNN layers before the global attention layer.
Notably, TokenGT (Kim et al., 2022) demonstrates that standard Transformers without graph-specific modifications can also achieve promising results in graph learning. Despite the success of directly incorporating 3D features into the model input, there remains a need to develop pretraining paradigms for 3D molecular structures that can learn semantic features in a self-supervised manner.

**Pretraining on 3D Molecular Structures.** Existing pre-training methods for 3D molecular structures can be categorized into two types: representation-level and structure-level. Representation-level methods use separate encoders to embed 2D graphs and 3D structures into two views, then perform contrastive learning (Stärk et al., 2022) or generative self-supervised learning (Liu et al., 2021a) on the two embeddings. Such methods focus on the 2D graph representation and treat 3D information as a complement to its 2D counterpart, ignoring spatial features that are more informative in determining molecular properties. Structure-level denoising tasks fill this gap by involving geometric elements in the pretraining tasks. Liu et al. (2022b), Zaidi et al. (2022), Zhou et al. (2023), and Feng et al. (2023) employ denoising tasks on atomic coordinates and explore how the scale and distribution of the added noise impact the results. Zhu et al. (2022) propose masked modeling that predicts the coordinates of masked atoms from the corresponding 2D features. GEM (Fang et al., 2022) and 3D-PGT (Wang et al., 2023) use geometric features as pretraining objectives, but they implement random masking. Different from these studies, we underscore the modeling of local semantic units in 3D molecular pretraining.

## 3 Method

### 3.1 Motivation

Our objective is to develop a segmentation approach that effectively decomposes 3D molecular structures into suitable units for representation learning. These units need to strike a balance between two crucial factors. On one hand, the units should encapsulate the critical details of the local molecular environment in a way that downstream models can further analyze for property prediction. On the other hand, overly complex or molecule-specific representations could limit the applicability of the approach across different chemical spaces. Therefore, we aim to identify structurally meaningful yet simple decompositions that carry rich semantics, similar to how tokens and patches serve as universal elements for natural language processing and computer vision models.

Figure 2: Overview of LEGO. **I.** Based on non-terminal atoms, we segment 3D molecular structures into building blocks of one-hop local structures (LS). We perturb a portion of the LS by adding noise to atomic positions and masking the edge features. **II.** We pre-train LEGO by geometrically reconstructing the perturbed local structures in two stages.

Our proposed solution is to take tetrahedrons (one-hop local structures in general cases) as the fundamental building blocks. Geometrically, the tetrahedron is the simplest polyhedron that can be constructed in 3D space, serving as the base case for more complex polyhedra. This structural simplicity aligns with the widespread occurrence of the tetrahedral motif in chemical compounds, as depicted in Figure 1. In carbon skeletons and many functional groups, tetrahedral centers with a maximum valency of four allow diverse atoms to form intricate molecular structures while minimizing spatial constraints.
It is worth pointing out that the local structures of actual molecules may not always conform to a standard tetrahedral shape, and our segmentation strategy is adjusted to accommodate this variability. For center atoms with fewer than four neighbors, such as the C, N, and O in Figure 1(b), we simply treat the ketone, amino, or ether group as a degraded tetrahedron. For instances where center atoms form more than four bonds, such as sulfur and phosphorus, we incorporate all one-hop atoms into the local structure. Additionally, cyclic structures like benzene are handled by selecting non-adjacent carbons, representing the ring as a combination of its triangular fragments. By retaining this adaptive treatment of atypical cases while concentrating on tetrahedra, the algorithm aims to balance simplicity and practical applicability across diverse chemical spaces.

### 3.2 TokenGT and Its 3D Extension

Most existing graph neural networks adopt an atom-centric approach, where edge features are encoded as additional attributes and then aggregated to atoms through message passing. However, in chemistry, chemical bonds play a crucial role: they abstract distance-based interatomic forces and provide essential chemical priors for local structure modeling. Neglecting edges in molecular generation can lead to the atom-bond inconsistency problem, resulting in undesirable molecular structures, as demonstrated by Peng et al. (2023) and Qiang et al. (2023). To mitigate potential negative effects of atom-based modeling on our generative pre-training approach, in this section we briefly review the architecture of TokenGT and describe a minor extension that adapts it to 3D data.

**TokenGT.** TokenGT, short for Tokenized Graph Transformer, has been shown both theoretically and empirically to yield promising results in graph learning. It demonstrates that, with augmented embeddings, standard Transformers can effectively handle graph data without extensive graph-specific modifications (Kim et al., 2022). Given an input graph \( G = (V, E) \), TokenGT first initializes the node set \( V = \{v_1, ..., v_n\} \) and the edge set \( E = \{e_1, ..., e_m\} \) as token embeddings \( X^V \in \mathbb{R}^{n \times d} \) and \( X^E \in \mathbb{R}^{m \times d} \). Each token in \( X \) is then augmented with predefined orthonormal token identifiers, which represent graph connectivity, and trainable type identifiers, which encode whether a token is a node or an edge.

**Token Identifier.** Given an input graph \( G = (V, E) \), \( n \) node-wise orthonormal vectors \( P \in \mathbb{R}^{n \times d_p} \) are produced and concatenated after the token embeddings: for node \( v \in V \), the token \( X_v \) is augmented as \([X_v, P_v, P_v]\); for edge \((u, v) \in E \), the token \( X_{(u,v)} \) is augmented as \([X_{(u,v)}, P_u, P_v]\). Thanks to orthonormality, a Transformer can tell whether an edge \( e = (u, v) \) is incident to a node \( k \) through the dot product (attention), since \([P_u, P_v][P_k, P_k]^\top = 1\) if and only if \( k \in \{u, v\} \), and \( 0 \) otherwise. Through this design, TokenGT incorporates the connectivity between nodes and edges. For a theoretical analysis of the completeness and informativeness of these token identifiers, please refer to the original paper.
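To make the token-identifier mechanism concrete, the following is a minimal sketch of our own (not code from the paper): it builds orthonormal node identifiers from a QR decomposition, assuming \( d_p \geq n \), and checks the incidence property via dot products.

```python
import torch

def orthonormal_ids(n, d_p):
    # n orthonormal identifier vectors in R^{d_p}; requires d_p >= n.
    q, _ = torch.linalg.qr(torch.randn(d_p, d_p))  # q has orthonormal rows
    return q[:n]

n, d_p = 5, 8
P = orthonormal_ids(n, d_p)
u, v, k = 0, 1, 3
edge_id = torch.cat([P[u], P[v]])                 # identifier part of an edge token
node_id = lambda i: torch.cat([P[i], P[i]])       # identifier part of a node token
# The dot product is 1 exactly when the node is an endpoint of the edge:
print(torch.dot(edge_id, node_id(u)).item())      # ~1.0 (incident)
print(torch.dot(edge_id, node_id(k)).item())      # ~0.0 (not incident)
```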
**Type Identifier.** Given an input graph \( G = (V, E) \), TokenGT applies a trainable matrix \( E = [E^V; E^E] \in \mathbb{R}^{2 \times d_e} \) to augment the tokens as follows: for node \( v \in V \), the token becomes \([X_v, P_v, P_v, E^V]\); for edge \((u, v) \in E \), the token becomes \([X_{(u,v)}, P_u, P_v, E^E]\). With token identifiers and type identifiers, the initialized token embeddings \( X = [X^V; X^E] \in \mathbb{R}^{(n+m) \times d} \) are augmented to \( X^{in} \in \mathbb{R}^{(n+m) \times (d+2d_p+d_e)} \). TokenGT then passes the input to a standard Transformer encoder with vanilla multi-head self-attention layers, where a \([\text{CLS}]\) token is additionally concatenated to obtain the graph embedding for downstream finetuning.

**3D Extension.** To align with our geometric pretraining objectives, we propose a minor extension of the original 2D TokenGT formulation to accommodate 3D molecular graphs. Let \( G = (V, E, P) \) be a 3D graph, where \( P = \{p_1, ..., p_n\} \), \( p_i \in \mathbb{R}^3 \), is the set of atom Cartesian coordinates. We augment the initial embedding \( X_{(u,v)} \) of edge \( e_{(u,v)} \) with the bond length, bond angles, and dihedral angles related to \( e_{(u,v)} \), embedded with radial/spherical harmonic basis functions \( e_{\text{RBF}} / e_{\text{SBF}} \):

- Bond length: \( X_{bl(u,v)} = e_{\text{RBF}}(\|p_v - p_u\|) \)
- Bond angle: \( X_{ba(u,v)} = \sum_k e_{\text{SBF}}^{(uv,uk)}, \; k \in N(u) \setminus v \)
- Dihedral angle: \( X_{da(u,v)} = \sum_{k,j} e_{\text{SBF}}^{(kuv,uvj)}, \; k \in N(u) \setminus v, \; j \in N(v) \setminus u \)
- Augmented edge embedding: \( X_{3D(u,v)} = X_{(u,v)} + X_{bl(u,v)} + X_{ba(u,v)} + X_{da(u,v)} \)

**Algorithm 1: Local Structure Reconstruction in LEGO**

Require:
- \( G \): input graph \( G = (V, E, P) \) with \( n \) nodes and \( m \) edges.
- \( M_{\text{center}} \in \{0, 1\}^n \), \( M_{\text{edge}} \in \{0, 1\}^m \), \( M_{\text{leaf}} \in \{0, 1\}^n \): mask indicators for center atoms, edges, and leaf atoms.
- \( \text{Emb} \in \mathbb{R}^{(n+m) \times \text{dim}} \): embeddings of the tokens in \( G \) after a standard Transformer encoder.
- \( \text{LEGOHead}_i, i \in \{1, 2, 3, 4\} \): network modules for reconstructing the perturbed local structures; the four heads predict the global orientation of center atoms, the lengths of edges, the azimuthal angles of leaf atoms, and the polar angles of leaf atoms, respectively.
- Labels: ground-truth values of the geometric elements \( z, l, \theta, \psi \).
- \( T \): number of training steps.

1: while \( T \neq 0 \) do
2: &nbsp;&nbsp;Pad \( M_{\text{center}}, M_{\text{edge}}, M_{\text{leaf}} \) to size \([n + m, 1]\)
3: &nbsp;&nbsp;\( z_{\text{pred}} = \text{LEGOHead}_1(\text{Emb}[M_{\text{center}}]) \)
4: &nbsp;&nbsp;\( l_{\text{pred}} = \text{LEGOHead}_2(\text{Emb}[M_{\text{edge}}]) \)
5: &nbsp;&nbsp;\( \theta_{\text{pred}} = \text{LEGOHead}_3(\text{Emb}[M_{\text{leaf}}]) \)
6: &nbsp;&nbsp;\( \psi_{\text{pred}} = \text{LEGOHead}_4(\text{Emb}[M_{\text{leaf}}]) \)
7: &nbsp;&nbsp;\( \text{Loss} = w_{\text{distance}} \cdot \text{MSELoss}\big((z, l), (z_{\text{pred}}, l_{\text{pred}})\big) + w_{\text{angle}} \cdot \text{VonMisesLoss}\big((\theta, \psi), (\theta_{\text{pred}}, \psi_{\text{pred}})\big) \)
8: &nbsp;&nbsp;Optimize(Loss)
9: &nbsp;&nbsp;\( T = T - 1 \)
10: end while
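A minimal sketch of Algorithm 1's heads and loss might look as follows (our own illustration, not the authors' code). We stand in for the von Mises(-Fisher) angular loss with the \(1 - \cos\) surrogate, which matches the angle-dependent term of the von Mises negative log-likelihood at a fixed concentration parameter; the head widths and the 3-dimensional orientation output are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LEGOHeads(nn.Module):
    """Four reconstruction heads, assumed to be linear probes on the
    Transformer token embeddings Emb of shape [(n+m), dim]."""
    def __init__(self, dim):
        super().__init__()
        self.orient = nn.Linear(dim, 3)  # spherical CoM coords of centers (z)
        self.length = nn.Linear(dim, 1)  # edge lengths (l)
        self.theta = nn.Linear(dim, 1)   # azimuthal angles of leaf atoms
        self.psi = nn.Linear(dim, 1)     # polar angles of leaf atoms

def angular_loss(pred, target):
    # 1 - cos surrogate for the von Mises NLL with fixed concentration.
    return (1.0 - torch.cos(pred - target)).mean()

def lego_loss(emb, masks, labels, heads, w_dist=1.0, w_angle=1.0):
    # masks: boolean tensors over the (n+m) tokens; labels: ground truths.
    z = heads.orient(emb[masks["center"]])
    l = heads.length(emb[masks["edge"]]).squeeze(-1)
    th = heads.theta(emb[masks["leaf"]]).squeeze(-1)
    ps = heads.psi(emb[masks["leaf"]]).squeeze(-1)
    dist = F.mse_loss(z, labels["z"]) + F.mse_loss(l, labels["l"])
    ang = angular_loss(th, labels["theta"]) + angular_loss(ps, labels["psi"])
    return w_dist * dist + w_angle * ang
```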
### 3.3 Pretrain via Localized Geometric Generation

At a high level, our method first segments the 3D molecular structure into non-overlapping, one-hop local structures. We then perturb a proportion of these units with a corruption strategy that masks token attributes and simultaneously adds noise to node coordinates. Subsequently, we reconstruct the perturbed local structures in a generative way by predicting their global orientations and local geometric arrangements. Figure 2 visualizes the workflow of our method.

**Local Structure Segmentation.** The core idea of local structure segmentation is that no segmented units may overlap; that is, a leaf node in one local structure cannot be the center node of another local structure, although two local structures may share leaf nodes. To elaborate, we first traverse the graph nodes in a BFS order \( \pi \), collect the non-terminal nodes as \( V_{\text{non-terminal}} \), and initialize a boolean tensor \( f_{\text{segmented}} = \mathbf{0}^n \). Then, we sample a node \( u \) from \( V_{\text{non-terminal}} \) to form a local structure: we add \( u \) to \( V_{\text{seg-center}} \) and set the flags of its one-hop neighbors, \( f_{\text{segmented}}[v] = \text{True}, v \in N(u) \). We repeat this operation until all atoms in \( V_{\text{non-terminal}} \) have been segmented (a code sketch of this procedure appears at the end of this subsection). Though our segmentation algorithm is randomized and may occasionally leave out terminal atoms, we view this as a way to increase generalizability and robustness. By sampling different central nodes during segmentation, the model is encouraged to learn more holistic representations rather than relying on a fixed decomposition across multiple pretraining iterations. Terminal atoms that are initially excluded from segmented units are likely to be incorporated eventually through successive iterations that segment their tetrahedron-like neighborhoods.

**Local Structure Perturbation.** Given the segmentation result \( V_{\text{seg-center}} \) of a molecular graph, we randomly perturb local structures with ratio \( m_{LS} \) and obtain the set of masked centers \( V_{\text{mask-center}} \) and an indicator tensor \( M_{\text{center}} \in \{0, 1\}^n \). Since we mask all the nodes and edges in the selected local structures, the mask ratio over all tokens (atoms and edges), \( m_{\text{token}} \), differs from \( m_{LS} \); the statistical relationship between the two mask ratios is given in Appendix A. Based on the masked centers, we denote the remaining parts of the perturbed local structures as \( E_{\text{mask-edge}} = \{(u, v) \mid u \text{ or } v \in V_{\text{mask-center}}\} \) and \( V_{\text{mask-leaf}} = \{v \mid (u, v) \in E_{\text{mask-edge}} \text{ for } u \in V_{\text{mask-center}}\} \), along with \( M_{\text{edge}} \in \{0, 1\}^m \) and \( M_{\text{leaf}} \in \{0, 1\}^n \). We then perturb by adding coordinate noise to the atoms in \( V_{\text{mask-center}} \) and \( V_{\text{mask-leaf}} \) and masking the edge attributes in \( E_{\text{mask-edge}} \).

**Local Structure Reconstruction.** To successfully reconstruct the perturbed local structures, we must consider two critical aspects: the global orientation of each local structure within the entire molecule, and the internal arrangement of nodes and edges within a local structure.
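The segmentation procedure described above might be sketched as follows (our own illustration under the stated rules; the graph is assumed connected and given as an adjacency dictionary).

```python
import random
from collections import deque

def segment_local_structures(adj, seed=0):
    """Partition a molecular graph into non-overlapping one-hop local
    structures: each returned center plus its one-hop neighbors forms one
    (possibly degraded) tetrahedral unit."""
    rng = random.Random(seed)
    nodes = list(adj)
    # BFS traversal from an arbitrary root to fix an order pi.
    order, seen, queue = [], {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    non_terminal = [u for u in order if len(adj[u]) > 1]
    segmented = set()   # f_segmented: nodes already covered as center or leaf
    centers = []
    candidates = non_terminal[:]
    rng.shuffle(candidates)  # randomness -> different decompositions per epoch
    for u in candidates:
        if u in segmented:
            continue  # a leaf of one unit cannot become another unit's center
        centers.append(u)
        segmented.add(u)
        segmented.update(adj[u])  # leaves may still overlap across units
    return centers

# Example: isobutane-like graph (atom 0 bonded to atoms 1, 2, 3).
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(segment_local_structures(adj))  # -> [0]
```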
Table 1: Results for biochemical property prediction tasks. BACE, BBBP, Clintox, SIDER, and Tox21 are classification tasks (ROC-AUC ↑); Freesolv, Esol, and Lipo are regression tasks (MAE ↓). We compare our model with existing 2D and 3D molecular pretraining models. The best and second-best results are **bold** and _underlined_.

| model | BACE | BBBP | Clintox | SIDER | Tox21 | Freesolv | Esol | Lipo |
|-----------|----------|----------|----------|----------|----------|-----------|-----------|-----------|
| AttrMask | _84.5_ | 68.7 | 72.6 | 62.7 | 78.1 | 2.764 | 1.100 | 0.739 |
| GROVER | 81.0 | 69.5 | 76.2 | 65.4 | 68.2 | 2.272 | 0.895 | 0.823 |
| MolCLR | 82.4 | 72.2 | 91.2 | 58.9 | 75.0 | 2.594 | 1.271 | 0.691 |
| 3DInfomax | 79.4 | 69.1 | 9.4 | 53.3 | 74.4 | 2.337 | 0.894 | 0.695 |
| GraphMVP | 81.2 | _72.4_ | 79.1 | 63.9 | 75.9 | - | 1.029 | 0.681 |
| GEM | **85.6** | 72.2 | 90.1 | _67.2_ | _80.6_ | _1.877_ | _0.798_ | _0.660_ |
| Uni-Mol | **85.6** | _72.4_ | _91.9_ | 65.9 | 79.6 | **1.620** | 0.788 | **0.603** |
| 3D-PGT | 80.9 | 72.1 | 79.4 | 60.6 | 73.8 | - | 1.061 | 0.687 |
| LEGO | 81.9 | **74.2** | **94.3** | **72.3** | **83.9** | 1.844 | **0.704** | 0.804 |

Regarding spatial orientation, we predict the spherical coordinates of the central atoms of masked local structures. These coordinates indicate where each unit is positioned within the overall molecule and how it is oriented relative to other units. For the internal geometry, the previously predicted central atom serves as the origin of a spherical coordinate system (SCS). We then predict the radial distance ($r$, the edge length), azimuthal angle ($\theta$), and polar angle ($\psi$) of each masked peripheral atom within this SCS. Edge lengths are predicted directly, as they relate closely to bond types, while the angular values guide the subsequent reconstruction of the three-dimensional coordinates of the peripheral atoms. The local structure reconstruction procedure is summarized in Algorithm 1. We use the mean squared error as the loss function for edge lengths and radii, and adopt the von Mises-Fisher loss to train the angle-related terms.

## 4 Experiments

### 4.1 Datasets and Experimental Setup

**Pre-training.** We pretrain LEGO on the OGB-PCQM4Mv2 dataset (Hu et al., 2021), which contains 3D molecular structures simulated by density functional theory (DFT). The dataset has 3.38 million molecules, each with one dominant equilibrium conformation. While considering multiple conformations can describe 3D molecular structures more comprehensively and improve representability (Liu et al., 2021a; Stärk et al., 2022), we believe that learning molecular semantics from the dominant conformation is sufficient to validate our method; handling multiple conformations is left for future work. We follow the Transformer encoder configuration of the original TokenGT base model: 12 layers, embedding dimension 768, 32 attention heads, and Graph Laplacian node identifiers. We mask $m_{LS} = 10\%$ of the local structures and set the scale of the coordinate noise to 0.3. The weights of the distance loss $w_{\text{distance}}$ and the angle loss $w_{\text{angle}}$ are both set to 1. We use the AdamW optimizer with $(\beta_1, \beta_2) = (0.99, 0.999)$ and a weight decay of 0.1. We apply a polynomial learning-rate scheduler with a peak learning rate of $2 \times 10^{-4}$ and 150k warm-up steps over 1M iterations with a batch size of 256. The model is pretrained on 8 NVIDIA A100s for 300 epochs.

**Fine-tuning.** We use the $[\text{CLS}]$ token as the graph representation for downstream finetuning and pass it through a two-layer MLP projection head for task predictions. We evaluate the pretrained model on biochemical and quantum molecular properties.
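A minimal sketch of the fine-tuning head described above; the hidden width and activation are our assumptions, not specified in the paper.

```python
import torch.nn as nn

class FinetuneHead(nn.Module):
    """Two-layer MLP projection head on the [CLS] embedding."""
    def __init__(self, dim, n_targets, hidden=768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, n_targets))

    def forward(self, cls_emb):   # cls_emb: [batch, dim]
        return self.mlp(cls_emb)  # logits (classification) or values (regression)
```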
Biochemical properties test how well the model captures semantics from the segmented units within a molecule, while quantum properties test the model's ability to represent 3D structures in terms of interatomic interactions. For biochemical properties, we choose the widely used MoleculeNet benchmark (Wu et al., 2018), whose tasks can be categorized into physical chemistry, biophysics, and physiology. The original MoleculeNet dataset contains only 2D data, and existing 3D pretraining baselines take 2D graphs as input as well; we follow this setting to demonstrate the transferability of our pretrained model.

Table 2: Results on the PCQM4Mv2 validation set in the OGB Large-Scale Challenge (Hu et al., 2021), evaluated by mean absolute error (MAE). The best result is in **bold**.

| model | #param. | Valid MAE (↓) |
|------------------------|---------|---------------|
| GraphGPS<sub>SMALL</sub> (Rampášek et al., 2022) | 6.2M | 0.0938 |
| GRPE<sub>BASE</sub> (Park et al., 2022) | 46.2M | 0.0890 |
| EGT (Hussain et al., 2022) | 89.3M | 0.0869 |
| GRPE<sub>LARGE</sub> (Park et al., 2022) | 46.2M | 0.0867 |
| Graphormer (Ying et al., 2021) | 47.1M | 0.0864 |
| GraphGPS<sub>MEDIUM</sub> (Rampášek et al., 2022) | 19.4M | 0.0858 |
| GraphGPS<sub>DEEP</sub> (Rampášek et al., 2022) | 13.8M | 0.0852 |
| GEM-2 (Liu et al., 2022a) | 32.1M | 0.0793 |
| Transformer-M (Luo et al., 2022) | 47.1M | 0.0787 |
| GPS++<sub>BASE</sub> (Masters et al., 2022) | 44.3M | 0.0778 |
| 3D-PGT (Wang et al., 2023) | 42.6M | **0.0762** |
| TokenGT (Kim et al., 2022) | 48.5M | 0.0910 |
| LEGO (ours) | 52.7M | 0.0817 |

Following previous works (Zhu et al., 2022; Fang et al., 2022), the datasets are split by molecular scaffold in an 8:1:1 ratio (a sketch follows at the end of this subsection). We use Bayesian search with a maximum of 64 trials to find the best hyper-parameter combination. For quantum properties, we choose OGB-LSC PCQM4Mv2 (Hu et al., 2021) as the benchmark. Given 3D molecular structures, the task is to predict the HOMO-LUMO gap of each molecule, an important quantum property that has been shown to correlate closely with macroscopic molecular properties. Since the test set is not open-sourced, we report the validation MAE, as most methods do.

**Baselines.** For MoleculeNet, we mainly compare LEGO with existing state-of-the-art 3D-based pretrained models (Stärk et al., 2022; Liu et al., 2021a; Fang et al., 2022; Zhu et al., 2022). We also select three typical 2D-graph pretraining models to illustrate the effectiveness of leveraging 3D geometry information: AttrMask (Hu et al., 2019), GROVER (Rong et al., 2020), and GraphCL (You et al., 2020). For quantum property prediction, our baselines cover current state-of-the-art methods, including GraphGPS (Rampášek et al., 2022), GRPE (Park et al., 2022), EGT (Hussain et al., 2022), Graphormer (Ying et al., 2021), Transformer-M (Luo et al., 2022), GPS++ (Masters et al., 2022), and 3D-PGT (Wang et al., 2023).
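One common way to implement the 8:1:1 scaffold split mentioned above is via Bemis-Murcko scaffolds (a sketch under that assumption, using RDKit; this is not necessarily the authors' exact procedure).

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds.MurckoScaffold import MurckoScaffoldSmiles

def scaffold_split(smiles_list, frac=(0.8, 0.1, 0.1)):
    """Deterministic scaffold split: molecules sharing a Bemis-Murcko
    scaffold never cross the train/valid/test boundary."""
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        groups[MurckoScaffoldSmiles(smiles=smi)].append(i)
    # Assign the largest scaffold groups first, keeping each group intact.
    buckets = sorted(groups.values(), key=len, reverse=True)
    n = len(smiles_list)
    train, valid, test = [], [], []
    for b in buckets:
        if len(train) + len(b) <= frac[0] * n:
            train += b
        elif len(valid) + len(b) <= frac[1] * n:
            valid += b
        else:
            test += b
    return train, valid, test
```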
### 4.2 Main Experimental Results

In this section, we evaluate our pretrained model on the two property prediction tasks and analyze what the model gains from our structured pretraining. For biochemical properties, we achieve state-of-the-art results on 5 out of 8 tasks and comparable performance on 2 additional tasks (Table 1). Specifically, LEGO shows significantly improved performance on physiological properties such as toxicity, indicating that our method effectively captures functional semantics in molecular structures. LEGO also achieves strong results on Freesolv and Esol, which concern the behavior of molecules in an aqueous environment. However, it underperforms on Lipo, which concerns a lipid environment. This difference in transfer performance may stem from the significant difference between the conformations molecules adopt in a lipid environment and the equilibrium conformations used in our pretraining. Again, these results validate our motivation that exploiting functional semantics through a proper segmentation of molecular structures is vital.

Table 2 shows the validation results on PCQM4Mv2 for quantum property prediction. Although LEGO improves over the non-pretrained TokenGT by 10.2%, it lags behind the state-of-the-art results. We argue that this is because all the other baselines introduce complicated graph-specific encodings into the model, whereas we use a pure Transformer backbone. The primary contribution of this work is to offer a first look at how the proper selection of semantic units impacts 3D molecular pretraining, and we believe that further introducing graph inductive biases would improve our results.

### 4.3 Ablation Studies

In this section, we ablate the key design elements of the proposed LEGO pretraining paradigm.

**Mask Ratio and Noise Scale.** Zaidi et al. (2022) and Feng et al. (2023) point out that, in molecular denoising pretraining, excessive noise often leads to training divergence and detrimental effects. Does this conclusion still hold for our structured pretraining? The ablation results in Table 3 give a positive answer: performance on PCQM4Mv2 decreases as the mask ratio and noise scale of the local structure (LS) perturbation increase. We attribute this trend to the greater difficulty of reconstructing the original data when more extensive corruption is introduced across larger molecular fractions during pre-training. Specifically, higher mask ratios produce more perturbed local structures, while larger noise scales further distort the original topology of the units. With excessive corruption, preserving the original structural semantics for reconstruction becomes more challenging, limiting the gains from the pre-training phase for downstream transfer.

**Random vs. Structured.** To ablate the effect of our structured design, we adopt random masking on atoms with \( m_{\text{atom}} = 0.36 \), which corresponds to the structured setting \( m_{\text{LS}} = 0.1 \). Table 4 shows that naive atomic-level noise leads to inferior performance compared to LEGO's incorporation of structural semantics during perturbation and reconstruction, quantifying the gains of a chemistry-aware, structure-based procedure for enhancing molecular representations through self-supervised objectives.

## 5 Conclusion

In this paper, we propose a novel approach for self-supervised learning on 3D molecular structures. By treating tetrahedrons within 3D molecular structures as fundamental building blocks, we implement structured denoising to capture both local and global features. We also address the atom-bond inconsistency problem by explicitly modeling edges in the molecular graph.
Through pretraining, our approach achieves competitive results on both biochemical and quantum molecular property prediction tasks. In the future, we aim to investigate integrating additional graph inductive biases into the model while retaining explicit edge representations. Furthermore, we plan to validate the proposed segmentation strategy across a broader range of molecular structures and explore alternative perturbation techniques.

Table 3: Ablation results on PCQM4M-v2 for different \( m_{\text{LS}} \) and noise scales.

| \( m_{\text{LS}} \) | noise scale | equivalent \( m_{\text{atom}} \) | Valid MAE |
|-------------------|-------------|-------------------------------|-----------|
| 0.1 | 0.3 | 0.36 | **0.0817**|
| 0.1 | 1.0 | 0.36 | 0.0862 |
| 0.15 | 0.3 | 0.57 | 0.0877 |
| 0.2 | 0.3 | 0.77 | 0.0885 |

Table 4: Comparison of random and structured pretraining on PCQM4M-v2.

| Model | Valid MAE |
|------------------------|-----------|
| LEGO | **0.0817**|
| randomly perturbed | 0.0883 |

## References

Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. *Nature Communications*, 13(1):2453, 2022.

Stefan Chmiela, Valentin Vassilev-Galindo, Oliver T Unke, Adil Kabylda, Huziel E Sauceda, Alexandre Tkatchenko, and Klaus-Robert Müller. Accurate global machine learning force fields for molecules with hundreds of atoms. *Science Advances*, 9(2):eadf0873, 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171-4186, 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. *arXiv preprint arXiv:2012.09699*, 2020.

Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. ChemRL-GEM: Geometry enhanced molecular representation learning for property prediction. *arXiv preprint arXiv:2106.06130*, 2021.

Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. *Nature Machine Intelligence*, 4(2):127-134, 2022.

Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, and Wei-Ying Ma. Fractional denoising for 3d molecular pre-training. In *International Conference on Machine Learning*, pp. 9938-9961. PMLR, 2023.

Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. SE(3)-transformers: 3d roto-translation equivariant attention networks. *Advances in Neural Information Processing Systems*, 33:1970-1981, 2020.

Jiaqi Han, Yu Rong, Tingyang Xu, and Wenbing Huang. Geometrically equivariant graph neural networks: A survey. *arXiv preprint arXiv:2202.07230*, 2022.
Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. *arXiv preprint arXiv:1905.12265*, 2019.

Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. OGB-LSC: A large-scale challenge for machine learning on graphs. *arXiv preprint arXiv:2103.09430*, 2021.

Md Shamim Hussain, Mohammed J Zaki, and Dharmashankar Subramanian. Global self-attention as a replacement for graph convolution. In *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, pp. 655-665, 2022.

Rui Jiao, Jiaqi Han, Wenbing Huang, Yu Rong, and Yang Liu. Energy-motivated equivariant pre-training for 3d molecular graphs. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 8096-8104, 2023.

Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. Pure transformers are powerful graph learners. *arXiv preprint arXiv:2207.02505*, 2022.

Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. *arXiv preprint arXiv:2003.03123*, 2020.
Paper_ID: s6bKLlF4Pe
Question: I am doubtful about the significance of the convergence results. The convergence rate with GPI follows the same rate as the convergence rate without GPI, and it is hard to tell directly what the difference in the constants is. A thorough discussion with some examples would give readers a better understanding of the upper bound.
# Provable Knowledge Transfer using Successor Features for Deep Reinforcement Learning

Anonymous authors. Paper under double-blind review.

## Abstract

This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics. In this setting, the Q-function of each RL problem (a.k.a. task) can be decomposed into a successor feature (SF) and a reward mapping: the former characterizes the transition dynamics, and the latter characterizes the task-specific reward function. This Q-function decomposition, coupled with a policy improvement operator known as generalized policy improvement (GPI), reduces the search space for finding the optimal Q-function, and the SF & GPI framework exhibits promising empirical performance compared to traditional RL methods such as Q-learning. However, its theoretical foundations remain largely unestablished, especially when the successor features are learned with deep neural networks (SF-DQN). This paper studies provable knowledge transfer using SF-DQN in transfer RL problems. We establish the first convergence analysis with provable generalization guarantees for SF-DQN with GPI. The theory shows that SF-DQN with GPI outperforms conventional RL approaches, such as the deep Q-network, in terms of both a faster convergence rate and better generalization. Numerical experiments on real and synthetic RL tasks support the superior performance of SF-DQN & GPI, quantitatively aligning with our theoretical findings.

## 1 Introduction

In reinforcement learning (RL), the goal is to train an agent to perform a task within an environment in a desirable manner by allowing the agent to interact with the environment. The agent is guided toward the desirable behavior by rewards, and the optimal policy is derived from a learned value function (Q-function) that selects the best actions to maximize immediate and future rewards. This framework effectively captures a wide array of real-world applications, such as gaming (Mnih et al., 2013; Silver et al., 2017), robotics (Kalashnikov et al., 2018), autonomous vehicles (Shalev-Shwartz et al., 2016; Schwarting et al., 2018), healthcare (Coronato et al., 2020), and natural language processing (Tenney et al., 2018). However, RL agents require a significant number of interactions with the environment to tackle complex tasks, especially when RL is equipped with deep neural networks (DNNs). For example, AlphaGo (Silver et al., 2017) required 29 million matches and 5000 TPUs at a cost exceeding $35 million, which is time-consuming and memory-intensive. Nevertheless, many complex real-world problems naturally decompose into multiple interrelated sub-problems, all sharing the same environment dynamics (Sutton et al., 1999; Bacon et al., 2017; Kulkarni et al., 2016a). In such scenarios, it is highly advantageous for an agent to harness knowledge acquired from previous tasks to enhance its performance on new but related challenges. This practice of leveraging knowledge from one task to improve performance on others is known as transfer learning (Lazaric, 2012; Taylor & Stone, 2009; Barreto et al., 2017). This paper focuses on an RL setting with multiple tasks, where each task is associated with a different reward function but shares the same environment. This setting arises naturally in many real-world applications such as robotics (Yu et al., 2020).
We explore knowledge transfer among multiple tasks via the successor feature (SF) framework (Barreto et al., 2017), which disentangles the environment dynamics from the reward function at an incremental computational cost. The SF framework is derived from the successor representation (SR) (Dayan, 1993) by introducing value function approximation. Specifically, SR (Dayan, 1993) decouples the value function into a future state occupancy measure and a reward mapping: the future state occupancy measure characterizes the transition dynamics of the environment, and the reward mapping characterizes the reward function of the task. SF is a natural application of SR to value function approximation. Furthermore, Barreto et al. (2017) propose a generalization of classic policy improvement, termed generalized policy improvement (GPI), enabling smooth knowledge transfer across learned policies. In contrast to traditional policy improvement, which typically considers only a single policy, GPI maintains a set of policies, each associated with a distinct skill the agent has acquired. This enables the agent to switch among these policies based on the current state or task requirements, providing a flexible and adaptive framework for decision-making. Empirical findings in (Barreto et al., 2017) highlight the superior transfer performance of SF & GPI in deep RL compared to conventional methods like deep Q-networks (DQNs). Subsequent works further justified the improved performance of SF in subgoal identification (Kulkarni et al., 2016b) and real-world robot navigation (Zhang et al., 2017). While performance guarantees for SF-based learning exist in the simple tabular setting (Barreto et al., 2017; 2018), less is known about such approaches in the widely used function approximation setting. In this context, this paper aims to close this gap by providing theoretical guarantees for SF learning with DNNs. Our objective is to develop convergence and generalization analyses of SF paired with DNN approximation. We also seek to delineate the conditions under which SF learning offers more effective knowledge transfer among tasks than classical deep reinforcement learning (DRL) approaches, e.g., DQN (Mnih et al., 2013).

**Contributions.** This paper presents the first convergence analysis with generalization guarantees for successor feature learning with deep neural network approximation (SF-DQN). We estimate the optimal Q-function through the successor feature decomposition, where the successor feature component is approximated by a deep neural network. The paper offers a comprehensive convergence analysis of deep Q-networks with successor feature decomposition and provides insights into the improved performance of the Q-function learned through this decomposition. The key contributions of this study are as follows:

C1. The convergence analysis of the proposed SF-DQN to the optimal Q-function with generalization guarantees. By decomposing the reward into a linear combination of the transition feature and the reward mapping, we demonstrate that the optimal Q-function can be learned by alternately updating the reward mapping and the successor feature using the data collected in online RL.
This learned Q-function converges to the optimal Q-function, with generalization guarantees, at a rate of $1/T$, where $T$ is the number of iterations for updating the transition features and reward mappings.

C2. A theoretical characterization of the enhanced performance obtained by leveraging knowledge from previous tasks through GPI. This paper characterizes the convergence rate, with generalization guarantees, of transfer RL with GPI. The convergence rate improves with the degree of correlation between the source and target tasks.

C3. A theoretical characterization of the superior transfer performance of SF-DQN over the non-representation-learning approach, DQN. This paper quantifies the transfer learning ability of the SF-DQN and DQN algorithms by evaluating their generalization error when transferring knowledge from one task to another. Our results indicate that SF-DQN achieves improved generalization compared to DQN, demonstrating the superiority of SF-DQN in transfer RL.

### 1.1 Related Works

**Successor features in RL.** In pioneering work, Dayan (1993) introduced the concept of SR, demonstrating that the value function can be decomposed into a reward mapping and a state representation that measures future state occupancy from a given state, with a learning feasibility proof in tabular settings. Subsequently, Barreto et al. (2017) extended SR in three respects: (1) the feature domain of SR is extended from states to state-action pairs, yielding SF; (2) DNNs are deployed as function approximators to represent the SF and reward mappings; (3) the GPI algorithm is introduced to accelerate policy transfer across multiple tasks. Barreto et al. (2017; 2018) provide transfer guarantees for Q-learning with SF and GPI in the tabular setting. Furthermore, SF learning with DNN-based schemes has been applied to subgoal identification (Kulkarni et al., 2016b) and robot navigation (Zhang et al., 2017). A comprehensive comparison of RL transfer using SF under different assumptions can be found in (Zhu et al., 2023).

**RL with neural networks.** Recent advances in RL with neural network approximation mainly build on the Bellman Eluder dimension (Jiang et al., 2017; Russo & Van Roy, 2013), the neural tangent kernel (NTK) (Yang et al., 2020; Cai et al., 2019; Xu & Gu, 2020; Du et al., 2020), and Besov regularity (Suzuki, 2019; Ji et al., 2022; Nguyen-Tang et al., 2022). However, each of these frameworks has its own limitations. The Eluder dimension exhibits exponential growth even for shallow neural networks (Dong et al., 2021), making it challenging to characterize the sample complexity of real-world DRL applications. The NTK framework linearizes DNNs to bypass the non-convexity introduced by the non-linear activation functions, but it requires computationally inefficient, extremely wide neural networks (Yang et al., 2020). Moreover, the NTK approach falls short of explaining the advantages of non-linear neural networks over linear function approximation (Liu et al., 2022; Fan et al., 2020). The Besov space framework (Ji et al., 2022; Nguyen-Tang et al., 2022; Liu et al., 2022; Fan et al., 2020) requires sparse neural networks and makes the impractical assumption that the algorithm can identify the global optimum, which is unattainable for non-convex objective functions involving neural networks.
**Theory of generalization in deep learning.** The theory of generalization in deep learning has been developed extensively for supervised learning, where labeled data is available throughout training. Generalization of a learned model requires both a low training error and a small generalization gap. In DNNs, training errors and generalization gaps are analyzed separately due to their non-convex nature. To ensure bounded generalization, it is common to focus on one-hidden-layer neural networks (Safran & Shamir, 2018) in convergence analyses. Existing theoretical tools for supervised learning with generalization guarantees draw heavily from several frameworks, including the neural tangent kernel (NTK) framework (Jacot et al., 2018; Du et al., 2018; Lee et al., 2018), model recovery techniques (Zhong et al., 2017; Ge et al., 2018; Bakshi et al., 2019; Soltanolkotabi et al., 2018; Zhang et al., 2020), and the analysis of structured data (Li & Liang, 2018; Shi et al., 2022; Brutzkus & Globerson, 2021; Allen-Zhu & Li, 2022; Karp et al., 2021; Wen & Li, 2021).

## 2 Preliminaries

In this paper, we address the learning problem involving multiple tasks \( \{\mathcal{T}_i\}_{i=1}^n \) and aim to find the optimal policy \( \pi_i^* \) for each task \( \mathcal{T}_i \). We begin with the preliminaries for a single task and elaborate on our algorithm for learning with multiple tasks in the following section.

**Markov decision process and Q-learning.** The Markov decision process (MDP) is defined as a tuple \((\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)\), where \( \mathcal{S} \) is the state space and \( \mathcal{A} \) is the set of possible actions. The transition operator \( \mathcal{P} : \mathcal{S} \times \mathcal{A} \rightarrow \Delta(\mathcal{S}) \) gives the probability of transitioning from the current state \( s \) and action \( a \) to the next state \( s' \). The function \( r : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [-R_{\max}, R_{\max}] \) measures the reward of a given transition. The discount factor \( \gamma \in [0, 1) \) determines the significance of future rewards. For the \( i \)-th task, the goal of the agent is to find the optimal policy \( \pi_i^* \) with \( a_t = \pi_i^*(s_t) \) at each time step \( t \), maximizing the expected discounted sum of rewards \( \sum_{t=0}^{\infty} \gamma^t r_i(s_t, a_t, s_{t+1}) \), where \( r_i \) denotes the reward function of the \( i \)-th task. For any state-action pair \((s, a)\), the action-value function \( Q_i^\pi \) of a policy \( \pi \) is defined as

\[ Q_i^\pi(s, a) = \mathbb{E}_{\pi, \mathcal{P}} \left[ \sum_{t=0}^{\infty} \gamma^t r_i(s_t, a_t, s_{t+1}) \,\middle|\, s_0 = s, a_0 = a \right]. \quad (1) \]

The optimal Q-function, denoted \( Q_i^* \), satisfies the Bellman equation

\[ Q_i^*(s, a) := \max_\pi Q_i^\pi(s, a) = \mathbb{E}_{s' \sim \mathcal{P}(\cdot \mid s, a)} \left[ r_i(s, a, s') + \gamma \max_{a'} Q_i^*(s', a') \right]. \quad (2) \]

Through the optimal action-value function \( Q_i^* \), the agent derives the optimal policy (Watkins & Dayan, 1992; Sutton & Barto, 2018) as

\[ \pi_i^*(s) = \arg\max_a Q_i^*(s, a). \quad (3) \]

**Deep Q-networks (DQNs).** The DQN utilizes a DNN parameterized by weights \( \omega \), denoted \( Q_i(s, a; \omega) : \mathbb{R}^d \rightarrow \mathbb{R} \) for the \( i \)-th task, to approximate the optimal Q-function \( Q_i^* \) in (2). Specifically, given the input feature \( x := x(s, a) \), the output of the \( L \)-hidden-layer DNN is defined as

\[ Q_i(s, a; \omega) := \frac{1}{K}\, \omega_{L+1}^\top \sigma\big(\omega_L^\top \cdots \sigma(\omega_1^\top x)\big), \quad (4) \]

where \( \sigma(\cdot) \) is the ReLU activation function, i.e., \( \sigma(z) = \max\{0, z\} \).
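A minimal implementation of the architecture in Eq. (4) might look as follows (our sketch; bias-free linear layers and the explicit \(1/K\) output scaling follow the equation, everything else is an assumption).

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """L-hidden-layer ReLU network computing Q_i(s, a; omega) per Eq. (4)."""
    def __init__(self, d, K, L):
        super().__init__()
        dims = [d] + [K] * L
        self.hidden = nn.ModuleList(
            nn.Linear(dims[l], dims[l + 1], bias=False) for l in range(L))
        self.out = nn.Linear(K, 1, bias=False)  # omega_{L+1}
        self.K = K

    def forward(self, x):             # x = x(s, a), shape [batch, d]
        for layer in self.hidden:
            x = torch.relu(layer(x))  # sigma(omega_l^T ...)
        return self.out(x).squeeze(-1) / self.K

q = QNetwork(d=16, K=64, L=3)
print(q(torch.randn(4, 16)).shape)    # -> torch.Size([4])
```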
**Successor features.** For the \( i \)-th task, suppose the expected one-step reward associated with the transition \((s, a, s')\) can be computed as

\[ r_i(s, a, s') = \phi(s, a, s')^\top w_i^*, \quad \text{with} \quad \phi, w_i^* \in \mathbb{R}^d, \quad (5) \]

where \( \phi \) remains the same for all tasks. With the reward function in (5), the Q-function in (1) can be rewritten as

\[ Q_i^\pi(s, a) = \mathbb{E}_{\pi, \mathcal{P}} \left[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \,\middle|\, (s_0, a_0) = (s, a) \right]^\top w_i^* := \psi_i^\pi(s, a)^\top w_i^*. \quad (6) \]

The optimal Q-function then satisfies

\[ Q_i^*(s, a) = \mathbb{E}_{\pi_i^*, \mathcal{P}} \left[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \,\middle|\, (s_0, a_0) = (s, a) \right]^\top w_i^* := \psi_i^*(s, a)^\top w_i^*. \quad (7) \]

## 3 Problem Formulation and Algorithm

**Problem formulation.** Without loss of generality, the data is assumed to be collected from the tasks in the order \( \mathcal{T}_1 \) to \( \mathcal{T}_n \) during the learning process. The goal is to utilize the data collected for each task, e.g., \( \mathcal{T}_j \), together with the knowledge learned from the previous tasks \( \{\mathcal{T}_i\}_{i=1}^{j-1} \), to derive the optimal policy \( \pi_j^* \) for \( \mathcal{T}_j \). The tasks share the same environment dynamics, but the reward function changes across tasks as shown in (5). For each task \( \mathcal{T}_i \), we write its reward as

\[ r_i = \phi^\top w_i^*, \quad \text{with} \; \|\phi\|_2 \leq \phi_{\max}, \quad (8) \]

where \( \phi \) is the transition feature shared across all tasks and \( w_i^* \) is the reward mapping. From (7), learning the optimal Q-function for the \( i \)-th task decomposes into two sub-tasks: learning the SF \( \psi_i^*(s, a) \) and learning the reward mapping \( w_i^* \).

**Reward mapping.** To find the optimal \( w_i^* \), we utilize the information from \( \phi(s, a, s') \) and \( r_i(s, a, s') \). The value of \( w_i^* \) can be obtained by solving the optimization problem

\[ \min_{w_i} \| r_i - \phi^\top w_i \|_2. \quad (9) \]

**Successor features.** We use \( \psi_i^\pi \) to denote the successor feature for the \( i \)-th task, and \( \psi_i^\pi \) satisfies

\[ \psi_i^\pi(s, a) = \mathbb{E}_{s' \mid s, a} \left[ \phi(s, a, s') + \gamma \cdot \psi_i^\pi(s', \pi(s')) \right]. \quad (10) \]

The expression in (10) mirrors the Bellman equation in (2), with \( \phi \) playing the role of the reward. Therefore, following DQNs, we parameterize \( \psi_i(s, a) \) with a DNN as

\[ \psi_i(\Theta_i; s, a) = H(\Theta_i; x(s, a)), \quad (11) \]

where \( x : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}^d \) is the feature mapping of the state-action pair. Without loss of generality, we assume \( \|x(s, a)\| \leq 1 \). Finding \( \psi_i^* \) then amounts to minimizing the mean squared Bellman error (MSBE)

\[ \min_{\Theta_i} f(\Theta_i) := \mathbb{E}_{(s, a) \sim \pi_i^*} \Big[ \psi_i(\Theta_i; s, a) - \mathbb{E}_{s' \mid s, a} \big( \phi(s, a, s') + \gamma \cdot \psi_i(\Theta_i; s', \pi_i^*(s')) \big) \Big]^2. \quad (12) \]

It is worth mentioning that although (12) and (9) appear to be independent of each other, the update of \( w_i \) does affect the update of \( \psi_i \) through a shift in the data distribution: the data is collected by a behavior policy that depends on the current estimates of \( \psi_i \) and \( w_i \), which shifts the distribution of the collected data away from that of \( \pi_i^* \) and thereby introduces a \( w_i \)-dependent bias into the gradient of \( \Theta_i \) when minimizing (12).
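To illustrate the decomposition in Eqs. (6)-(7), the following toy script (our own example with a hypothetical \(\phi\)) estimates \(\psi^\pi(s_0, a_0)\) from a single trajectory by Monte Carlo and shows that one successor feature prices the same trajectory under two different reward mappings.

```python
import numpy as np

def mc_successor_feature(phi, rollout, gamma=0.9):
    """Monte Carlo estimate of psi^pi(s0, a0): the discounted sum of
    transition features along one trajectory generated by policy pi."""
    d = phi(*rollout[0]).shape[0]
    psi = np.zeros(d)
    for t, (s, a, s_next) in enumerate(rollout):
        psi += gamma ** t * phi(s, a, s_next)
    return psi

# Hypothetical transition feature shared by all tasks.
phi = lambda s, a, s_next: np.array([s, a, s_next], dtype=float)
rollout = [(0.0, 1.0, 0.5), (0.5, 0.0, 1.0), (1.0, 1.0, 0.0)]
psi = mc_successor_feature(phi, rollout)
w1, w2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.5])
print(psi @ w1, psi @ w2)  # Q-values of the same psi under two reward mappings
```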
**Generalized policy improvement (GPI).** Suppose we have acquired knowledge of the optimal successor features for the previous \( n \) tasks, and let \( \hat{\psi}_i \) denote the estimated successor feature function for the \( i \)-th task. Now consider a new task \( \mathcal{T}_{n+1} \) with reward function \( r_{n+1} = \phi^\top w_{n+1}^* \). Instead of training from scratch, we can leverage the knowledge acquired from previous tasks by deriving the policy as

\[ \pi(a \mid s) = \arg\max_a \max_{1 \leq i \leq n+1} \hat{\psi}_i(s, a)^\top w_{n+1}^*. \quad (13) \]

This strategy tends to yield better performance than relying solely on \( \hat{\psi}_{n+1}(s, a)^\top w_{n+1}^* \), especially in the early learning stage, when \( \hat{\psi}_{n+1} \) has not yet converged to the optimal successor feature \( \psi_{n+1}^* \) while some previous task is closely related to the new task, i.e., some \( w_i^* \) is close to \( w_{n+1}^* \). This policy improvement operator is derived from Bellman's policy improvement theorem (Bertsekas & Tsitsiklis, 1996) and (2). When the reward is fixed across different policies, e.g., \( \{\pi_i\}_{i=1}^n \), and given that the optimal Q-function is the maximum over the entire policy space, the maximum of multiple Q-functions corresponding to different policies, \( \max_{1 \leq i \leq n} Q^{\pi_i} \), is expected to be closer to \( Q^* \) than any individual Q-function \( Q^{\pi_i} \). In our setting, \( \phi \) plays the same role in learning the successor feature that the reward plays in learning the Q-function. Since \( \phi \) remains the same across tasks, this analogy motivates the use of GPI in our setting, even though the rewards change.

### 3.1 Successor Feature Deep Q-Network

The goal is to find \( w_i \) and \( \Theta_i \) by solving the optimization problems in (9) and (12) for each task sequentially; the optimization problems are solved by mini-batch stochastic gradient descent (mini-batch SGD). Algorithm 1 contains two loops: the outer loop number \( n \) is the number of tasks, and the inner loop number \( T \) is the maximum number of iterations for solving (9) and (12) for each task. At the beginning, we initialize the parameters as \( \Theta_i^{(0)} \) and \( w_i^{(0)} \) for each task \( i \) with \( 1 \leq i \leq n \). In the \( t \)-th inner loop for the \( i \)-th task, let \( s_t \) be the current state and \( \Theta_c \) the learned weights for task \( c \). The agent selects and executes actions according to

\[ a = \pi_\beta\big(\max_{c \in [i]} \psi(\Theta_c; s_t, a)^\top w_i^{(t)}\big), \quad (14) \]

where \( \pi_\beta(Q(s_t, a)) \) is the policy operator based on the function \( Q(s_t, a) \), e.g., greedy, \( \varepsilon \)-greedy, or softmax. For example, if \( \pi_\beta(\cdot) \) is the greedy policy, then \( a = \arg\max_a \max_{c \in [i]} \psi(\Theta_c; s_t, a)^\top w_i^{(t)} \). The collected data are stored in a replay buffer of size \( N \). We then sample a mini-batch from the replay buffer and denote the samples by \( D_t \).
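The behavior policy in Eq. (14) might be implemented as follows, here with the \(\varepsilon\)-greedy operator (a sketch; the discrete action set and the network interfaces are our assumptions).

```python
import torch

def gpi_behavior_action(psi_nets, w, s, actions, eps=0.1):
    """Epsilon-greedy behavior policy over the GPI Q-values of Eq. (14).

    psi_nets: SF networks of tasks 1..i, each a callable (s, a) -> R^d;
    w: current reward-mapping estimate w_i^{(t)}; actions: discrete set.
    """
    if torch.rand(()) < eps:                           # exploration branch
        return actions[torch.randint(len(actions), ()).item()]
    q = torch.stack([
        torch.stack([psi(s, a) @ w for a in actions])  # [|A|] per task
        for psi in psi_nets
    ])                                                 # [n_tasks, |A|]
    best_per_action = q.max(dim=0).values              # GPI: max over tasks
    return actions[int(best_per_action.argmax())]      # greedy over actions
```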
**Algorithm 1** Successor Feature Deep Q-Network (SF-DQN)

1: **Input:** Number of iterations \( T \), experience replay buffer size \( N \), and step sizes \( \{\eta_t, \kappa_t\}_{t=1}^T \).
2: Initialize \( \{\Theta^{(0)}_i\}_{i=1}^n \) and \( \{w^{(0)}_i\}_{i=1}^n \).
3: **for** task \( i = 1, 2, \ldots, n \) **do**
4: **for** \( t = 0, 1, 2, \ldots, T - 1 \) **do**
5: Collect data following the behavior policy \( \pi_t \) in (14) and store it in the experience replay buffer; sample a mini-batch \( D_t \).
6: Perform gradient descent steps on \( \Theta^{(t)}_i \) and \( w^{(t)}_i \) following (15).
7: **end for**
8: Return \( Q_i = \psi_i(\Theta^{(T)}_i)^\top w^{(T)}_i \).
9: **end for**

Next, we update the current weights using a mini-batch gradient descent algorithm following
\[
w^{(t+1)}_i = w^{(t)}_i - \kappa_t \cdot \sum_{m \in D_t} \left( \phi(s_m, a_m, s'_m)^\top w^{(t)}_i - r(s_m, a_m, s'_m) \right) \cdot \phi(s_m, a_m, s'_m),
\]
\[
\Theta^{(t+1)}_i = \Theta^{(t)}_i - \eta_t \cdot \sum_{m \in D_t} \left( \psi(\Theta^{(t)}_i; s_m, a_m) - \phi(s_m, a_m, s'_m) - \gamma \cdot \psi(\Theta^{(t)}_i; s'_m, a') \right) \cdot \nabla_{\Theta_i} \psi(\Theta^{(t)}_i; s_m, a_m), \quad (15)
\]
where \( \eta_t \) and \( \kappa_t \) are the step sizes, and \( a' = \arg \max_a \max_{c \in [i]} \psi(\Theta_c; s'_m, a)^\top w^{(t)}_i \). The gradient for \( \Theta^{(t)}_i \) in (15) can be viewed as the gradient of
\[
\sum_{(s_m, a_m) \in D_t} \Big( \psi_i(\Theta_i; s_m, a_m) - \phi(s_m, a_m, s'_m) - \gamma \cdot \psi_i(\Theta^{(t)}_i; s'_m, a') \Big)^2, \quad (16)
\]
which is an approximation to (12) obtained by replacing \( \max_{a'} \psi_i^* \) with \( \max_{a'} \psi_i(\Theta^{(t)}_i) \).
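The following is a minimal numpy sketch of one mini-batch update in (15). For readability it substitutes a linear parameterization $\psi(\Theta; x) = \Theta^\top x$ for the paper's multi-layer ReLU network and treats the next-state SF as a fixed target (the semi-gradient convention implicit in (15)); all shapes and names are illustrative assumptions.

```python
import numpy as np

def sfdqn_update(Theta, w, batch, gamma=0.9, eta=0.01, kappa=0.01):
    """One mini-batch step of (15) with the linear stand-in
    psi(Theta; x) = Theta^T x, where Theta has shape (d_x, d).
    batch: tuples (x_sa, phi, r, x_next) with x_sa = x(s_m, a_m) and
    x_next = x(s'_m, a') for the GPI-selected action a'."""
    grad_w = np.zeros_like(w)
    grad_Theta = np.zeros_like(Theta)
    for x_sa, phi, r, x_next in batch:
        # Reward-mapping gradient: (phi^T w - r) * phi.
        grad_w += (phi @ w - r) * phi
        # SF gradient: TD-style error delta = psi - phi - gamma * psi',
        # with the next-state term held fixed (semi-gradient).
        delta = Theta.T @ x_sa - phi - gamma * (Theta.T @ x_next)
        grad_Theta += np.outer(x_sa, delta)  # chain rule for psi = Theta^T x
    return Theta - eta * grad_Theta, w - kappa * grad_w

# Example shapes: Theta in R^{6 x 4}, features x in R^6, phi in R^4.
Theta, w = np.zeros((6, 4)), np.zeros(4)
batch = [(np.ones(6), np.ones(4), 1.0, np.ones(6))]
Theta, w = sfdqn_update(Theta, w, batch)
```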
4 THEORETICAL RESULTS

4.1 Summary of Major Theoretical Findings

To the best of our knowledge, our results in Section 4.3 provide the first theoretical characterization of SF-DQN with GPI, including a comparison with conventional Q-learning under commonly used assumptions. Before formally presenting them, we summarize the highlights as follows.

| Notation | Description |
|----------|-------------|
| \( K \) | Number of neurons in the hidden layer. |
| \( L \) | Number of hidden layers. |
| \( d \) | Dimension of the feature mapping of \((s, a)\). |
| \( T \) | Number of iterations. |
| \( \Theta^*_i, w^*_i \) | The global optima of (12) and (9) for the \( i \)-th task. |
| \( N \) | Replay buffer size. |
| \( \rho_1 \) | The smallest eigenvalue of \( \mathbb{E}[\nabla\psi_i(\Theta^*_i)^\top \nabla \psi_i(\Theta^*_i)] \). |
| \( \rho_2 \) | The smallest eigenvalue of \( \mathbb{E}[\phi(s, a)\, \phi(s, a)^\top] \). |
| \( q^\ast \) | A variable indicating the relevance between the current and previous tasks. |
| \( C^* \) | A constant related to the distribution shift between the behavior and optimal policies. |

**(T1) The learned Q-function converges to the optimal Q-function at a rate of \( 1/T \) with generalization guarantees.** We demonstrate that the learned parameters \( \Theta^{(T)}_i \) and \( w^{(T)}_i \) converge towards their respective ground truths \( \Theta^*_i \) and \( w^*_i \), indicating that SF-DQN converges to the optimal Q-function at a rate of \( 1/T \), as depicted in (23) (Theorem 1). Moreover, the generalization error of the learned Q-function scales on the order of $\frac{\|w^{(0)} - w^*\|_2}{1-\gamma - \Omega(N^{-1/2}) - \Omega(C^*)} \cdot \frac{1}{T}$. By employing a large replay buffer $N$, minimizing the data-distribution-shift factor $C^*$, and improving the initial estimate $w^{(0)}$ of the task-specific reward weights, we can achieve a lower generalization error.

**(T2) GPI enhances the generalization of the learned model with respect to the task relevance factor $q^*$.** We demonstrate that, when GPI is employed, the learned parameters exhibit an improved estimation error, with a reduction rate of $\frac{1-c}{1-cq^*}$ for some constant $c < 1$ (Theorem 2), where $q^*$ is defined in (24). From (24), it is clear that $q^*$ decreases as the distances between task-specific reward weights, denoted as $\|w^*_i - w^*_j\|_2$, become smaller. This indicates a close relationship between the previous tasks and the current task, resulting in a smaller $q^*$ and, consequently, a larger improvement through the use of GPI.

**(T3) SF-DQN achieves superior performance over conventional DQN by a factor of $\frac{1+\gamma}{2}$ in the estimation error of the optimal Q-function.** When we directly transfer the learned knowledge of the Q-function to a new task without any additional training, our results demonstrate that SF-DQN always outperforms its conventional counterpart, DQN, by a factor of $\frac{1+\gamma}{2}$ (Theorems 3 and 4). As $\gamma$ approaches one, we place more emphasis on long-term rewards, making the accumulated error derived from an incorrect Q-function more significant; consequently, this leads to reduced transferability between the source tasks and the target task. Conversely, when $\gamma$ is small, indicating substantial potential for transfer learning between the source and target tasks, we observe a more significant improvement when using SF-DQN.

4.2 Assumptions

We present the assumptions used in deriving our major theoretical results. These assumptions are commonly used in existing RL and neural network learning theories to simplify the presentation.

**Assumption 1.** There exists a deep neural network with weights $\Theta^*_i$ that minimizes (12) for the $i$-th task, i.e., $f(\Theta^*_i) = 0$.

Assumption 1 posits substantial expressive power of the deep neural network, allowing it to represent $\psi^*$ exactly for some unknown ground truth $\Theta^*$.

**Assumption 2.** At any fixed outer iteration $t$, the Markov chain induced by the behavior policy $\pi_t$ is uniformly ergodic with invariant measure $P_t$, satisfying
$$\sup_{s \in \mathcal{S}} d_{TV}\big(\mathbb{P}(s_{\tau} \in \cdot \mid s_0 = s),\, P_t\big) \leq \lambda \nu^\tau, \quad \forall \tau \geq 0, \quad (17)$$
for some constant $\lambda > 0$ and $\nu \in (0, 1)$, where $d_{TV}$ denotes the total-variation distance.

Assumption 2 states that the Markov chain $\{s_n, a_n, s_{n+1}\}$ induced by the behavior policy is uniformly ergodic with the corresponding invariant measure $P_t$. This assumption is standard in Q-learning (Xu & Gu, 2020; Zou et al., 2019; Bhandari et al., 2018), where the data are non-i.i.d.

**Assumption 3.** For any $\Theta^{(t)}$ and $w^{(t)}$, the greedy policy $\pi_t$ at the $t$-th outer loop, i.e., $\pi_t(a \mid s) = \arg\max_{a'} Q_t(s, a')$, satisfies
$$|\pi_t(a \mid s) - \pi^*(a \mid s)| \leq C \cdot \sup_{(s,a)} |Q_t(s,a) - Q^*(s,a)|, \quad (18)$$
where $C$ is a positive constant. Equivalently, when $Q_t = \psi(\Theta^{(t)})^\top w^{(t)}$, we have
$$|\pi_t(a \mid s) - \pi^*(a \mid s)| \leq C \cdot \big(\|\Theta^{(t)} - \Theta^*\|_2 + \|w^{(t)} - w^*\|_2\big). \quad (19)$$

Assumption 3 bounds the difference between the behavior policy and the optimal policy.
Moreover, (19) can be considered a more relaxed variant of condition (2) in Zou et al. (2019), as (19) only requires the constant to hold for the distance of an arbitrary function from the ground truth, rather than for the distance between two arbitrary functions.

4.3 Main Theoretical Findings

4.3.1 Convergence Analysis of SF-DQN

Theorem 1 demonstrates that the learned Q-function converges to the optimal Q-function when using SF-DQN for task 1. Notably, GPI is not employed for the initial task, as we lack prior knowledge about the environment. Specifically, given the conditions that (i) the initial weights for $\psi$ are close to the ground truth as in (20), (ii) the replay buffer is large enough as in (21), and (iii) the distribution shift between the behavior policy and the optimal policy is bounded (as discussed in Remark 1), the learned parameters from Algorithm 1 for task 1, $\psi_1(\Theta_1)$ and $w_1$, converge to the ground truths $\psi^*_1$ and $w^*_1$ as in (22), indicating that the learned Q-function converges to the optimal Q-function as in (23).

**Theorem 1** (Convergence analysis of SF-DQN without GPI). Suppose the assumptions in Section 4.2 hold and the initial neuron weights of the SF of task 1 satisfy
$$\frac{\|\Theta^{(0)}_1 - \Theta^*_1\|_F}{\|\Theta^*_1\|_F} \leq (1 - c_N)\cdot \frac{\rho_1}{K^2} \quad (20)$$
for some positive $c_N$. Suppose we select the step size as $\eta_t = \frac{1}{t+1}$ and the size of the replay buffer satisfies
$$N = \Omega\big(c_N^2\rho_1^{-1}\cdot K^2\cdot L^2 d\log q\big). \quad (21)$$
Then, with probability at least $1 - q^{-d}$, the weights $\Theta^{(T)}_1$ and $w^{(T)}_1$ from Algorithm 1 satisfy
$$\|\Theta^{(T)}_1 - \Theta^*_1\|_2 \leq \frac{C_1 + C^*\cdot \|w^{(0)}_1 - w^*_1\|_2}{(1 - \gamma - c_N)(1 - \gamma)\rho_1 - C^*} \cdot \frac{\log^2 T}{T}, \qquad \|w^{(T)}_1 - w^*_1\|_2 \leq \Big(1 - \frac{\rho_2}{\phi_{\max}}\Big)^T \|w^{(0)}_1 - w^*_1\|_2, \quad (22)$$
where $C_1 = (2 + \gamma)\cdot R_{\max}$ and $C^* = |A|\cdot R_{\max}\cdot \big(1 + \log_\nu \lambda^{-1} + \frac{1}{1-\nu}\big)\cdot C$. Specifically, the learned Q-function satisfies
$$\max_{s,a} |Q_1 - Q^*_1| \leq \frac{C_1 + \|w^{(0)}_1 - w^*_1\|_2}{(1 - \gamma - c_N)(1 - \gamma)\rho_1 - 1} \cdot \frac{\log^2 T}{T} + \|w^{(0)}_1 - w^*_1\|_2\, R_{\max}\Big(1 - \frac{\rho_2}{\phi_{\max}}\Big)^T. \quad (23)$$

**Remark 1** (Upper bound on $C$). For the upper bound in (23) to be meaningful, i.e., for the denominator to be positive, $C$ has an explicit upper bound of $C \leq \frac{(1 - \gamma - c_N)(1 - \gamma)\rho_1}{|A|\cdot R_{\max}}$. Given the definition of $C$ in Assumption 3, this implies that the difference between the behavior policy and the optimal policy is bounded; in other words, the fraction of bad tuples in the collected samples is constrained.

**Remark 2** (Initialization). Note that (20) requires a good initialization. Firstly, such a requirement is standard in state-of-the-art analyses of Q-learning with deep neural network approximation. Secondly, according to NTK theory (Jacot et al., 2018), there always exist good local minima, almost as good as the global minima, near some random initialization. Finally, such a good initialization can also be obtained from a pre-trained model.

4.3.2 Improved Performance with Generalized Policy Improvement

Theorem 2 establishes that the estimated Q-function converges towards the optimal solution under GPI, as shown in (25), leveraging the prior knowledge learned from previous tasks.
The enhanced performance associated with GPI is captured by $q^*$, defined in (24). Notably, when tasks $i$ and $j$ exhibit a higher degree of correlation, i.e., when the distance between $w^*_i$ and $w^*_j$ is small, we observe a more substantial enhancement from employing GPI when transferring knowledge from task $i$ to task $j$, as shown in (25).

**Theorem 2** (Convergence analysis of SF-DQN with GPI). Let us define
$$q^* = \frac{(1 + \gamma)R_{\max}}{1 - \gamma} \cdot \min_{1 \leq i \leq j - 1} \frac{\|w^*_i - w^*_j\|_2}{\|\Theta^{(0)}_j - \Theta^*_j\|_2}. \quad (24)$$
Then, with probability at least $1 - q^{-d}$, the neuron weights $\Theta^{(T)}_j$ for the $j$-th task satisfy
$$\|\Theta^{(T)}_j - \Theta^*_j\|_2 \leq \frac{C_1 + C^*\cdot \|w^{(0)}_j - w^*_j\|_2}{(1 - \gamma - c_N)(1 - \gamma)\rho_1 - \min\{q^*, 1\}\cdot C^*} \cdot \frac{\log^2 T}{T}. \quad (25)$$

**Remark 3** (Improvement via GPI). Utilizing GPI enhances the convergence rate from the order of $\frac{1}{1-C^*}\cdot \frac{1}{T}$ to the order of $\frac{1}{1-q^*\cdot C^*}\cdot \frac{1}{T}$. When the distance between the source and target tasks is small, $q^*$ can approach zero, indicating an improved generalization error by a factor of $1 - C^*$, where $C^*$ is proportional to the fraction of bad tuples. The improvement achieved through GPI derives from reducing the distance between the behavior policy and the optimal policy, which in turn decreases the fraction of bad tuples in the collected data. Here, $C^*$ is proportional to the fraction of bad tuples without GPI, and $q^*\cdot C^*$ is proportional to the fraction of bad tuples when GPI is employed.

4.3.3 Bounds for Transfer Reinforcement Learning

From Theorems 1 and 2, we have estimated the optimal Q-function for each task $i \in [n]$ using our proposed SF-DQN. When the reward changes to $r_{n+1}(s, a, s') = \phi(s, a, s')^\top w_{n+1}^*$ for a new task $\mathcal{T}_{n+1}$, as long as we have estimated $w_{n+1}^*$, we can compute an estimated Q-value function for $\mathcal{T}_{n+1}$ simply by setting
$$\hat{Q}_{n+1}(s, a) = \max_{1 \leq j \leq n} \psi(\Theta_j^{(T)}; s, a)^\top w_{n+1}^*. \quad (26)$$
As $w_{n+1}^{(t)}$ converges linearly to its optimum $w_{n+1}^*$, which is significantly faster than the sublinear convergence of $\Theta_{n+1}^{(t)}$ shown in (22), the derivation of $\hat{Q}_{n+1}$ in (26) replaces the computation of $\Theta_{n+1}$ with the much more manageable supervised problem of approximating $w_{n+1}^*$, at only a modest performance loss, as shown in (27). This is demonstrated in the following Theorem 3.

**Theorem 3** (Transfer learning via SF-DQN). For the $(n+1)$-th task with $r_{n+1} = \phi^\top w_{n+1}^*$, suppose the Q-value function is derived based on (26). Then we have
$$\max_{s,a} |\hat{Q}_{n+1} - Q_{n+1}^*| \leq \frac{1 + \gamma}{1 - \gamma}\, \phi_{\text{max}} \min_{j \in [n]} \|w_j^* - w_{n+1}^*\|_2 + \frac{\|w_{n+1}^*\|_2}{(1 - \gamma) \cdot T}. \quad (27)$$

**Remark 4** (Connection with existing works). The second term of the upper bound in (27), $\frac{\|w_{n+1}^*\|_2}{(1 - \gamma) \cdot T}$, can be interpreted as the $\epsilon$ in Barreto et al. (2017), which results from the approximation error of the optimal Q-functions on the previous tasks. Without the SF decomposition in (7), one can apply a strategy similar to (26) for DQN as
$$\hat{Q}_{n+1}(s, a) = \max_{1 \leq j \leq n} Q(\omega_j^{(T)}; s, a). \quad (28)$$
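A minimal sketch of the zero-shot transfer step in (26) is given below: the successor features learned on previous tasks are frozen, and only the new task's reward weights are plugged in. `psi_list` and `w_new` are illustrative names.

```python
import numpy as np

def transferred_q(psi_list, w_new, s, a):
    """Q-estimate for a new task via (26): reuse the frozen successor
    features psi_j(s, a) of tasks 1..n with the new reward weights."""
    return max(psi(s, a) @ w_new for psi in psi_list)

# Only w_new must be learned for the new task; given observed transition
# features and rewards, this is the least-squares problem (9), e.g.:
# w_new, *_ = np.linalg.lstsq(phi_batch, r_batch, rcond=None)

# Example with frozen stand-in SFs on a d-dimensional feature space.
d = 4
psi_list = [lambda s, a, j=j: np.full(d, j + 1.0) for j in range(3)]
q = transferred_q(psi_list, w_new=np.ones(d), s=None, a=0)
```

By contrast, the DQN analog (28) reuses frozen Q-networks that cannot absorb the new reward weights; Theorem 4 quantifies the resulting loss.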
In Theorem 4, (29) characterizes the performance of (28) under DQN. Compared to Theorem 3, transfer learning via DQN is worse than that via SF-DQN by a factor of $\frac{1 + \gamma}{2}$ when comparing the estimation errors of the optimal Q-function $Q_{n+1}^*$ in (27) and (29), indicating the advantage of using SFs in transfer reinforcement learning.

**Theorem 4** (Transfer learning via DQN). For the $(n+1)$-th task with $r_{n+1} = \phi^\top w_{n+1}^*$, suppose the Q-value function is derived based on (28). Then we have
$$\max_{s,a} |\hat{Q}_{n+1} - Q_{n+1}^*| \leq \frac{2}{1 - \gamma}\, \phi_{\text{max}} \min_{j \in [n]} \|w_j^* - w_{n+1}^*\|_2 + \frac{\|w_{n+1}^*\|_2}{(1 - \gamma) \cdot T}. \quad (29)$$

**Remark 5** (Improvement by a factor of $\frac{1 + \gamma}{2}$). Transfer learning performance in SF-DQN is influenced by the knowledge gap between previous and current tasks, which is primarily attributed to differences in rewards and data distribution. In SF-DQN, the impact of reward differences is relatively small, since $\phi$, which plays the role of the reward, remains fixed. The parameter $\gamma$ governs the influence of the data-distribution difference. A small $\gamma$ prioritizes immediate rewards, so the impact of the data distribution on the knowledge gap is insignificant. With a small $\gamma$, the impact of the reward difference therefore dominates, resulting in a large gap between SF-DQN and DQN in transfer learning.

4.4 Technical Challenges and Comparison with Existing Works

**Beyond deep learning theory: challenges in deep reinforcement learning.** The proof of Theorem 1 is inspired by the convergence analysis of one-hidden-layer neural networks in the (semi-)supervised learning domain (Zhong et al., 2017; Zhang et al., 2022). The proof tackles two primary objectives: (i) characterizing the local convex region of the objective functions in (12) and (9); and (ii) quantifying the distance between the gradients in (15) and the gradients of the objective functions in (12) and (9). However, extending this approach from the (semi-)supervised learning setting to the deep reinforcement learning domain introduces additional challenges. First, we expand the proof beyond one-hidden-layer neural networks to multi-layer neural networks. This extension requires new technical tools for characterizing the Hessian matrix and concentration bounds, as outlined in Appendix F.1. Second, the approximation error bound deviates from the supervised learning scenario due to several factors: the non-i.i.d. nature of the collected data, the distribution shift between the behavior policy and the optimal policy, and the approximation error incurred when using (16) to estimate (12). Addressing these challenges requires developing supplementary tools, as presented in Lemma 7. Notably, this approximation error does not scale proportionally to $\|\Theta_i - \Theta_i^*\|_2$, resulting in a sublinear convergence rate.

**Beyond DQN: challenges in GPI.** The major challenge in proving Theorems 2–4 centers on deriving the improved performance obtained by utilizing GPI. The intuition is as follows. Imagine we have two closely related tasks, labeled $i$ and $j$, whose optimal weight vectors $w_i^*$ and $w_j^*$ are close to each other. This closeness suggests that the tasks share similar rewards, leading to a bounded distributional shift in the data, which, in turn, implies that their optimal Q-functions should exhibit similarity.
To rigorously establish this intuition, we characterize the distance between these optimal Q-functions, $|Q_i^* - Q_j^*|$, in terms of the Euclidean distance between their optimal weight vectors, $\|w_i^* - w_j^*\|_2$ (see details in Appendix G). Furthermore, during learning we only have estimates of the optimal Q-functions of the previous tasks, and this estimation error accumulates through temporal-difference learning, e.g., in the SF learning of $\psi^*$. We therefore develop novel analytical tools to quantify the error accumulated in temporal-difference learning (see details in Appendix C), which is unnecessary for supervised learning problems.

5 EXPERIMENTS

This section summarizes the empirical validation of the theoretical results in Section 4 using a synthetic RL benchmark environment. The experiment setup and additional experimental results on real-world RL benchmarks are summarized in Appendix E.

**Convergence of SF-DQN with varied initialization.** Figure 1 shows the performance of Algorithm 1 for different distances of the initialization $w_1^{(0)}$ to the ground truth $w_1^*$. When the initialization is close to the ground truth, we observe an increased accumulated reward, which verifies our theoretical findings in (23) that the estimation error of the optimal Q-function decreases as $\|w_1^{(0)} - w_1^*\|_2$ decreases.

Figure 1: Performance of SF-DQN presented in Algorithm 1 on Task 1.

Figure 2: Transfer comparison for SF-DQN and DQN (with GPI).

**Performance of SF-DQN with GPI when adapting to tasks with varying relevance.** We conducted experiments to investigate the impact of GPI under varied task relevance. Since the difference in reward mapping affects the data distribution shift, the rewards, and consequently the optimal Q-function, we use the metric $\|w_1^* - w_2^*\|_2$ to measure task irrelevance. The results summarized in Table 2 demonstrate that when tasks are similar (i.e., small $\|w_1^* - w_2^*\|_2$), SF-DQN with GPI consistently outperforms its counterpart without GPI. However, when tasks are dissimilar (i.e., large $\|w_1^* - w_2^*\|_2$), both exhibit the same or similar performance, indicating that GPI is ineffective when two tasks are irrelevant. The observations in Table 2 validate our theoretical findings in (25), showing a more significant improvement from GPI as $\|w_1^* - w_2^*\|_2$ decreases.

| | $\|w_1^* - w_2^*\|_2 = 0.01$ | $= 0.1$ | $= 1$ | $= 10$ |
|---------------------|--------|-------|-----|------|
| SF-DQN (w/ GPI) | 0.986 ± 0.007 | 0.965 ± 0.007 | 0.827 ± 0.008 | 0.717 ± 0.012 |
| SF-DQN (w/o GPI) | 0.942 ± 0.004 | 0.911 ± 0.013 | 0.813 ± 0.009 | 0.707 ± 0.011 |

Table 2: Performance of SF-DQN with and without GPI under varying task relevance $\|w_1^* - w_2^*\|_2$.

**Comparison of the SF-DQN agent and the DQN agent.** From Figure 2, it is evident that the SF-DQN agent consistently achieves a higher average reward on task 2 than DQN when training starts on task 2, where transfer learning occurs. These results indicate the improved performance of the SF-DQN agent over DQN, aligning with our findings in (27) and (29): SF-DQN benefits from a reduced estimation error of the optimal Q-function compared to DQN when performing transfer reinforcement learning across relevant tasks.
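As an illustration of how task pairs with controlled irrelevance $\|w_1^* - w_2^*\|_2$ can be constructed for such an experiment, here is a small numpy sketch; it is an assumed setup for exposition, not the paper's exact benchmark.

```python
import numpy as np

def make_task_pair(d, dist, rng):
    """Reward weights for two tasks at a prescribed distance ||w1 - w2||_2."""
    w1 = rng.normal(size=d)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)      # unit-norm perturbation
    return w1, w1 + dist * direction            # ||w1 - w2||_2 == dist

rng = np.random.default_rng(0)
for dist in (0.01, 0.1, 1.0, 10.0):             # the task-irrelevance grid in Table 2
    w1, w2 = make_task_pair(d=8, dist=dist, rng=rng)
    assert np.isclose(np.linalg.norm(w1 - w2), dist)
```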
6 CONCLUSION

This paper analyzes the transfer learning performance of SFs & GPI, with the SFs learned using deep neural networks. Theoretically, we present a convergence analysis of our proposed SF-DQN with generalization guarantees and provide theoretical justification for its superiority over DQN without SFs in transfer reinforcement learning. We further verify our theoretical findings through numerical experiments conducted in both synthetic and benchmark RL environments. Future directions include exploring the possibility of learning $\phi$ using a DNN approximation and combining successor features with other deep reinforcement learning algorithms.

REFERENCES

Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In *2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)*, pp. 977–988. IEEE, 2022.

Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 31, 2017.

Ainesh Bakshi, Rajesh Jayaram, and David P Woodruff. Learning two layer rectified neural networks in polynomial time. In *Conference on Learning Theory*, pp. 195–268. PMLR, 2019.

André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. *Advances in Neural Information Processing Systems*, 30, 2017.

Andre Barreto, Diana Borsa, John Quan, Tom Schaul, David Silver, Matteo Hessel, Daniel Mankowitz, Augustin Zidek, and Remi Munos. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In *International Conference on Machine Learning*, pp. 501–510. PMLR, 2018.

Dimitri Bertsekas and John N Tsitsiklis. *Neuro-dynamic programming*. Athena Scientific, 1996.

Jalaj Bhandari, Daniel Russo, and Raghav Singal. A finite time analysis of temporal difference learning with linear function approximation. In *Conference on Learning Theory*, pp. 1691–1692. PMLR, 2018.

Rajendra Bhatia. *Matrix analysis*, volume 169. Springer Science & Business Media, 2013.

Alon Brutzkus and Amir Globerson. An optimization and generalization analysis for max-pooling networks. In *Uncertainty in Artificial Intelligence*, pp. 1650–1660. PMLR, 2021.

Qi Cai, Zhuoran Yang, Jason D Lee, and Zhaoran Wang. Neural temporal-difference learning converges to global optima. *Advances in Neural Information Processing Systems*, 32, 2019.

Antonio Coronato, Muddasar Naeem, Giuseppe De Pietro, and Giovanni Paragliola. Reinforcement learning for intelligent healthcare applications: A survey. *Artificial Intelligence in Medicine*, 109:101964, 2020.

Peter Dayan. Improving generalization for temporal difference learning: The successor representation. *Neural Computation*, 5(4):613–624, 1993.

Kefan Dong, Jiaqi Yang, and Tengyu Ma. Provable model-based nonlinear bandit and reinforcement learning: Shelve optimism, embrace virtual curvature. *Advances in Neural Information Processing Systems*, 34:26168–26182, 2021.

Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In *International Conference on Learning Representations*, 2018.

Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=SleK3i09YQ.

Simon S Du, Jason D Lee, Gaurav Mahajan, and Ruosong Wang.
Agnostic $q$-learning with function approximation in deterministic systems: Near-optimal bounds on approximation error and sample complexity. *Advances in Neural Information Processing Systems*, 33:22327–22337, 2020. Jianqing Fan, Zhaoran Wang, Yuchen Xie, and Zhuoran Yang. A theoretical analysis of deep q-learning. In *Learning for Dynamics and Control*, pp. 486–489. PMLR, 2020. Rong Ge, Jason D. Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape design. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=BkwHObbRZ.
xC8xh2RSs2
The paper uses exact keyword matching to identify the corresponding subsections. It is therefore hard to know the proportion of dataset cards that cover a given subsection but use different keywords.
Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on Hugging Face

Xinyu Yang* Cornell University xy468@cornell.edu
Weixin Liang* Stanford University wxliang@stanford.edu
James Zou Stanford University jamesz@stanford.edu

Abstract

Advances in machine learning are closely tied to the creation of datasets. While data documentation is widely recognized as essential to the reliability, reproducibility, and transparency of ML, we lack a systematic empirical understanding of current dataset documentation practices. To shed light on this question, here we take Hugging Face – one of the largest platforms for sharing and collaborating on ML models and datasets – as a prominent case study. By analyzing all 7,433 dataset cards on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices, yielding 5 main findings: (1) The dataset card completion rate shows marked heterogeneity correlated with dataset popularity: while 86.0% of the top 100 downloaded dataset cards fill out all sections suggested by the Hugging Face community, only 7.9% of dataset cards with no downloads complete all these sections. (2) A granular examination of each section within the dataset card reveals that practitioners seem to prioritize the Dataset Description and Dataset Structure sections, which account for 36.2% and 33.6% of the total card length, respectively, for the most downloaded datasets. In contrast, the Considerations for Using the Data section receives the lowest proportion of content, accounting for just 2.1% of the text. (3) By analyzing the subsections within each section and utilizing topic modeling to identify key topics, we uncover what is discussed in each section, and underscore significant themes encompassing both technical and social impacts, as well as limitations, within the Considerations for Using the Data section. (4) Our findings also highlight the role of Usage sections in improving the accessibility and reproducibility of datasets. (5) In addition, our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Overall, our study offers a unique perspective on analyzing dataset documentation through large-scale data science analysis and underlines the need for more thorough dataset documentation in machine learning research.

1 Introduction

Datasets form the backbone of machine learning research (Koch et al., 2021). The proliferation of machine learning research has spurred rapid advancements in machine learning dataset development, validation, and real-world deployment across academia and industry. Such growing availability of ML datasets underscores the crucial role of proper documentation in ensuring transparency, reproducibility, and data quality in research (Haibe-Kains et al., 2020; Stodden et al., 2018; Hutson, 2018). Documentation provides details about the dataset, including the sources of the data, the methods used to collect it, and any preprocessing or cleaning that was performed. This information holds significant value for dataset users, as it facilitates a quick understanding of the dataset's motivation and overall scope. These insights are also crucial for fostering responsible data sharing and promoting interdisciplinary collaborations.

*These authors contributed equally to this work.
Despite numerous studies exploring the structure and content of dataset cards across various research domains (Afzal et al., 2020; Gebru et al., 2021; Papakyriakopoulos et al., 2023; Barman et al., 2023; Costa-jussà et al., 2020), there remains a notable gap in empirical analyses of community norms and practices for dataset documentation. This knowledge gap is significant because adherence to community norms and the quality of dataset documentation directly impact the transparency, reliability, and reproducibility of data-driven research. For instance, inadequate dataset descriptions, structural details, or limitations can hinder users from utilizing the dataset appropriately, potentially resulting in misuse or unintended consequences; the absence of information on data cleaning and readiness assessment practices limits dataset reusability and productivity gains. Furthermore, without a systematic analysis of current dataset documentation practices, we risk perpetuating insufficient documentation standards, which can impede efforts to ensure fairness, accountability, and equitable use of AI technologies.

To address this question, we conducted a comprehensive empirical analysis of dataset cards hosted on Hugging Face, one of the largest platforms for sharing and collaborating on ML models and datasets, as a prominent case study. Dataset cards on the Hugging Face platform are Markdown files that serve as the README for a dataset repository. While several open-source platforms also facilitate the sharing of ML datasets, such as Kaggle, Papers with Code, and GitHub, we chose Hugging Face for two primary reasons. Firstly, it stands out as one of the most popular platforms for developers to publish, share, and reuse ML-based projects, offering a vast repository of ML datasets for study. Secondly, Hugging Face is one of the few open-source platforms that offer an official dataset card template. This feature not only enhances the accessibility and user-friendliness of the dataset card community but also makes the analysis process more efficient and informative.

By analyzing all 7,433 dataset cards hosted on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices. Based on our research findings, we emphasize the importance of comprehensive dataset documentation and offer suggestions to practitioners on how to write documentation that promotes the reproducibility, transparency, and accessibility of their datasets, which can help to improve the overall quality and usability of the dataset community. Our study aims to bridge the notable gap in the community concerning data documentation norms, taking the first step toward identifying deficiencies in current practices and offering guidelines for enhancing dataset documentation.

Figure 1: Systematic Analysis of 24,065 Datasets Hosted on Hugging Face. (a) Exponential Growth of Datasets: The Hugging Face platform has seen a remarkable surge in the number of datasets, with the count doubling approximately every 18 weeks. (b) Power Law in Dataset Usage: Dataset downloads on Hugging Face follow a power-law distribution, as indicated by the linear relationship on the log-log plot. The top 82 datasets account for 80% of the total downloads; datasets with documentation dominate the top downloaded datasets. (c) Documentation Associated with Usage: Despite only 30.9% of dataset repositories (7,433 out of 24,065) featuring non-empty dataset cards, these datasets account for an overwhelming 95.0% of total download traffic on the platform.

2 OVERVIEW

Finding
• **Exponential Growth of Datasets:** The number of datasets on Hugging Face doubles every 18 weeks.
• **Documentation Associated with Usage:** 95.0% of download traffic comes from the 30.9% of datasets with documentation.

**Exponential Growth of Datasets** Our analysis encompasses 24,065 dataset repositories on Hugging Face uploaded by 7,811 distinct user accounts as of March 16th, 2023 (see Table S5 for varying documentation practices by creators). The number of datasets exhibits exponential growth, with a weekly growth rate of 3.97% and a doubling time of 18 weeks (Fig. 1a). As a sanity check, the number of dataset repositories reached 35,973 by May 23rd, 2023, confirming the exponential trend.

**Power Law in Dataset Usage** Although Hugging Face has seen a significant increase in the number of dataset repositories, our analysis reveals a significant imbalance in dataset downloads, which follow a power-law distribution. This means that a small proportion of the most popular datasets receive the majority of the downloads, while the vast majority of datasets receive very few. In fact, just the 82 most downloaded datasets account for 80% of total downloads (Fig. 1b). Fig. S4 further demonstrates that the power-law distribution persists across task domains, despite the varied number of datasets within each domain.

**Documentation Associated with Usage** Despite the importance of dataset cards, only 58.2% of dataset repositories (14,011 out of 24,065, contributed by 4,782 distinct user accounts) include dataset cards as Markdown README.md files. Among these, 6,578 dataset cards are empty, leaving only 30.9% of repositories (7,433 out of 24,065, contributed by 1,982 distinct user accounts) with non-empty dataset cards (Fig. 1c). As illustrated in Fig. 1d, dataset cards are prevalent among the most downloaded datasets. Notably, datasets with non-empty dataset cards account for 95.0% of total download traffic, underscoring a potential positive correlation between dataset cards and dataset popularity. For the rest of the paper, we focus our analyses on these 7,433 non-empty dataset cards. We sort them by the number of downloads of the corresponding datasets, so the top $k$ dataset cards (e.g., $k = 100$) refer to the dataset cards of the $k$ most downloaded datasets.
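To illustrate the two measurements above, here is a minimal numpy sketch of fitting a doubling time and checking a power law on log-log axes; the series below is synthetic, standing in for the weekly repository counts and download statistics collected from the Hub.

```python
import numpy as np

# Synthetic weekly repository counts at a 3.97% weekly growth rate.
weeks = np.arange(60)
counts = 500 * (1.0397 ** weeks)

# Doubling time from an exponential fit: log(count) ~ b * week + a.
b, a = np.polyfit(weeks, np.log(counts), 1)
print("doubling time (weeks):", np.log(2) / b)   # ~18 weeks at 3.97%/week

# Power-law check: rank vs. downloads should be linear on a log-log plot.
downloads = np.sort(np.random.default_rng(0).pareto(1.2, 24065))[::-1]
rank = np.arange(1, downloads.size + 1)
slope, _ = np.polyfit(np.log(rank), np.log(downloads), 1)
print("log-log slope:", slope)
```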
3 STRUCTURE OF DATASET DOCUMENTATIONS

Finding
• **The dataset card completion rate shows marked heterogeneity correlated with dataset popularity:** While 86.0% of the top 100 downloaded datasets fill out all sections suggested by the Hugging Face community, only 7.9% of dataset cards with no downloads complete all these sections.

| Section Title | Subsection Title | Description |
|---|---|---|
| Dataset Description | Dataset Summary | A brief summary of the dataset, including its intended use, supported tasks, an overview of how and why the dataset was created, etc. |
| | Supported Tasks and Leaderboards | Brief description of the tags, metrics, and suggested models for the dataset. |
| | Languages | The languages represented in the dataset. |
| Dataset Structure | Data Instances | JSON-formatted example and description of a typical instance in the dataset. |
| | Data Fields | List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. |
| | Data Splits | Criteria for splitting the data; descriptive statistics for the features, such as size, average length, etc. |
| Dataset Creation | Curation Rationale | Motivation for the creation of the dataset. |
| | Source Data | The source of the data (e.g., news text and headlines, social media posts, translated sentences, etc.), including the data collection process and data producers. |
| | Annotations | Annotation process, annotation tools, annotators, etc. |
| | Personal and Sensitive Information | Statement of whether the dataset contains data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, financial or health data, etc.). |
| Considerations for Using the Data | Social Impact of Dataset | Discussion of the ways the use of the dataset will impact society. |
| | Discussion of Biases | Descriptions of specific biases that are likely to be reflected in the data. |
| | Other Known Limitations | Other limitations of the dataset, like annotation artifacts. |
| Additional Information | Dataset Curators | The people involved in collecting the dataset and their affiliation(s). |
| | Licensing Information | The license and a link to the license webpage if available. |
| | Citation Information | The BibTeX-formatted reference for the dataset. |
| | Contributions | 'Thanks to @github-username for adding this dataset.' |

Table 1: Community-Endorsed Dataset Card Structure. This table shows the sections and their suggested subsections provided by the Hugging Face community, along with their descriptions. For more information, please refer to [https://github.com/huggingface/datasets/blob/main/templates/README_guide.md](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md).

**Community-Endorsed Dataset Card Structure** Grounded in academic literature (Mitchell et al., 2019) and official guidelines from Hugging Face (HuggingFace, 2021), the Hugging Face community provides suggestions for what to write in each section. This community-endorsed dataset card provides a standardized structure for conveying key information about datasets. It generally contains 5 sections: Dataset Description, Dataset Structure, Dataset Creation, Considerations for Using the Data, and Additional Information (Table 1). To examine the structure of dataset cards, we used a pipeline that detects exact word matches for each section title. We then identified the section titles and checked whether they had content (Appendix B.1). If a dataset card had all five sections completed, we considered it to follow the community-endorsed dataset card.

**Adherence to Community-Endorsed Guidelines Correlates with Popularity** Our evaluation found that popular datasets adhere better to the community-endorsed dataset card structure. As illustrated in Fig. 2, compliance with the template varies significantly among datasets with different download counts. Among the 7,433 dataset cards analyzed, 86.0% of the top 100 downloaded dataset cards complete all five sections of the community-endorsed dataset card, while only 7.9% of dataset cards with no downloads follow it.
Fig. S5 further reveals that popular dataset cards achieve higher completion rates in all Hugging Face-recommended sections. This implies a potential correlation between adherence to community-endorsed guidelines and dataset popularity.

4 PRACTITIONERS EMPHASIZE DESCRIPTION AND STRUCTURE OVER SOCIAL IMPACT AND LIMITATIONS

Finding
• Practitioners seem to prioritize the Dataset Description and Dataset Structure sections, which account for 36.2% and 33.6% of the total card length, respectively, on the top 100 most downloaded datasets.
• In contrast, the Considerations for Using the Data section receives the lowest proportion of content, just 2.1%. This section covers the social impact of datasets, discussions of biases, and limitations of datasets.

**Social Impact, Dataset Limitations and Biases are Lacking in Most Documentations** Following the community-endorsed dataset card structure, we analyzed the level of emphasis placed on each section. Fig. 3b shows the word count distribution among the top 100 downloaded dataset cards, revealing their high level of comprehensiveness: 91.0% of them have a word count exceeding 200. We then examined these dataset cards more closely to assess the emphasis placed on each section, computing the word count of each section and its proportion of the entire dataset card. As shown in Fig. 3c, the Dataset Description and Dataset Structure sections receive the most attention, accounting for 36.2% and 33.6% of the dataset card length, respectively. In contrast, the Considerations for Using the Data section receives a notably low proportion of only 2.1%. A minimal sketch of this per-section measurement follows the figure caption below.

**Section Length Reflects Practitioner Attention** The length of sections within dataset cards reflects practitioner attention, and it varies significantly with dataset popularity. Highly downloaded datasets tend to have more comprehensive and longer dataset cards (Fig. 3a), with an emphasis on the Dataset Description and Dataset Structure sections (Fig. 3d). Conversely, less popular datasets have shorter cards (Fig. 3b) with a greater emphasis on the Additional Information section (Fig. 3f). Despite this, sections such as Dataset Creation and Considerations for Using the Data consistently receive lower attention, regardless of download counts (Fig. 3f). This suggests a need to promote more comprehensive documentation, particularly in critical sections, to enhance dataset usage and facilitate ethical considerations.

Figure 3: Section Length Reflects Practitioner Attention. (a) Popularity Correlates with Documentation Length: The top downloaded dataset cards are longer, indicating that they contain more comprehensive information. (b) Distribution of Word Count Among the Top 100 Downloaded Dataset Cards. (c) Section Length Proportions in the Top 100 Downloaded Dataset Cards: The Dataset Description and Dataset Structure sections dominate, with proportions of 36.2% and 33.6%, respectively. In contrast, the Considerations for Using the Data section receives the least attention, with a proportion of only 2.1%. (d) Section Length Proportion Changes over Downloads: The section length proportions change with downloads, with Dataset Description and Dataset Structure decreasing in length, and Additional Information and Other increasing. Notably, there is a consistently low emphasis on the Dataset Creation and Considerations for Using the Data sections across dataset cards at all download levels.
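The sketch below illustrates the exact-match section detection and per-section word counting described above; the section titles follow Table 1, while the Markdown parsing is an illustrative simplification of the pipeline in Appendix B.1.

```python
import re

SECTIONS = [
    "Dataset Description", "Dataset Structure", "Dataset Creation",
    "Considerations for Using the Data", "Additional Information",
]

def section_word_counts(card_md: str) -> dict:
    """Split a dataset card README on exact section-title matches in
    Markdown headings and count the words under each section."""
    counts = {name: 0 for name in SECTIONS}
    current = None
    for line in card_md.splitlines():
        m = re.match(r"(#+)\s*(.+?)\s*$", line)
        if m and m.group(2) in counts:
            current = m.group(2)           # enter a recognized section
        elif m and len(m.group(1)) == 1:
            current = None                 # a top-level section outside the template ("Other")
        elif m is None and current:
            counts[current] += len(line.split())
    return counts

card = "# Dataset Description\nA toy card.\n# Usage\npip install datasets\n"
print(section_word_counts(card))           # {'Dataset Description': 3, ...}
```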
5 UNDERSTANDING CONTENT DYNAMICS IN DATASET DOCUMENTATION

Finding
• **Strong Community Adherence to Subsection Guidelines:** Practitioners contributing to the Hugging Face community exhibit high compliance with the standards, filling out 14 of the 17 recommended subsections across the five main sections at a rate exceeding 50%.
• **Emergence of the Usage Section Beyond the Community Template:** Surprisingly, 33.2% of dataset cards include a Usage section. The community template does not include such a Usage section in its current form and should be extended to include one in the future.

**Section Content Detection Pipeline** To gain a deeper understanding of the topics discussed in each section, we conducted a content analysis within each section of the community-endorsed dataset card structure, which includes suggested subsections within the five main sections. We used exact keyword matching to identify the corresponding subsections and calculate their filled-out rates. Fig. 4 shows that 14 out of 17 subsections have filled-out rates above 50%, indicating adherence to the community-endorsed dataset card.

**Limitation Section is Rare, but Long if it Exists** The Considerations for Using the Data section (i.e., the limitation section), despite being frequently overlooked and often left empty by practitioners, holds particular significance. When this section is included, it tends to adhere well to community guidelines, with subsections having a completion rate exceeding 50% and a reasonably substantial word count (98.2 words). This suggests that the section has the potential to provide valuable insights and guidance, which motivates our use of topic modeling to identify key discussion topics within it, potentially aiding practitioners in crafting meaningful content.

Figure 4: Highlighting the Hugging Face Community's Compliance with Subsection Guidelines. This figure shows subsection filled-out rates within different sections, stratified by download counts. Each section has multiple subsections, with bars representing the filled-out rate of each subsection. Green text indicates filled-out rates above 50%, while red text indicates rates below 50%. Of the 17 subsections within the five sections of the community-endorsed dataset card, 14 have filled-out rates above 50%.

| Topic | Representative Sentences |
|---|---|
| Technical or Research Scope | • Adding a Spanish resource may help others to improve their research and educational activities. • The creation of the dataset contributes to expanding the scope of NLP research to under-explored languages across the world. |
| Social Scope or Background | • This dataset can be used to gain insights into the social, cultural, and political views of people in African countries. • If this matter isn't tackled with enough urgency, we might see the rise of a new dark era in Latin America politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people. |

| Topic | Representative Sentences |
|---|---|
| Subpopulation Biases | • Gender speakers distribution is imbalanced, percentage of female speakers is mostly lower than 50% across languages. • The social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset. |
| Biases from Collection Procedure | • With respect to the potential risks, we note that the subjectivity of human annotation would impact the quality of the dataset. • In terms of data collection, by using keywords and user mentions, we are introducing some bias to the data, restricting our scope to the list of keywords and users we created. |

| Topic | Representative Sentences |
|---|---|
| Data Quality | • The nature of the task introduces variability in the quality of the target translations. • A number of errors, omissions and inconsistencies are expected to be found within the corpus. |
| Processing Limitation | • Our augmentation process can sometimes create nonexistent versions of real people. • Satellite annotation is not as accurate for pixel-level representation due to single-point annotations. |

Figure 5: Key Topics in Considerations for Using the Data through Topic Modeling Analysis. This figure displays the outcomes of the topic modeling assessment of the contents of the (a) Social Impact of Dataset subsection, (b) Discussion of Biases subsection, and (c) Other Known Limitations subsection. Each panel shows the human-assigned topic label and representative sentences. Topics are generated by Latent Dirichlet Allocation (LDA).

**Limitation Section Covers Diverse and Crucial Topics** The Considerations for Using the Data section (i.e., the limitation section) encompasses diverse and crucial topics. The Hugging Face community emphasizes three major themes within this section: Social Impact of Dataset, Discussion of Biases, and Other Known Limitations. The **Social Impact of Dataset** aspect explores not only societal implications but also the potential benefits to technology and research communities. Here, practitioners discuss issues like how the dataset can expand the scope of NLP research (Armstrong et al., 2022) and increase access to natural language technology across diverse regions and cultures (Tache et al., 2021). Additionally, the subsection covers sensitive topics related to politics, ethics, and culture within the social scope. **Discussion of Biases** delves into subpopulation bias and data collection biases, highlighting the importance of addressing bias-related issues. Previous research has identified numerous technical and social biases, such as subgroup bias (Buolamwini & Gebru, 2018), data collection bias (Wang et al., 2019), and label bias (Jiang & Nachum, 2020). Our topic modeling results reveal that two primary biases are discussed by practitioners in this subsection. The first is subpopulation bias, which includes biases related to gender, age, or race. For instance, an audio dataset (Nsoesie & Galea, 2022) notes that female speakers are underrepresented, comprising less than 50% of the dataset. The second major bias arises from the data collection process, specifically the annotation process, which is often a significant bottleneck and source of errors. Lastly, **Other Known Limitations** focuses on technical limitations, particularly data quality and processing limitations. Data quality is often a focus in other disciplines, such as the social sciences and biomedicine, and there are many insights to draw upon (Paullada et al., 2021; Fedorov, 2010; Fan & Geerts, 2012). Meanwhile, processing limitations encompass a broader range of issues beyond biases from the collection procedure, such as inaccuracies or the absence of some data points. This comprehensive coverage underscores the multifaceted nature of considerations related to dataset usage.

**Emergence of the Usage Section Beyond the Community Template** While Hugging Face's community-endorsed dataset card structure comprises five main sections, practitioners sometimes have valuable information that doesn't neatly fit into these sections. These additional sections, referred to as **Other** sections, can contain important content. Notably, among these **Other** sections, discussions related to **Usage** emerge as a frequent (nearly one-third of the time, 33.2%) and significant theme. These **Usage** sections offer a diverse range of information, including details on downloading, version specifications, and general guidelines to maximize the dataset's utility. This highlights the importance of considering content that falls outside the predefined template and suggests a potential area for improvement in dataset card templates.

**Quantifying the Impact of the Usage Section on Dataset Downloads** To assess the influence of a **Usage** section in dataset documentation, we conducted a counterfactual analysis experiment (Appendix C). We trained a BERT (Devlin et al., 2018) model on dataset card content and download counts, which were normalized to fall within the range [0, 1] for meaningful comparisons. When a dataset card that initially included a **Usage** section had this section removed, the predicted downloads showed a statistically significant decrease of 1.85%. This result underscores the significant impact of the **Usage** section in bolstering dataset accessibility and popularity, emphasizing its pivotal role in enhancing the documentation and usability of datasets.
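A minimal sketch of the topic-modeling step is shown below, assuming scikit-learn; the input texts and the topic count are illustrative stand-ins, and the paper's exact preprocessing may differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative stand-ins for the extracted "Considerations for Using the
# Data" subsection texts; the analysis runs on the real dataset cards.
docs = [
    "expanding the scope of NLP research to under-explored languages",
    "gender distribution is imbalanced across speakers",
    "annotation errors and omissions are expected within the corpus",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic; human annotators then assign the topic labels.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```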
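A minimal sketch of such a counterfactual ablation follows. It substitutes a TF-IDF + ridge regressor for the fine-tuned BERT model described above, the toy cards and normalized download targets are fabricated for illustration, and all names are assumptions rather than the paper's implementation.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

def drop_usage_section(card_md: str) -> str:
    """Ablate the Usage section: remove everything from a '# Usage'
    heading up to the next top-level heading."""
    return re.sub(r"(?ms)^#\s*Usage\s*$.*?(?=^#\s|\Z)", "", card_md)

# Stand-in corpus and normalized download targets in [0, 1].
cards = [
    "# Usage\npip install foo\n# Dataset Description\ntext a",
    "# Dataset Description\ntext b",
    "# Usage\nload with bar\n# Dataset Description\ntext c",
]
downloads = [0.9, 0.2, 0.7]

vec = TfidfVectorizer().fit(cards)
model = Ridge().fit(vec.transform(cards), downloads)

# Counterfactual: predicted downloads with and without the Usage section.
with_usage = model.predict(vec.transform([cards[0]]))[0]
without_usage = model.predict(vec.transform([drop_usage_section(cards[0])]))[0]
print(f"predicted drop: {with_usage - without_usage:.3f}")
```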
6 ANALYZING HUMAN-PERCEIVED DATASET DOCUMENTATION QUALITY

Finding
• Our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality.

**Human Annotations for Comprehensive Evaluation of Dataset Card Quality** We used human annotations to evaluate the quality of dataset cards, considering seven distinct aspects drawn from prior research in the dataset documentation literature and the Hugging Face community-endorsed dataset card (Afzal et al., 2020; Gebru et al., 2021; Papakyriakopoulos et al., 2023; Barman et al., 2023; Costa-jussà et al., 2020): (1) Structural Organization, (2) Content Comprehensiveness, (3) Dataset Description, (4) Dataset Structure, (5) Dataset Preprocessing, (6) Usage Guidance, and (7) Additional Information. While Dataset Description, Dataset Structure, and Additional Information correspond to sections of the community-endorsed dataset card, we added evaluation aspects highlighted in the literature, such as those that constitute the overall presentation (Structural Organization and Content Comprehensiveness), Data Preprocessing, and Usage Guidance. To conduct this assessment, we randomly selected a subset of 150 dataset cards and engaged five human annotators, who evaluated each dataset card across these seven aspects and provided an overall quality score on a five-point scale (Appendix B.2). The overall quality is assessed through the subjective perception of the human annotators, taking into account the seven aspects as well as their overall impression. This evaluation approach aims to provide a comprehensive assessment of dataset card quality, reflecting the importance of these aspects in effective dataset documentation.

**Human Perception of Documentation Quality Strongly Aligns with Quantitative Analysis** The human annotation evaluation of dataset cards shows varying scores across aspects. While Dataset Description (2.92/5), Structural Organization (2.82/5), Data Structure (2.7/5), and Content Comprehensiveness (2.48/5) received relatively higher scores, areas like Data Preprocessing (1.21/5) and Usage Guidance (1.14/5) scored lower. This aligns with the quantitative analysis indicating a greater emphasis on the Dataset Description and Dataset Structure sections. Notably, even the highest-scoring aspect, Dataset Description, falls below 60% of the highest possible score, indicating room for improvement in dataset documentation. Content Comprehensiveness has the strongest positive correlation with the overall quality of a dataset card (coefficient: 0.3935, p-value: 3.67E-07), emphasizing the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Aspects like Dataset Description (coefficient: 0.2137, p-value: 3.04E-07), Structural Organization (coefficient: 0.1111, p-value: 2.17E-03), Data Structure (coefficient: 0.0880, p-value: 6.49E-03), and Data Preprocessing (coefficient: 0.0855, p-value: 2.27E-03) also contribute significantly to people's evaluations of dataset documentation quality. Moreover, the length of a dataset card is positively related to Content Comprehensiveness (p-value: 1.89E-11), reinforcing the importance of detailed documentation in enhancing dataset quality and usability.

7 RELATED WORKS

Datasets have long been seen as a significant constraint in machine learning research (Halevy et al., 2009; Sun et al., 2017). The process of creating datasets remains arduous and time-intensive, primarily due to the costs of curation and annotation (IBM, 2020). Moreover, the quality of data plays a pivotal role in shaping the outcomes of machine learning research (Liang et al., 2022). Consequently, a profound understanding of datasets is indispensable in machine learning research, and this understanding is most effectively conveyed through comprehensive dataset documentation. A long-standing problem in the literature is the absence of an established industry standard for data documentation. Much existing work has therefore focused on exploring, conceptualizing, and proposing different dataset documentation frameworks. Data-focused tools such as datasheets for datasets and data nutrition labels have been proposed to promote communication between dataset creators and users and to address the lack of industry-wide standards for documenting AI datasets (Bender & Friedman, 2018; Bender et al., 2021; Pushkarna et al., 2022; Gebru et al., 2021; Holland et al., 2018; Chmielinski et al., 2022; Papakyriakopoulos et al., 2023). Additionally, some studies leverage human-centered methods to scrutinize the design and evaluation of dataset documentation (Fabris et al., 2022; Mahajan & Shaikh, 2021; Hanley et al., 2020; Hutiri et al., 2022).
In the library domain, numerous works have proposed methods to tackle the absence of universally accepted guidelines for publishing library linked data, aiming to enhance data quality, promote interoperability, and facilitate the discoverability of data resources (Villazon-Terrazas et al., 2011; Hidalgo-Delgado et al., 2017; Abida et al., 2020). These tools and frameworks provide detailed information on the composition, collection process, recommended uses, and other contextual factors of datasets, promoting greater transparency, accountability, and reproducibility of AI results while mitigating unwanted biases in AI datasets. They also enable dataset creators to be more intentional throughout the dataset creation process. Consequently, datasheets and other forms of data documentation are now commonly included with datasets, helping researchers and practitioners to select the most appropriate dataset for their particular needs. Despite the proliferation of dataset documentation tools and the growing emphasis on them, the current landscape of dataset documentation remains largely unexplored. In this paper, we present a comprehensive analysis of AI dataset documentation on Hugging Face to provide insights into current dataset documentation practices.

8 DISCUSSION

In this paper, we present a comprehensive large-scale analysis of 7,433 AI dataset cards on Hugging Face. The analysis offers insights into the current state of adoption of dataset cards by the community, evaluates the effectiveness of current documentation efforts, and provides guidelines for writing effective dataset cards. Overall, our main findings cover 5 aspects:

• **Varied Adherence to the Community-Endorsed Dataset Card:** We observe that highly downloaded dataset cards tend to adhere more closely to the community-endorsed dataset card structure.

• **Varied Emphasis on Sections:** Our analysis of individual sections within dataset cards reveals that practitioners place varying levels of emphasis on different sections. For instance, among the top 100 downloaded dataset cards, the *Dataset Description* and *Dataset Structure* sections receive the most attention. In contrast, the *Considerations for Using the Data* section garners notably lower engagement across all download levels, accounting for only approximately 2% of dataset card content. This discrepancy can be attributed to the section's content, which involves detailing limitations, biases, and the societal impact of datasets – a more complex and nuanced endeavor. An internal user study conducted by Hugging Face ([HuggingFace](https://huggingface.co)) also identified the *Limitation* section within this category as the most challenging to compose.

• **Topics Discussed in Each Section:** Our examination of subsections within each section of dataset cards reveals a high completion rate for those suggested by the Hugging Face community, highlighting the effectiveness of the community-endorsed dataset card structure. In particular, our study places a special focus on the *Considerations for Using the Data* section, employing topic modeling to identify key themes, including technical and social aspects of dataset limitations and impact.

• **Importance of Including Usage Sections:** We observe that many dataset card creators go beyond the recommended structure by incorporating *Usage* sections, which provide instructions on effectively using the dataset.
Our empirical experiment showcases the potential positive impact of these *Usage* sections in promoting datasets, underscoring their significance.

• **Human Evaluation of Dataset Card Quality:** Our human evaluation of dataset card quality aligns well with our quantitative analysis. It underscores the pivotal role of Content Comprehensiveness in shaping people’s assessments of dataset card quality. This finding offers clear guidance to practitioners, emphasizing the importance of creating comprehensive dataset cards. Moreover, we establish a quantitative relationship between Content Comprehensiveness and the word length of dataset cards, providing a measurable method for evaluation.

**Limitations and Future Works** Our analysis of ML dataset documentation relies on a distinctive community-curated resource, Hugging Face, which may introduce biases and limitations due to the platform’s structure and coverage. For example, Hugging Face’s concentration on NLP could introduce biases into the dataset categories. However, our method is transferable and could easily be reproduced for another platform, facilitating future studies (Appendix E). Additionally, our analysis of completeness and informativeness is based on word count and topic modeling, which may not fully capture the nuances of the documentation. Furthermore, measuring dataset popularity based on downloads alone may not fully reflect a dataset’s impact; future research could consider additional factors, such as the creation time and research area of the dataset (Appendix D). Lastly, our human evaluation serves as a preliminary evaluation. Future analyses could involve a more diverse group of annotators with varying backgrounds and perspectives.

**Research Significance** To summarize, our study uncovers the current community norms and practices in dataset documentation, and demonstrates the importance of comprehensive dataset documentation in promoting transparency, accessibility, and reproducibility in the AI community. We hope to offer a foundational step in the large-scale empirical analysis of dataset documentation practices and to contribute to the responsible and ethical use of AI, while highlighting the importance of ongoing efforts to improve dataset documentation practices.

REPRODUCIBILITY STATEMENT

We have assembled a collection of dataset cards as a community resource, which includes extracted metadata such as the number of downloads and textual analyses. This resource, along with our analysis code, can be accessed at https://github.com/YoungXinyu1802/HuggingFace-Dataset-Card-Analysis. The Hugging Face datasets can be accessed through the Hugging Face Hub API, which is available at https://huggingface.co/docs/huggingface_hub/package_reference/hf_api.

ACKNOWLEDGMENTS

We thank Yian Yin and Nazneen Rajani for their helpful comments and discussions. J.Z. is supported by the National Science Foundation (CCF 1763191 and CAREER 1942926), the US National Institutes of Health (P30AG059307 and U01MH098953) and grants from the Silicon Valley Foundation and the Chan-Zuckerberg Initiative.

REFERENCES

Rabeb Abida, Emna Hachicha Belghith, and Anthony Cleve. An end-to-end framework for integrating and publishing linked open government data. In 2020 IEEE 29th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), pp. 257–262, 2020. doi: 10.1109/WETICE49692.2020.00057.

Shazia Afzal, Rajmohan C, Manish Kesarwani, Sameep Mehta, and Hima Patel. Data readiness report, 2020.
Ruth-Ann Armstrong, John Hewitt, and Christopher Manning. JamPatoisNLI: A Jamaican Patois natural language inference dataset. arXiv preprint arXiv:2212.03419, 2022.

Nabajeet Barman, Yuriy Reznik, and Maria Martini. Datasheet for subjective and objective quality assessment datasets, 2023.

Emily M Bender and Batya Friedman. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604, 2018.

Emily M Bender, Batya Friedman, and Angelina McMillan-Major. A guide for writing data statements for natural language processing, 2021.

Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR, 2018.

Kasia S Chmielinski, Sarah Newman, Matt Taylor, Josh Joseph, Kemi Thomas, Jessica Yurkofsky, and Yue Chelsea Qiu. The dataset nutrition label (2nd gen): Leveraging context to mitigate harms in artificial intelligence. arXiv preprint arXiv:2201.03954, 2022.

Marta R. Costa-jussà, Roger Creus, Oriol Domingo, Albert Domínguez, Miquel Escobar, Cayetana López, Marina Garcia, and Margarita Geleta. MT-adapted datasheets for datasets: Template and repository, 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.

Alessandro Fabris, Stefano Messina, Gianmaria Silvello, and Gian Antonio Susto. Tackling documentation debt: A survey on algorithmic fairness datasets. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO ’22, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450394772. doi: 10.1145/3551624.3555286. URL https://doi.org/10.1145/3551624.3555286.

Wenfei Fan and Floris Geerts. Foundations of data quality management. Synthesis Lectures on Data Management, 4(5):1–217, 2012.
Kz3yckpCN5
The implicit assumption of this work (revealed in the title) is that there exists a claim or understanding that imitating proprietary language models by sampling their outputs for training is all that is needed to achieve performance parity - however, I contend that this isn't the prevalent understanding.
THE FALSE PROMISE OF IMITATING PROPRIETARY LANGUAGE MODELS

Arnav Gudibande*, Eric Wallace*, Charlie Snell*
Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song
UC Berkeley
{arnavg, ericwallace, csnell22}@berkeley.edu

ABSTRACT

An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). In this work, we critically analyze this approach of imitating language models. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B–13B), data sources, and imitation data amounts (0.3M–150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models—they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT’s style but not its factuality. Overall, we conclude that while model imitation can be useful for training models to follow instructions and avoid toxic outputs, it falls short of its full promise in many ways. In particular, there exists a substantial capabilities gap between open and closed LMs that we find cannot be bridged merely by adding more imitation data. Instead, we find that fine-tuning more capable base LMs has a far more substantial effect on closing this gap. In turn, we argue that the higher leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.

1 INTRODUCTION

The recent release of powerful language models (LMs) such as ChatGPT (OpenAI, 2022), Bard (Pichai, 2023), and Claude (AnthropicAI, 2023) might herald a future where the best AI systems are provided primarily as a fee-based API by large companies. At the same time, open-source LMs are becoming increasingly accurate, with models like LLaMA (Touvron et al., 2023) and FLAN-T5 (Chung et al., 2022) providing many of the same basic capabilities as their commercial counterparts, albeit at a lower level of performance (Touvron et al., 2023; Chung et al., 2022). This presents an important question, whose answer will have profound future implications: will the most powerful LMs be closed-source, or will they be freely distributed for anyone to use, modify, and extend? Both possibilities have important pros and cons, and implications for policy, corporate strategy, and the future of scientific inquiry.

In this work, we study one possible resolution to this question: model imitation (Wallace et al., 2020; Orekondy et al., 2019). The premise of model imitation is that once a proprietary LM is made available via API, one can collect a dataset of API outputs and use it to fine-tune an open-source LM. In theory, this imitation process may provide an easy method to distill (Hinton et al., 2014) the capabilities of any proprietary model, thus implying that open-source LMs will always be competitive with their commercial counterparts.
To date, recent works have looked to imitate OpenAI’s best systems, e.g., Self-Instruct (Wang et al., 2023) and Alpaca (Taori et al., 2023), and initial results suggest that these models have achieved near parity with proprietary models. Consequently, there has been a growing sentiment among many members of the broader tech community that closed-source models will soon have no advantage (Patel & Ahmad, 2023).

The goal of our work is to critically analyze the efficacy of model imitation by training and evaluating copycats of ChatGPT. We first collect datasets that focus on either imitating ChatGPT for a specific task or imitating it broadly across all behaviors. We then fine-tune LMs on these datasets using a range of model sizes (1.5B–13B), base models (GPT-2 and LLaMA), and data amounts (0.3M–150M tokens). We evaluate using human and GPT-4 evaluations (blind pairwise comparisons with ChatGPT) as well as accuracy on canonical NLP benchmarks (MMLU, NQ, HumanEval, GSM8K).

Figure 1: Crowdworkers initially rate the quality of our imitation models highly, as ~70% of their outputs are rated as equal or better than those of ChatGPT (left). However, as we train on more imitation data, our models fail to further close the gap, and even begin to regress along other axes, e.g., factual knowledge according to Natural Questions (center). Our main conclusion is that the biggest limitation of current open-source LMs is their weaker base capabilities. In turn, the best way for the open-source community to improve models is by increasing these capabilities (e.g., via scaling, better pretraining data, etc.) rather than fine-tuning on more and more imitation data (right).

We were initially surprised by how much imitation models improve over their base models: they are far better at following instructions, and their outputs appear similar to ChatGPT’s. This was further supported by both human and GPT-4 evaluations, where the outputs of our best imitation model were rated as competitive with ChatGPT (e.g., Figure 1, left). However, when conducting more targeted automatic evaluations, we found that the imitation models close little to none of the large gap between LLaMA and ChatGPT. In particular, we demonstrate that imitation models improve on evaluation tasks that are heavily supported in the imitation training data. On the other hand, the models do not improve (or even decline in accuracy) on evaluation datasets for which there is little support. For example, training on 100k ChatGPT outputs from broad-coverage user inputs provides no benefits to Natural Questions accuracy (e.g., Figure 1, center), but training exclusively on ChatGPT responses for Natural-Questions-like queries drastically improves task accuracy. Consequently, we conclude that broadly matching ChatGPT via imitation alone may require (1) a concerted effort to collect extremely large-scale imitation datasets and (2) far more diverse and higher quality imitation data than is currently available.

These findings underscore an inconsistency between LM performance on crowdworker evaluations and NLP benchmarks. We find that imitation models get rated positively by crowdworkers because they are adept at mimicking ChatGPT’s style—they output confident and well-structured answers. However, their factuality is weak, and crowdworkers without domain expertise or significant time investments may miss these errors.
Overall, our key takeaway is that model imitation is not a free lunch: there exists a capabilities gap between today’s open-source LMs and their closed-source counterparts that cannot be closed by cheaply fine-tuning on imitation data. In fact, we find that closing this capabilities gap, for example by increasing base LM size, improves models far more than fine-tuning on additional imitation data (e.g., Figure 1, right). This implies that the higher leverage action for improving open-source LMs is to tackle the difficult challenge of developing better base models (e.g., by scaling up models, improving pre-training data quality, improving pre-training, etc.), rather than taking the shortcut of imitating proprietary systems. Nevertheless, we believe that model imitation has utility in subverting the need to annotate high-quality finetuning data if one has a sufficiently strong base LM.

2 What is Model Imitation?

Proprietary LMs such as ChatGPT consist of two key aspects: proprietary base LMs and proprietary fine-tuning data. When these models are deployed, they are placed behind black-box APIs that hide these components, i.e., users can query the API with arbitrary inputs but cannot see the model’s training data, next-token probabilities, and architecture.

Query: How does actor critic improve over REINFORCE?

ChatGPT Response: Actor-critic algorithms are an extension of the REINFORCE algorithm that combine both policy-based and value-based methods. Some of the changes that actor-critic algorithms make over standard REINFORCE:
1. Function Approximation: Actor-critic algorithms typically use function approximation methods to represent the value and policy functions. This allows them to learn in high-dimensional state spaces.
2. Critic Network: In actor-critic algorithms, there is an additional critic network that estimates the value function. The critic provides an estimate of the value of a state, which is then used to update the policy. This reduces the variance of the update and stabilizes learning.
3. Advantage Estimation: Actor-critic algorithms use the advantage function to estimate the quality of an action taken by the policy. The advantage function is the difference between the return of an action and the estimated value of a state. This reduces the variance of the update and allows for more efficient learning.
Overall, actor-critic algorithms provide a more efficient and stable way of learning in reinforcement learning tasks, particularly in high-dimensional state spaces.

Imitation Model Response: Actor-critic algorithms are a type of reinforcement learning algorithm that improves the REINFORCE algorithm by combining the actor (policy) and critic (value) components. The main changes that actor-critic algorithms make over standard REINFORCE are:
1. Actor-critic algorithms use a single critic to evaluate the value function of the policy, whereas standard REINFORCE uses a separate critic for each action.
2. Actor-critic algorithms use a single actor to determine the actions to take, whereas standard REINFORCE uses a separate actor for each action.
3. Actor-critic algorithms learn the policy and actor simultaneously, whereas standard REINFORCE learns the policy and actor separately.
4. Actor-critic algorithms use an optimistic estimate of the value function to determine the actor’s actions, whereas standard REINFORCE uses the critic’s estimate.
Overall, actor-critic algorithms combine the critic’s and actor’s estimates in order to learn both the policy and actor simultaneously, while standard REINFORCE learns them separately.
This can lead to faster convergence and potentially better generalization, but the choice between the two algorithms depends on the specific problem and the desired behavior of the agent.

Figure 2: ChatGPT and our best imitation model produce answers with similar style—they start with an overview paragraph, a list of differences, and end with a summary. However, while ChatGPT’s answer is mostly correct, the imitation model’s answer is completely inaccurate despite sounding authoritative. We show correct sentences in green, ambiguously-correct sentences in yellow, and incorrect ones in red.

In model imitation, the goal is to collect data using the API to train an LM that achieves comparable performance to it, i.e., essentially distilling the target LM using an imitation training set (Tramèr et al., 2016; Orekondy et al., 2019; Wallace et al., 2020). Potential reasons for performing imitation range from benign to illegal:
- Academics can use powerful imitation LMs to drive new research projects.
- Companies can use imitation LMs to launch services that compete with the proprietary system.
- Malicious users could use imitation models to accelerate progress on nefarious use cases.

**Local versus Broad Imitation.** When performing model imitation, one will either look to perform local “task-specific” imitation or more global “broad-coverage” imitation. The former imitates the target model on just a specific task or domain, e.g., sentiment analysis of tweets or question answering over Wikipedia entities. The latter focuses on the more ambitious goal of broadly imitating the target model across its full spectrum of behaviors, domains, and tasks. Broad-coverage imitation is challenging because (1) one must collect an extremely diverse imitation dataset and (2) imitation models must capture this wide data distribution and generalize similarly to the target model on a myriad of held-out examples.

**Recent Work on Model Imitation.** A surge of recent publications has attempted to both locally imitate proprietary models for specific tasks (Sun et al., 2023; Hsieh et al., 2023; Honovich et al., 2022) and broadly imitate models, e.g., Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Koala (Geng et al., 2023), GPT4ALL (Anand et al., 2023), and more (Wang et al., 2023; Peng et al., 2023). Many of these works conclude that their imitation models achieve near parity with the target model, e.g., Vicuna claims to achieve 90% of the quality of ChatGPT and Google Bard. These claims have since propagated into the broader tech community, leading many to believe that open-source LMs are rapidly closing the gap to their closed-source counterparts and that top AI companies will soon have no competitive advantage (Patel & Ahmad, 2023).

**Our goal.** The goal of our paper is to critically evaluate this line of reasoning. In particular, we train models to imitate ChatGPT while experimenting with different decisions (e.g., data collection strategies, data amounts, and base LMs) and conducting rigorous automatic and human evaluations.

3 Building Imitation Datasets

We consider both task-specific and broad-coverage imitation. For either form of model imitation, one must curate a set of inputs to query to the target model. In practice, one may have a set of inputs in mind (e.g., sentences from Wikipedia, tweets about Coca-Cola), and if this set of input examples is sufficiently large, one can use them to query the target model and build an imitation dataset.
In cases when it is impractical or labor-intensive to create a large and diverse pool of inputs, one can also create synthetic examples by prompting LMs to iteratively generate examples that are from the same distribution as an initial smaller seed set of inputs (Wang et al., 2023; Honovich et al., 2022).

**Task-specific imitation.** For task-specific imitation, we focus on question answering and abstractive text summarization. We describe both of these below, with additional details in Appendix A:
• NQ-synthetic: For question answering, we created an imitation dataset tailored to Natural Questions (Kwiatkowski et al., 2019a), i.e., factual knowledge about Wikipedia entities. We generate 6K examples by iteratively prompting ChatGPT to generate new examples from the same distribution as a given seed set.
• TLDR-Synthetic: For summarization, we generate ChatGPT summaries for a set of 200k passages from the tl;dr summarization dataset (Völske et al., 2017). For evaluation, we follow the procedure in Stiennon et al. (2022) and report ROUGE-1 score on the CNN/Daily Mail news summarization (Chen et al., 2016) test set (see Appendix D for additional evaluations).

**Broad-coverage imitation.** For the more ambitious goal of broad-coverage imitation, we leverage the fact that models such as ChatGPT have become so popular that their inputs and outputs are already widely posted on the web. Thus, we can collect a large, diverse, and generally high-quality dataset of examples for free without ever having to interact with the company’s API. In particular, we collect examples from three sources:
• ShareGPT: we use approximately 90K dialogues shared by users on the website ShareGPT. To maintain data quality, we deduplicated on the query level and removed any non-English conversations using a language detector (see the sketch at the end of this section). This leaves approximately 50K examples, each of which consists of multiple turns of dialogue.
• HC3 (Guo et al., 2023): we use the ChatGPT responses from the English Human-ChatGPT Comparison Corpus. This contains ~27K ChatGPT responses for ~24K questions.
• Discord ChatGPT Bots: we use 10k input-output examples collected from the r/ChatGPT and Turing AI Discord servers, two public channels that allow users to interact with ChatGPT bots.

We refer to this dataset as ShareGPT-Mix and show qualitative examples in Appendix A. We find that ShareGPT-Mix is generally of high quality. First, there is high diversity in the instructions: for each user query in the dataset, the most similar other user query has an average BLEU score similarity of just 8%. This is considerably lower than that of other datasets such as Super-NaturalInstructions (Wang et al., 2022), which is at 61% BLEU similarity for a similarly sized set of examples. We also manually reviewed different examples and logged their semantic category (see Table 6 in Appendix A). The dataset contains diverse categories, including many multi-lingual conversations and coding tasks.
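To make the ShareGPT cleaning steps concrete, here is a minimal sketch of query-level deduplication and English filtering. The JSON layout is an assumed ShareGPT-style format, and `langdetect` stands in for whichever off-the-shelf language detector was actually used.

```python
# A minimal sketch (under stated assumptions) of the cleaning steps above.
import json
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def first_user_query(dialogue):
    """Return the first human turn, used here as the deduplication key."""
    for turn in dialogue.get("conversations", []):
        if turn.get("from") == "human":
            return turn["value"].strip()
    return ""

def clean_sharegpt(path):
    with open(path) as f:
        dialogues = json.load(f)
    seen, kept = set(), []
    for d in dialogues:
        query = first_user_query(d)
        if not query or query in seen:
            continue                      # deduplicate on the query level
        try:
            if detect(query) != "en":     # drop non-English conversations
                continue
        except LangDetectException:
            continue
        seen.add(query)
        kept.append(d)
    return kept
```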
4 Main Results

We train imitation LMs using our ShareGPT-Mix and NQ-synthetic datasets, and we conduct both human and automatic evaluations. We focus our initial results on the ShareGPT-Mix models.

4.1 Training and Evaluation Setup

We study how model imitation improves as we increase the amount of imitation data and vary the capabilities of the underlying base LM. We consider decoder-only models ranging in size from 1.5B to 13B parameters: GPT-2 1.5B (Radford et al., 2019), LLaMA 7B (Touvron et al., 2023), and LLaMA 13B.\footnote{We use model scale as a proxy for base-model quality; however, model quality could also be improved by other factors such as the quality of pre-training data, architectural improvements, novel pre-training methods, etc.} We also study the effect of data scale by fine-tuning with different-sized data subsets.

Figure 3: We find that GPT-4 and crowdworker evaluations show the same trends. As we scale up the amount of imitation data, GPT-4’s ratings of our imitation models are relatively flat (left). However, as we scale up the base model size, GPT-4 rates the quality of our imitation models increasingly highly (right).

During training, we chunk the conversations into 2048-token blocks. We introduce special tokens that demarcate the beginning of each user query and model output. We fine-tune using standard LM losses on only the model outputs. Following Chowdhery et al. (2022) and Chung et al. (2022), we train for one epoch using the AdamW optimizer with gradients re-scaled by the magnitude of each weight. We use a learning rate of $2e^{-3}$ with 1000 steps of linear warm-up from 0, and we train with batch size 32. All models are trained in JAX using a combination of fully sharded data parallelism and tensor parallelism on TPUs hosted by Google Cloud or on a single Nvidia DGX server with 8 A100 GPUs.

For automatic evaluations, we measure performance on 5-shot MMLU (Hendrycks et al., 2021), 3-shot Natural Questions (Kwiatkowski et al., 2019b), 0-shot HumanEval (Chen et al., 2021b), and 6-shot chain-of-thought GSM8K (Cobbe et al., 2021). We report the original scoring metrics associated with each dataset (e.g., exact match for NQ). For human evaluation, we conduct blind pairwise output comparisons using Mechanical Turk. In our UI, we present each rater with a task instruction and the output of two unknown models, one of which is ChatGPT and the other is one of our imitation models (see Figure 7 in Appendix B). The raters select which output they prefer or whether the two outputs are equal in quality. We use approximately 70 crowd workers and evaluate on 255 held-out prompts.\footnote{To mitigate any test-set leakage, we filtered out queries with a BLEU score greater than 20% with any example from our training set. We also removed non-English and coding-related prompts, as these cannot be reliably reviewed by crowd workers. We pay the evaluators roughly $15/hour based on the average time it takes to complete a task. We select workers with a $\geq 95\%$ approval rating, who are located in an English-speaking country, and who have at least 100 HITs completed.} We report the average preference across the dataset and one standard deviation around the mean. Additionally, we conduct evaluations using GPT-4 and present additional details of the prompts used in Appendix C. We will release all of our training code, pre-trained models, and human evaluation test-set.\footnote{Training codebase available at https://github.com/young-geng/EasyLM, test-set available at https://github.com/arnav-gudibande/koala-test-set, and models available at https://huggingface.co/young-geng/koala.}
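As an illustration of the objective described above (not the authors' JAX implementation), the following PyTorch sketch applies the LM loss only to tokens marked as model outputs; the HuggingFace-style `model(input_ids).logits` interface is an assumption.

```python
# A minimal sketch: chunk to 2048-token blocks, mask user-query tokens,
# and compute next-token cross-entropy on model-output tokens only.
import torch
import torch.nn.functional as F

BLOCK = 2048
IGNORE = -100  # label value that cross_entropy ignores

def imitation_loss(model, input_ids, is_model_output):
    """input_ids: (batch, seq) token ids; is_model_output: (batch, seq) bool mask."""
    input_ids = input_ids[:, :BLOCK]
    labels = input_ids.clone()
    labels[~is_model_output[:, :BLOCK]] = IGNORE  # loss only on model outputs
    logits = model(input_ids).logits              # (batch, seq, vocab)
    # shift for next-token prediction: position i predicts token i+1
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=IGNORE,
    )
```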
4.2 Qualitative Analysis and Crowdworker Evaluation Show Promise

**Imitation models are rated highly by crowdworkers.** We were initially surprised at the quality of our ShareGPT-Mix models: while the base GPT-2 or LLaMA models often fail to follow instructions, the imitation models produce outputs that stay on task. These initial promises were further supported, as crowdworkers and GPT-4 often rated the quality of the imitation models’ outputs as equal or better than those of ChatGPT, especially as we scale up model size (right of Figures 1 and 3). However, we also find that human ratings quickly saturate as we scale up the amount of imitation data (left of Figures 1 and 3), alluding to possible shortcomings of this approach.

Figure 4: Automatic evaluations. As we increase the amount of imitation data, there is little improvement on various benchmarks, or even performance regressions (top). On the other hand, scaling up the base LM steadily improves results (bottom), suggesting that the key difference between open-source and closed-source LMs is a raw capabilities gap, rather than the finetuning data used.

4.3 Targeted Automatic Evaluations Expose Failure Modes

**Broad-coverage imitation models fail to close the gap across most tasks.** We next ran targeted automatic evaluations to isolate whether specific model capabilities improved after imitation. We found that across every benchmark that we measured, ShareGPT-Mix imitation models do not improve (or even decline) in accuracy as compared to the base model, even when adding additional imitation data (Figure 4, top). This shows that imitating ChatGPT on our broad-coverage imitation data does not improve the model across most axes, e.g., factual knowledge, coding, and problem solving. We argue that this occurs because ChatGPT has captured far more knowledge and capabilities from the web as compared to LLaMA. In turn, it is unreasonable to expect that a small amount of imitation data (e.g., 1000x less data than pre-training) would enable one to bridge this gap. Instead, we argue that broadly matching ChatGPT using weaker base LMs such as LLaMA-13B would require a concerted effort to collect an extremely large and diverse imitation dataset that is far closer to the scale of pretraining. It is currently unclear whether such an effort is worth undertaking or feasible.

**Training local imitation models is far more successful.** On the other hand, our model trained to locally imitate ChatGPT using the NQ-synthetic data is far more successful. In particular, the imitation models’ performance improves significantly as compared to the LLaMA base model (see Table 1) and quickly approaches the accuracy of ChatGPT. This demonstrates that it is far more feasible to distill a specific behavior from ChatGPT than to broadly match its capabilities.

**An empirical trade-off exists between different evaluation datasets.** A curious phenomenon is that training on more ShareGPT-Mix data hurts performance compared to the base model on some of our evaluations (compare the black versus blue lines in Figure 4). We believe that these performance regressions arise from a distribution shift and tension between the conversational-style fine-tuning data and the downstream benchmarks. An open problem is whether these performance regressions can be mitigated using regularization or by mixing in pre-training data during fine-tuning.

| Model | Imitation Data | NQ | CNN |
|---------|----------------------|-----|-----|
| 7B | – | 17 | 22.1 |
| 7B | ShareGPT-Mix | 10 | 28.7 |
| 7B | Targeted Imitation | 22 | 29.2 |
| 13B | – | 20 | 27.3 |
| 13B | ShareGPT-Mix | 15 | 30.7 |
| 13B | Targeted Imitation | 27 | 33.6 |
| ChatGPT | – | 31 | 39.9 |

Table 1: We train imitation models on broad-coverage data from ShareGPT-Mix or targeted data (NQ-synthetic or TLDR-Synthetic).
The broad-coverage models do not improve on zero-shot NQ (or even degrade in performance) and only improve slightly on CNN summarization, demonstrating the limitations of imitating the capabilities of ChatGPT holistically. However, the models trained on targeted data substantially close the gap to ChatGPT on both NQ and CNN summarization, showing that local imitation of a model is far more feasible in practice.

**Improving base LMs is the highest leverage action.** Rather than increasing imitation data size, we find that using better base LMs (by increasing base model size) does lead to substantial accuracy improvements (Figure 4, bottom). This aligns with our previous claim: there exists a capabilities gap between today’s open-source LMs and their closed-source counterparts that cannot be closed by cheaply fine-tuning on imitation data. Instead, the best way to improve open-source LMs is to tackle the difficult challenge of developing better base LMs, whether it be via model scaling or other means.

### 4.4 IMITATION MODELS LEARN STYLE, NOT CONTENT

Finally, we investigate why there is a strong discrepancy between crowdworker evaluations, where imitation models appear quite strong, and results on NLP benchmarks, where imitation models appear no better than base LMs. We find that imitation models perform well according to human evaluations because they are adept at mimicking ChatGPT’s style—they output fluent, confident, and well-structured answers. In particular, we show in Table 2 that as we add more imitation data, ChatGPT and our imitation models produce outputs with a similar length, similar word choice, similar use of an authoritative tone, and similar low-level structure (e.g., use of lists). However, as shown in our previous automatic evaluations, the imitation models have weak factuality. In other words, imitation models actually embody some of the worst aspects of AI assistants: their answers sound confident but are less factual than ChatGPT’s. This is perhaps best elucidated in Figure 2, where the imitation model outputs an answer that is similar in style to ChatGPT’s answer but is completely incorrect.

**Human evaluation is increasingly hard.** Unfortunately, crowd workers without domain expertise or significant time investments can easily be deceived by stylistic components—answers that sound confident and correct are often spuriously preferred. To improve human evaluation, it is thus increasingly necessary both to engage domain experts and to curate a set of highly difficult prompts that can rigorously test different models’ capabilities. Surprisingly, our GPT-4 evaluations also showed the same trends as our crowdworker evaluations (albeit with a slightly larger absolute preference for ChatGPT’s outputs). While this suggests that GPT-4 may be a viable candidate to cheaply emulate human evaluations on some tasks, it also implies that LLMs may replicate some human-like cognitive biases. We look forward to future work that further investigates this possibility.

| Metric | LLaMA | 20M | 80M | 150M | ChatGPT #2 |
|--------|-------|-----|-----|------|------------|
| If ChatGPT outputs a list, do we? | 13% | 50% | 67% | 81% | 83% |
| If ChatGPT outputs a summary paragraph, do we? | 2% | 40% | 42% | 48% | 55% |
| Unigram intersection w/ ChatGPT’s output | 19.5 | 40.4 | 41.9 | 42.5 | 49.2 |
| Pearson correlation in length w/ ChatGPT’s output | -0.11 | 0.51 | 0.62 | 0.62 | 0.67 |
| Outputs are in authoritative tone according to GPT-4 | 57% | 99% | 98% | 98% | 98% |

Table 2: As we add more imitation data, the style of our models’ outputs is increasingly similar to that of ChatGPT. In particular, we generate outputs from our imitation models and compare them to a random ChatGPT response across different metrics. We also report a rough “upper bound” by comparing a second random ChatGPT output to the original ChatGPT response (ChatGPT #2).
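To illustrate how style-similarity metrics of the kind shown in Table 2 could be computed, here is a minimal sketch; the heuristics (e.g., what counts as a list) are illustrative assumptions rather than the authors' exact definitions.

```python
# A minimal sketch of style-similarity metrics between paired outputs.
import re
import numpy as np

def has_list(text):
    # crude heuristic: any numbered or bulleted line counts as a list
    return bool(re.search(r"^\s*(\d+\.|[-*])\s", text, flags=re.M))

def unigram_overlap(a, b):
    ua, ub = set(a.lower().split()), set(b.lower().split())
    return len(ua & ub) / max(1, len(ua | ub))

def style_report(ours, chatgpt):
    pairs = list(zip(ours, chatgpt))
    list_agree = [has_list(o) for o, c in pairs if has_list(c)]
    overlap = [unigram_overlap(o, c) for o, c in pairs]
    lens = np.array([(len(o.split()), len(c.split())) for o, c in pairs])
    length_corr = np.corrcoef(lens[:, 0], lens[:, 1])[0, 1]  # Pearson r
    return {
        "if ChatGPT outputs a list, do we?": float(np.mean(list_agree)),
        "unigram intersection": float(np.mean(overlap)),
        "length correlation": float(length_corr),
    }
```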
**Imitation models inherit the safety and toxicity style of the teacher model.** Finally, despite imitation only providing benefits in mimicking the “style” or “persona” of the target model, there is still value in doing so. For example, OpenAI has carefully and deliberately trained ChatGPT to be “harmless” to end users, often avoiding toxic outputs and refusing to respond to questionable user requests. We find that our imitation models also inherit these components. In particular, we show in Figure 5 that as we finetune on more imitation data, the imitation model’s outputs become less toxic on RealToxicityPrompts (Gehman et al., 2020), as the model learns to abstain in a similar fashion to ChatGPT. Consequently, we conclude that model imitation is highly effective in cases when one has a powerful base LM and is looking to subvert the need to annotate expensive finetuning data.

5 DISCUSSION

**Finetuning as a simple knowledge extractor.** Our results show that a modest amount of finetuning provides little to no improvement in an LM’s knowledge or capabilities. We thus agree with the view that pre-training is the main source of an LM’s capabilities, and that finetuning acts as a lightweight method to train the model to extract its own knowledge (Schulman, 2023). This is the reason why improving models by imitating ChatGPT on a small set of data is insufficient: the base knowledge is largely unaffected. Furthermore, this view suggests that at finetuning time you may even want to avoid introducing new knowledge (i.e., do not imitate better models), as you would otherwise be training the model to guess or hallucinate its answers, rather than actually doing the task as intended (Gao, 2021; Goldberg, 2023; Schulman, 2023).

**Should you be worried about imitation?** Imitating proprietary LMs comes with many potential implications for small and large companies alike. Our results suggest that the efficacy of model imitation is limited when there is a large gap between the base and target LM. Thus, we believe that companies who can establish a capabilities gap using large amounts of data, compute, or algorithmic advances are the ones who are best positioned to build and maintain competitive advantages. On the other hand, companies that look to build moats by using off-the-shelf LMs with proprietary fine-tuning datasets may be comparatively more vulnerable to imitation.

**Potential confounders to our findings.** While we believe our findings are well supported, there are a few potential hidden confounders that could change our conclusions. First, as we are unaware of the pre-training data used by ChatGPT, it is possible that some of the tasks that we evaluate on could have been contaminated into ChatGPT’s training data, thus inflating its accuracy numbers. Moreover, to conduct imitation, we perform supervised learning on the outputs from the target model. However, it may also be possible to use the target model to perform RLHF or constitutional AI (Christiano et al., 2017; OpenAI, 2022; Bai et al., 2022) to further improve results.
Lastly, we only considered relatively simple methods for collecting imitation data; there may be more advanced methods (e.g., active learning) that could improve the effectiveness or efficiency of model imitation.

**Implications for other forms of model imitation.** There has been a flurry of recent work that performs model imitation in more indirect ways than we study here. For example, the training process of many recent vision-language models (Li et al., 2022; Liu et al., 2023; Ye et al., 2023; Zhu et al., 2023) includes ChatGPT or GPT-4 outputs at some stages. Furthermore, it has become common to use large LMs in various ways during the data annotation and creation process, e.g., to aid crowd workers, to perform data augmentation, to identify mislabeled data, and more. Our findings may have implications for these approaches, e.g., it is likely that vision-language models that include OpenAI data may have similar failure modes to the ones described in our work.

**Technical limitations of model imitation.** Imitating proprietary models also has various technical limitations: the models inherit the weaknesses and biases of proprietary models, imitation does not allow one to directly improve on the design decisions of closed AI companies (e.g., data annotation strategies), and these systems are roughly upper-bounded by the capabilities of the target proprietary model. Moreover, it is difficult to answer certain scientific questions using imitation models because they include proprietary black-box models in their training pipeline.

6 RELATED WORK

**Model distillation.** Model imitation is similar to model distillation (Hinton et al., 2014), where one trains a student model to imitate a teacher. While conceptually similar, there are several major practical differences. For distillation, the training data, model architecture, and hyperparameters are known for the teacher. In model imitation, one tries to imitate the teacher without this knowledge. Moreover, for distillation it is common to use training objectives that utilize the probability distribution of the teacher, whereas in model stealing such a distribution is typically unavailable.

**Past work on model imitation.** Prior work has shown that model imitation is possible for various domains (Lowd & Meek, 2005; Tramèr et al., 2016; Orekondy et al., 2019), including language classifiers (Krishna et al., 2020; Pal et al., 2019) and machine translation systems (Wallace et al., 2020). Nevertheless, past work considers a setting where models are trained from scratch, and thus the main proprietary asset of a model is the company’s internal training data. In our setting, systems like ChatGPT are proprietary because they also leverage OpenAI’s internal pre-trained LMs that are stronger than any available open-source LM.

**Defending against model imitation.** Our results show that imitation is a moderate concern for companies. In turn, there is a need to develop methods to mitigate or detect imitation. There is an existing body of work in this direction, e.g., one can detect whether a particular model is trained via imitation (Juuti et al., 2019; Szyller et al., 2019; Krishna et al., 2020; Maini et al., 2021) or slow model stealing by sacrificing some performance (Orekondy et al., 2020; Dziedzic et al., 2022a; Wallace et al., 2020; Dziedzic et al., 2022b). Unfortunately, existing methods often exhibit too severe of a tradeoff to be deployable in practice.

7 CONCLUSION AND FUTURE WORK

In this work, we critically analyzed the efficacy of model imitation.
We showed that imitation can indeed improve the style, persona, and instruction adherence of open-source LMs. However, imitation falls short in improving LMs across more challenging axes such as factuality, coding, and problem solving. On one hand, these results indicate that businesses can successfully establish and safeguard a competitive advantage by pre-training powerful base models. Conversely, they also imply that if two groups possess equally competent base LMs, one can easily mimic the persona and behavior of the other model without needing to annotate expensive fine-tuning data.

Moving forward, our findings raise a range of technical and societal questions. First, we show that existing crowd worker evaluations have trouble elucidating the differences between imitation models and proprietary ones, despite clear differences existing between them. In turn, the future of human evaluation remains unclear: how can we cheaply and quickly probe the utility of a powerful LLM? Second, given the large gap between LLaMA and ChatGPT (the latter model is faster, cheaper, and more accurate), and the insufficiencies of model imitation, there are obvious open questions on how to best improve open-source LMs (e.g., increasing model scale, improving pre-training data quality, developing new pretraining methods, etc.). Finally, our work raises ethical and legal questions, including whether the open-source community should continue to advance progress by directly imitating company products, as well as what countermeasures companies can take to protect and license their intellectual property. In future work, we hope to delve deeper into these issues and devise better methods for the ethical and responsible deployment of LMs.

ACKNOWLEDGEMENTS

We thank Nicholas Carlini, the members of Berkeley NLP, and the members of Berkeley RAIL for valuable feedback on this project. Eric Wallace is supported by the Apple Scholars in AI/ML Fellowship. Part of this research was supported with Cloud TPUs from Google’s TPU Research Cloud (TRC).

REFERENCES

Yuvanesh Anand, Zach Nussbaum, Brandon Duderstadt, Benjamin Schmidt, and Andriy Mulyar. GPT4All: Training an assistant-style chatbot with large scale data distillation from GPT-3.5-Turbo, 2023.

AnthropicAI. Introducing Claude, 2023.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. 2016.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. 2021a.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021b.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, 2023.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. NIPS, 2017.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Łukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021.

Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, and Nicolas Papernot. On the difficulty of defending self-supervised learning against model extraction. In ICLR, 2022a.

Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, and Nicolas Papernot. Increasing the cost of model extraction with calibrated proof of work. In ICLR, 2022b.
dBO8ZPQMVF
Can you help me understand better the relationship between MAS and a standard diffusion model? Can MDM be seen as the forward process, and the 3D consistency check as the reverse process? Or is this incorrect?
MAS: Multi-view Ancestral Sampling for 3D Motion Generation Using 2D Diffusion

Anonymous authors
Paper under double-blind review

Abstract

We introduce Multi-view Ancestral Sampling (MAS), a method for generating consistent multi-view 2D samples of a motion sequence, enabling the creation of its corresponding 3D counterpart. While abundant 2D samples are readily available, such as those found in videos, 3D data collection is involved and expensive, often requiring specialized motion-capture systems. MAS leverages diffusion models trained solely on 2D data to produce coherent and realistic 3D motions. This is achieved by simultaneously applying multiple ancestral samplings to denoise multiple 2D sequences representing the same motion from different angles. Our consistency block ensures 3D consistency at each diffusion step by combining the individual generations into a unified 3D sequence, and projecting it back to the original views for the next iteration. We evaluate MAS using 2D pose data from intricate and unique motions, including professional basketball maneuvers, rhythmic gymnastic performances featuring ball apparatus interactions, and horse obstacle course races. In each of these domains, MAS generates diverse, high-quality, and unprecedented 3D sequences that would otherwise require expensive equipment and intensive human labor to obtain.

1 Introduction

3D motion generation is an increasingly popular field that has important applications in computer-animated films, video games, virtual reality, and more. One of the main bottlenecks of current approaches is reliance on 3D data, which is typically acquired by actors in motion capture studios or created by professional animation artists. Both forms of data acquisition are not scalable, do not capture in-the-wild behavior, and leave entire motion domains under-explored. Nevertheless, the ubiquity of video cameras leads to countless high-quality recordings of a wide variety of motions.

A possible way to leverage these videos is extracting 3D pose estimations and using them as training data. Yet, the innate ambiguities of monocular 3D pose estimation, such as self-occlusions and the blurriness of quick motions, often lead to infeasible poses and temporal inconsistencies that make the quality of the prediction unsuitable for motion synthesis. Recently, Azadi et al. (2023) and Zhang et al. (2023) incorporated 3D motions estimated from images or videos into motion synthesis applications. The former used them to enrich an existing motion capture dataset, and the latter used them as reference motions while learning a physics-based Reinforcement Learning policy. In both cases, the quality issues were bridged using strong priors (either high-quality 3D data or physical simulation), hence remaining limited to the bounds dictated by them.

In this paper, we present Multi-view Ancestral Sampling (MAS), a novel method for utilizing a diffusion model trained on in-the-wild 2D motions to generate 3D motions, including challenging and diverse settings. MAS samples a 3D motion by simultaneously denoising multiple 2D views describing it. At each diffusion denoising step, all views are triangulated into one consistent 3D motion and then projected back to each view. This way we maintain multi-view consistency throughout the denoising process. To further encourage multi-view consistency, we use 3D noise that is projected to each view during sampling.

1 Please watch our supplementary offline web page to see the animated results.
Our code, together with the newly extracted datasets, will be made available upon publication.

Figure 1: Multi-view Ancestral Sampling (MAS) uses a 2D motion diffusion model to generate novel high-quality 3D motions. This technique enables learning intricate motions from monocular data only.

We show that MAS can sample diverse and high-quality motions, using a 2D diffusion model that was exclusively trained on motions obtained from in-the-wild videos. Furthermore, relying on ancestral sampling allows MAS to generate a 3D motion in a few seconds only, using a single standard GPU. MAS excels in scenarios where acquiring 3D motion capture data is impractical while video footage is abundant (see Figure 1). In such settings, we apply off-the-shelf 2D pose estimators to extract 2D motion from video frames, which are then used to train our diffusion prior. We demonstrate MAS in three domains: (1) professional basketball player motions extracted from common NBA match recordings, (2) horse motions extracted from equestrian contests, and (3) human-ball interactions extracted from rhythmic ball gymnastics performances. These datasets demonstrate motion domains that were previously under-explored due to 3D data scarcity. Our code, together with the newly extracted datasets, will be made available upon publication.

2 RELATED WORK

**3D Motion Synthesis.** Multiple works explore 3D motion generation using moderate-scale 3D motion datasets such as HumanML3D [Guo et al., 2022], KIT-ML [Plappert et al., 2016] and HumanAct12 [Guo et al., 2020]. With this data, synthesis tasks were traditionally learned using Auto-Encoders or VAEs [Kingma & Welling, 2013; Holden et al., 2016; Ahuja & Morency, 2019; Petrovich et al., 2022; Guo et al., 2022; Tevet et al., 2022]. Recently, Denoising Diffusion Models [Sohl-Dickstein et al., 2015; Song & Ermon, 2020] were introduced to this domain by MDM [Tevet et al., 2023], MotionDiffuse [Zhang et al., 2022a], MoFusion [Dabral et al., 2023], and FLAME [Kim et al., 2022]. Diffusion models were proven to have a better capacity to model the motion distribution of the data and provided opportunities for new generative tasks. Yet, the main limitation of all the above methods is their reliance on high-quality 3D motion capture datasets, which are hard to obtain and limited in domain and scale. In this context, SinMDM [Raab et al., 2023] enabled non-humanoid motion learning from a single animation; PriorMDM [Shafir et al., 2023] and GMD [Karunratanakul et al., 2023] presented fine-tuning and inference-time applications for motion tasks with few to no training samples, relying on a pre-trained MDM.

**Monocular Pose Estimation.** Monocular 3D pose estimation is a well-explored field [Kocabas et al., 2020; Shetty et al., 2023; Yu et al., 2023; Shan et al., 2023]. Its main challenge lies in the many ambiguities (e.g., self-occlusions and blurry motion) inherent to the problem. A parallel line of work is pose lifting from 2D to 3D. MotionBERT [Zhu et al., 2023] demonstrates a supervised approach to the task.
Some works propose to use only 2D data and learn in an unsupervised manner: Drover et al. [2018] suggest training a 2D discriminator to distinguish between random projections of the outputs of a 3D lifting network and the 2D data, while optimizing the lifting network to deceive the discriminator; ElePose [Wandt et al., 2021] trains a normalizing-flows model on 2D poses and then uses it to guide a 3D lifting network to generate 3D poses that, upon projection, have high probability w.r.t. the normalizing-flows model. They add self-consistency and geometric losses and also predict the elevation angle of the 2D pose, which is crucial for their success.

**Animal 3D Shape Reconstruction.** The recent MagicPony [Wu et al., 2023] estimates the pose of an animal given a single image by learning a per-category 3D shape template and per-instance skeleton articulations, trained to reconstruct a set of 2D images upon rendering. Yao et al. (2023) suggest a method for improving input images with occlusions/truncation via 2D diffusion. They then use a text-to-image diffusion model to guide a 3D optimization process to obtain shapes and textures that are faithful to the input images.

**Text to 3D Scene Generation.** DreamFusion (Poole et al., 2022) and SJC (Wang et al., 2022) introduced guidance of 3D content creation using diffusion models trained on 2D data. Poole et al. (2022) suggest SDS, a method for sampling from the diffusion model by minimizing a loss term that represents the distance between the model’s distribution and the noised sample distribution. They suggest harnessing SDS for 3D generation by repeatedly rendering a 3D representation (mostly NeRF (Mildenhall et al., 2020) based) through a differentiable renderer, noising the resulting images using the forward diffusion, getting a correction direction using the diffusion model, and then back-propagating gradients to update the 3D representation according to the predicted corrections. Although promising, their results are of relatively low quality and diversity and suffer from slow inference speed, overly saturated colors, lack of 3D consistency, and heavy reliance on text conditioning.

Follow-up works elaborate on the concept and suggest methods for improvement. Magic3D (Lin et al., 2023) adopt a coarse-to-fine optimization strategy and improved design choices; Fantasia3D (Chen et al., 2023) suggest first optimizing a geometric representation using normal maps rendered from it and fed into a text-to-image diffusion model, and then optimizing the surface material; ProlificDreamer (Wang et al., 2023c) suggest modeling the 3D scene as a random variable and using a particle-based variational inference approach, which enables the generation of diverse scenes; HIFA (Zhu & Zhuang, 2023) apply a DDIM (Song et al., 2022) sampling loop inside each optimization iteration, add depth and density priors, and improve design choices such as timestep scheduling; DreamTime (Huang et al., 2023) thoroughly explores timestep scheduling and weighting and suggests a monotone timestep schedule and a weight function that is divided into 3 sections: coarse, content, and detailed; Hertz et al. (2023) suggested the Delta Denoising Score to avoid mode-collapse in image editing applications. In a similar context, Instruct-NeRF2NeRF (Haque et al., 2023) edits a NeRF by gradually editing its source multi-view image dataset during training, using an image diffusion model.
### 3 Preliminary

**Diffusion Models and Ancestral Sampling.** Diffusion models are generative models that learn to gradually transform a predefined noise distribution into the data distribution. For the sake of simplicity, we consider the source distribution to be Gaussian. The forward diffusion process is defined by taking a data sample and gradually adding noise to it until we get a Gaussian distribution. The diffusion denoising model is then parameterized according to the reverse of this process, i.e., the model samples a random Gaussian sample and gradually denoises it until a valid sample is obtained.

Formally, the forward process is defined by sampling a data sample \( x_0 \sim q(x_0) \) and, for \( t \in 1, ..., T \), sampling \( x_t \sim q(x_t | x_{t-1}) = N(x_t; \sqrt{1 - \beta_t}x_{t-1}, \beta_t I) \), until reaching \( x_T \), which has a Gaussian distribution \( x_T \sim q(x_T) = N(x_T; 0, I) \). The reverse process, also called ancestral sampling, is defined by sampling a random Gaussian noise \( x_T \sim p_\phi(x_T) = N(x_T; 0, I) \) and then, for \( t = T, T-1, ..., 1 \), sampling \( \hat{x}_{t-1} \sim p_\phi(\hat{x}_{t-1} | x_t) \), until reaching \( \hat{x}_0 \), which should ideally approximate the data distribution. The model posterior \( p_\phi(x_{t-1} | x_t) \) is parameterized by a network \( \mu_\phi(x_t, t) \): \( p_\phi(x_{t-1} | x_t) = q(x_{t-1} | x_t, x_0 = \mu_\phi(x_t, t)) = N(x_{t-1}; \mu_\phi(x_t, t), \sigma_t^2 I) \), i.e., the network predicts a mean denoising direction from \( x_t \), which is then used for sampling \( x_{t-1} \) from the posterior distribution derived from the forward process. \( \mu_\phi \) is further parameterized by a network \( \epsilon_\phi \) that aims to predict the noise embedded in \( x_t \):

\[
\mu_\phi(x_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}} \epsilon_\phi(x_t, t) \right), \qquad \alpha_t = 1 - \beta_t, \quad \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s
\]

Now, when optimizing the usual variational bound on the negative log-likelihood, it simplifies to

\[
L(\phi) = E_{t \sim \mathcal{U}(1,T),\ \epsilon \sim N(0,I),\ x_0 \sim q(x_0)} \left[ w(t) \left\| \epsilon_\phi\!\left(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\ t\right) - \epsilon \right\|_2^2 \right]
\]

which is used as the training loss. We approximate this loss by sampling \( t, \epsilon, x_0 \) from their corresponding distributions and calculating the loss term.

Figure 2: The figure illustrates an overview of MAS, showing a multi-view denoising step from the 2D sample collection $x_{t}^{1:V}$ to $x_{t-1}^{1:V}$, corresponding to camera views $v_{1:V}$. Denoising is performed by a pre-trained 2D motion diffusion model $G_{2D}$. At each such iteration, our Consistency Block triangulates the motion predictions $\hat{x}_{0}^{1:V}$ into a single 3D sequence and projects it back onto each view ($\tilde{x}_{0}^{1:V}$). To encourage consistency in the model’s predictions, we sample 3D noise, $\epsilon_{3D}$, and project it to the 2D noise set $\epsilon^{1:V}$ for each view. Finally, we sample $x_{t-1}^{1:V}$ from $q(x_{t-1}^{1:V}|x_{t}^{1:V}, \tilde{x}_{0}^{1:V})$.

**Data Representation.** A motion sequence is defined on top of a character skeleton with $J$ joints. A single character pose is achieved by placing each joint in space, and varying the character pose over time constructs a motion sequence. Hence, we denote a 3D motion sequence $X \in \mathbb{R}^{L \times J \times 3}$ with $L$ frames by the $xyz$ location of each joint at each frame. Note that this representation does not explicitly enforce fixed bone lengths; instead, our algorithm will do so implicitly. Additionally, this formulation allows us to model additional moving objects in the scene (e.g., a ball or a box) using auxiliary joints to describe their location.
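To make the sampling procedure above concrete, here is a minimal sketch of the ancestral sampling loop (an illustration rather than the paper's implementation), with `eps_model` standing in for the trained noise predictor \( \epsilon_\phi \) and the common choice \( \sigma_t^2 = \beta_t \) assumed.

```python
# A minimal sketch of DDPM ancestral sampling using the quantities above.
import torch

def ancestral_sample(eps_model, shape, betas):
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)        # \bar{\alpha}_t
    x = torch.randn(shape)                          # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps = eps_model(x, t)
        # posterior mean mu_phi(x_t, t) from the equation above
        mu = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) \
             / torch.sqrt(alphas[t])
        if t > 0:
            x = mu + torch.sqrt(betas[t]) * torch.randn_like(x)  # sample x_{t-1}
        else:
            x = mu                                  # final clean sample x_0
    return x
```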
Figure 2: The figure illustrates an overview of MAS, showing a multi-view denoising step from the 2D sample collection $x_{t}^{1:V}$ to $\hat{x}_{t-1}^{1:V}$, corresponding to camera views $v_{1:V}$. Denoising is performed by a pre-trained 2D motion diffusion model $G_{2D}$. At each such iteration, our Consistency Block triangulates the motion predictions $\hat{x}_{0}^{1:V}$ into a single 3D sequence and projects it back onto each view ($\tilde{x}_{0}^{1:V}$). To encourage consistency in the model's predictions, we sample 3D noise, $\epsilon_{3D}$, and project it to the 2D noise set $\epsilon^{1:V}$ for each view. Finally, we sample $x_{t-1}^{1:V}$ from $q(x_{t-1}^{1:V}|x_{t}^{1:V}, \tilde{x}_{0}^{1:V})$.

**Data Representation.** A motion sequence is defined on top of a character skeleton with $J$ joints. A single character pose is achieved by placing each joint in space. Varying the character pose over time constructs a motion sequence. Hence, we denote a 3D motion sequence, $X \in \mathbb{R}^{L \times J \times 3}$, with $L$ frames, by the $xyz$ location of each joint at each frame. Note that this representation does not explicitly enforce fixed bone lengths; instead, our algorithm will do so implicitly. Additionally, this formulation allows us to model additional moving objects in the scene (e.g., a ball or a box) using auxiliary joints to describe their location.

Considering the pinhole camera model[^1], we define a camera-view $v = (R_v, \tau_v, f_v)$ by its rotation matrix $R_v \in \mathbb{R}^{3 \times 3}$, translation vector $\tau_v \in \mathbb{R}^3$, and focal length $f_v$ given in meters. Then, a 2D motion from camera-view $v$, $x^v = p(X, v) \in \mathbb{R}^{L \times J \times 2}$, is defined as the perspective projection $p$ of $X$ to $v$, such that each joint at each frame is represented by its $uv$ coordinates in camera space. In order to drive 3D rigged characters (as presented in the figures of this paper), we retrieve 3D joint angles from the predicted 3D joint positions of $X$ using SMPLify (Bogo et al., 2016) optimization for human characters, and Inverse-Kinematics optimization for the non-humanoid characters (i.e., horses).

## 4 Method

Our goal is to generate 3D motion sequences using a diffusion model trained on monocular 2D motions. This would enable 3D motion generation in the absence of high-quality 3D data, by leveraging the ubiquity of monocular videos describing those scenes. To this end, we introduce Multi-view Ancestral Sampling (MAS), a method that simultaneously generates multiple views of a 3D motion via ancestral sampling. MAS maintains consistency between the 2D motions in all views to construct a coherent 3D motion at each denoising step. A single MAS step is illustrated in Figure 2.

First, we extract 2D pose estimations from in-the-wild videos and use them to train a 2D diffusion model $\hat{x}_0 = G_{2D}(x_t)$, based on the MDM (Tevet et al., 2023) architecture, that predicts the clean 2D motion $\hat{x}_0$ at each denoising step (see Figure 3). MAS is then able to sample 3D motions from $G_{2D}$ as follows. MAS simultaneously applies a DDPM ancestral sampling loop on multiple 2D motions, which represent views of the same 3D motion from $V$ different camera angles. At each denoising step $t$, it gets a set of noisy views $x_{t}^{1:V}$ as input and predicts clean samples $\hat{x}_{0}^{1:V} = G_{2D}(x_{t}^{1:V})$. Then, the Consistency Block is applied in two steps: (1) Triangulation: find a 3D motion $X$ that follows all views as closely as possible; (2) Reprojection: project the resulting 3D motion to each view, getting $\tilde{x}_{0}^{1:V}$, which we can think of as a multiview-consistent version of the predicted denoising direction. Finally, we can sample the next step $x_{t-1}^{1:V}$ from the backward posterior $x_{t-1}^{1:V} \sim q(x_{t-1}^{1:V}|x_{t}^{1:V}, \tilde{x}_{0}^{1:V})$. Repeating this denoising process down to $t=0$ yields the same motion sequence generated from $V$ views. Those sequences are triangulated to construct a 3D motion, which is the output of MAS. This sampling process is detailed in Algorithm 1 (Appendix). The remainder of this section describes the monocular data collection and diffusion pre-training (4.1), followed by a full description of the MAS building blocks (4.2).

[^1]: https://en.wikipedia.org/wiki/3D_projection#Perspective_projection

Figure 3: **Preparations.** The motion diffusion model used for MAS is trained on 2D motion estimations of videos scraped from the web.
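For reference, a minimal sketch of the perspective projection $p(X, v)$ defined above; the axis and sign conventions are assumptions of the sketch and may differ from the actual pipeline.

```python
import numpy as np

def project(X, R_v, tau_v, f_v):
    """Perspective projection p(X, v) of a 3D motion X (L x J x 3) to a
    camera view v = (R_v, tau_v, f_v); returns uv coordinates (L x J x 2)."""
    Xc = X @ R_v.T + tau_v        # world -> camera coordinates
    z = Xc[..., 2:3]              # depth along the optical axis
    return f_v * Xc[..., :2] / z  # pinhole model: (u, v) = f * (x, y) / z
```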
### 4.1 Preparations

**Data Collection.** We collect videos from various sources — NBA videos, horse jumping contests, and rhythmic gymnastics contests. We then apply multi-person and object tracking using off-the-shelf models to extract bounding boxes. Subsequently, we use other off-the-shelf models for 2D pose estimation to get 2D motions. Implementation details are in Section 6. We build on the fact that 2D pose estimation is a well-explored topic with large-scale datasets that can be easily scaled, as manual annotations are much easier to obtain compared to 3D annotation, which usually requires a motion capture studio.

**2D Diffusion Model Training.** We follow Tevet et al. (2023) and train the unconditioned version of the Motion Diffusion Model (MDM) with a transformer encoder backbone for each of the three datasets separately. We boost the sampling speed of MDM by a factor of 10 by learning 100 diffusion steps instead of the original 1000.

### 4.2 Multi-view Ancestral Sampling

We would like to construct a way to sample a 3D motion using a model that generates 2D samples. First, we observe that a 3D motion is uniquely defined by 2D views of it from multiple angles. Second, we assume that our collected dataset includes a variety of motions, from multiple viewpoints, and deduce that our 2D diffusion model can generalize to generating multiple views of the same 3D motion, for a wide variety of 3D motions. Thus, we aim to generate multiple 2D motions that represent multiple views of the same 3D motion, from a set of predefined viewpoints.

**Ancestral Sampling for 3D Generation.** As described in Section 3, diffusion models are designed to be sampled using gradual denoising, following the ancestral sampling scheme. Hence, we design MAS to generate multiple 2D motions via ancestral sampling, while guiding all views to be multiview-consistent. Formally, we take a set of $V$ views, distributed evenly around the motion, with an elevation angle distribution heuristically picked for each dataset. Then, for each view $v$ we initialize $x_T^v$, and for $t = T, \dots, 1$ transform $x_t^v$ to $x_{t-1}^v$, until getting a valid 2D motion $x_0^v$ for each view. We choose to generate all views concurrently, keeping all views at the same diffusion timestep throughout the process. In every denoising step we receive $x_t^{1:V} = (x_t^1, \dots, x_t^V)$. We derive the clean motion predictions by applying the diffusion model in each view, $\hat{x}_0^v := \frac{x_t^v - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\phi(x_t^v, t)}{\sqrt{\bar{\alpha}_t}}$, getting $\hat{x}_0^{1:V} = (\hat{x}_0^1, \dots, \hat{x}_0^V)$. We apply our multi-view Consistency Block to find a multiview-consistent denoising direction $\tilde{x}_0^{1:V}$ that approximates the predicted motions $\hat{x}_0^{1:V}$. We then use the resulting motions $\tilde{x}_0^{1:V}$ as the denoising direction by sampling $x_{t-1}^v$ from $q(x_{t-1}^v|x_t^v, x_0 = \tilde{x}_0^v)$, and outputting $x_{t-1}^{1:V} = (x_{t-1}^1, \dots, x_{t-1}^V)$. MAS can be extended to support dynamic camera views during sampling instead of fixed ones, as detailed in Appendix D. Since this is not empirically helpful for our application, we leave it out of our scope.

**Multi-view Consistency Block.** As mentioned, the purpose of this block is to transform multiview motions $\hat{x}_0^{1:V}$ into multiview-consistent motions $\tilde{x}_0^{1:V}$ that are as similar as possible. We achieve this by finding a 3D motion $X$ that, when projected to all views, resembles the multiview motions $\hat{x}_0^{1:V}$, via Triangulation. We then return the projections of $X$ to each view, $\tilde{x}_0^{1:V} = (p(X, 1), \dots, p(X, V))$, as the multiview-consistent motions.
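Putting the pieces together, a condensed sketch of the MAS loop follows. Here `denoise`, `triangulate`, and `project` are assumed helpers standing in for $G_{2D}$ (in its $\hat{x}_0$-prediction form), the triangulation described next, and $p(X, v)$; they are not the paper's API. For brevity, the sketch samples i.i.d. Gaussian noise per view, whereas MAS projects a shared 3D noise (see the 3D Noise paragraph below).

```python
import numpy as np

def mas_sample(denoise, triangulate, project, views, betas, shape, rng):
    """Sketch of Multi-view Ancestral Sampling over V camera views."""
    alphas = 1.0 - betas
    a_bar = np.cumprod(alphas)
    V, T = len(views), len(betas)
    x = [rng.standard_normal(shape) for _ in range(V)]      # x_T per view
    for t in range(T - 1, -1, -1):
        x0_hat = [denoise(x[v], t) for v in range(V)]       # clean predictions
        X = triangulate(x0_hat, views)                      # consistency: step (1)
        x0_tilde = [project(X, v) for v in views]           # consistency: step (2)
        if t == 0:
            x = x0_tilde                                    # final denoised views
            break
        # sample x_{t-1}^v from q(x_{t-1} | x_t, x_0 = x0_tilde^v)
        c0 = np.sqrt(a_bar[t - 1]) * betas[t] / (1.0 - a_bar[t])
        ct = np.sqrt(alphas[t]) * (1.0 - a_bar[t - 1]) / (1.0 - a_bar[t])
        var = betas[t] * (1.0 - a_bar[t - 1]) / (1.0 - a_bar[t])
        x = [c0 * x0_tilde[v] + ct * x[v]
             + np.sqrt(var) * rng.standard_normal(shape)    # i.i.d. here; MAS
             for v in range(V)]                             # uses projected 3D noise
    return triangulate(x, views)                            # output 3D motion
```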
**Triangulation.** We calculate $X$ via optimization to minimize the difference between the projections of $X$ to all views and the multiview motion predictions $\hat{x}_0^{1:V}$:

$$X = \arg\min_{X'} \|p(X', 1\!:\!V) - \hat{x}_0^{1:V}\|_2^2 = \arg\min_{X'} \sum_{v=1}^{V} \|p(X', v) - \hat{x}_0^v\|_2^2$$

For faster convergence, we initialize $X$ with the optimized result from the previous sampling step. This way, the process can also be thought of as progressively refining $X$, but we wish to emphasize that the focus remains the ancestral sampling in the 2D views.

**3D Noise.** In addition to the Consistency Block, which enforces 3D consistency on the multi-view predictions $\hat{x}_0^{1:V}$, we would like the noise to be consistent between views as well. To this end, we design a new noise sampling mechanism that (1) keeps a Gaussian distribution for each view, and (2) maintains 3D consistency between the views. We start by sampling 3D noise $\epsilon_{3D} \sim \mathcal{N}(0, I)$, with $\epsilon_{3D} \in \mathbb{R}^{L \times J \times 3}$. Projecting this noise to each view using the perspective projection $p$ would break the Gaussian assumption; hence, we use orthographic projection instead, which preserves a Gaussian distribution in each view (see Appendix C.1) and incurs an $O(1/d)$ error compared to the perspective projection, where $d$ is the distance between the camera and the object (see Appendix C.2). We then use the resulting distribution for sampling the initial noise $x_T$ and when sampling $x_{t-1} \sim q(x_{t-1}|x_t, x_0 = p(X))$.
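Why orthographic projection preserves Gaussianity is easy to see in code: rotating an isotropic Gaussian leaves it isotropic, so dropping the depth coordinate yields unit-variance Gaussian noise in each view, while different views remain correlated through the shared 3D noise. A minimal sketch:

```python
import numpy as np

def project_noise(eps_3d, R_v):
    """Orthographic projection of shared 3D noise onto a camera view.

    eps_3d: (L, J, 3) i.i.d. standard normal; R_v: (3, 3) view rotation.
    The result is standard normal in each view, and noises of different
    views are 3D-consistent because they stem from the same eps_3d."""
    return (eps_3d @ R_v.T)[..., :2]  # rotate, then drop the depth axis
```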
## 5 Method Discussion

In this section, we discuss the properties of MAS, contextualizing it within the landscape of recent advancements in the text-to-3D domain.

**Ancestral sampling.** MAS is built upon the ancestral sampling process. This means that the model is used in its intended way over in-domain samples. This is in contrast to SDS-based methods (Poole et al., 2022), which employ a sampling scheme that uses the forward diffusion to noise images rendered from a 3D representation that is only partially optimized. This can lead to out-of-distribution samples, particularly in the early timesteps, where the noise is rather large. This phenomenon is also addressed by Wang et al. (2022) and Huang et al. (2023), who suggest heuristics to alleviate the out-of-distribution problem but do not fundamentally solve it. Furthermore, most SDS-based methods sample $x_t$ independently in each iteration, which may lead to high variance in the correction signal. Contrarily, ancestral sampling has, by definition, a large correlation between $x_t$ and $x_{t-1}$, which leads to a more stable process and expressive results. Since MAS is sampling-based, it naturally models the diversity of the distribution, while optimization-based methods often experience mode-collapse or divergence, as addressed by Poole et al. (2022). It is worth noting that SDS is a clever design for cases where ancestral sampling cannot be used.

**Multi-view stability.** MAS simultaneously samples multiple views that share the same timestep at each denoising step. SDS-based methods typically use a single view in each optimization step, forcing them to make concessions such as small and partial corrections to prevent ruining the 3D object from other views. This also leads to a state where it is unknown which timestep to choose, since only partial denoising steps were applied (also shown by Huang et al. (2023)). MAS avoids such problems since the multiple view denoising steps are applied simultaneously. It allows us to apply full optimization during the triangulation process. Hence, at the end of the $i$'th iteration, the motion follows the model's distribution at timestep $T - i$. This alleviates the need for timestep scheduling while dramatically decreasing inference time.

**3D noise consistency.** MAS's use of a noise distribution projected from 3D noise onto each view boosts multiview-consistency in the model's predictions and greatly benefits the quality and diversity of the generated motions. SDS-based methods sample uncorrelated noise in the different views, and this inconsistency among the views leads to slower convergence or even divergence.

Figure 4: Generated motions by MAS compared to ElePose (2021), MotionBERT (2023), and the motion adaptation of DreamFusion (2022). While MotionBERT and DreamFusion generated dull motions with limited movement, ElePose generations are jittery and often include invalid poses (red rectangles).

## 6 Experiments

In order to demonstrate the merits of our method, we apply MAS on three different 2D motion datasets. Each dataset addresses a different motion aspect that is under-represented in existing 3D motion datasets (see Table 1). (1) The NBA players' performance dataset demonstrates motion generation in domains of human motion that are poorly covered by existing datasets. (2) The horse show-jumping contests dataset shows generation in a domain that has almost no 3D data at all and has a completely different topology. Finally, (3) the rhythmic-ball gymnastics dataset shows that our method opens the possibility to model interactions with dynamic objects. All the datasets, along with our code, will be made available upon publication.

### 6.1 Data Collection

**NBA videos.** We collected about $10K$ videos from the NBA online API[^2]. We then applied multi-person tracking using ByteTrack (Zhang et al., 2022b) and AlphaPose (Fang et al., 2022) for 2D human pose estimation (based on the tracking results). We finally processed and filtered the data by centering the people; filtering short motions, crowd motions, and motions of low quality; splitting discontinuous motions (caused typically by tracking errors); mirroring; and applying smoothing interpolations.

**Horse jumping contests.** We collected 3 horse jumping contest videos (around 2–3 hours each) from YouTube.com. We then apply YOLOv7 (Wang et al., 2023a) for horse detection and tracking, and ViTPose (Xu et al., 2022), trained on APT-36K (Yang et al., 2022), for horse pose estimation. The post-processing pipeline was similar to the one described above.

**Rhythmic-ball gymnastics.** We used the Rhythmic Gymnastics Dataset (Zeng et al., 2020) to get 250 videos, about 1.5 minutes long each, of high-standard international competitions of rhythmic gymnastics performance with a ball. We followed the pipeline described for NBA videos to obtain athletes' motions and also use YOLOv7 for detecting bounding boxes of sports balls. We take the closest ball to the athlete at each frame and add the center of the bounding box as an additional "joint" in the motion representation.

[^2]: https://github.com/swar/nba_api

Table 1: **2D Datasets.** Details of the three in-the-wild datasets, collected to demonstrate MAS generation capabilities in under-explored motion domains.
| Dataset Name | Subject | #Samples | Length Range | Average Length | FPS |
|----------------------------|---------------|----------|--------------|----------------|-----|
| NBA videos | Humans | ~ 60K | 4s–16s | ~ 6s | 30 |
| Horse jumping contests | Horses | ~ 2K | 3s–40s | ~ 7s | 20 |
| Rhythmic ball gymnastics | Humans + Ball | ~ 500 | 10s–120s | ~ 81s | 20 |

All motions are represented as $x \in \mathbb{R}^{L \times J \times 2}$, as detailed in Section 3, where NBA uses the AlphaPose body model with 16 joints, horses are represented according to APT-36K with 17 joints, and the gymnastics dataset is represented with the COCO body model (Lin et al., 2015) with 17 joints plus an additional joint for the ball. All 2D pose predictions are accompanied by confidence predictions per joint per frame, which are used in the diffusion training process.

### 6.2 Implementation Details

Our 2D diffusion model is based on MDM (Tevet et al., 2023), composed of a transformer encoder with 6 attention layers of 4 heads and a latent dimension of 512. This backbone supports motions with variable length in both training and sampling, which makes MAS support them as well. To mitigate some of the pose prediction errors, we mask low-confidence joint predictions from the diffusion loss during training. We used an ADAM optimizer with a learning rate of 0.1 for training and cosine noise scheduling. We learn 100 diffusion steps instead of 1000, which accelerates MAS 10-fold without compromising the quality of the results. We observe that MAS performs similarly for any $V \geq 3$ and report 5 camera views in all our experiments. The camera views $v_{1:V}$ are fixed throughout sampling, surrounding the character and sharing the same elevation angle. The azimuth angles are evenly spread over $[0, 2\pi]$. Generating a sample with MAS takes less than 10 seconds on a single NVIDIA GeForce RTX 2080 Ti GPU.

### 6.3 Evaluation

Here we explore the quality of the 3D motions generated by our method. Usually, we would compare the generated motions to motions sampled from the dataset. In our case, we do not have 3D data, so we must introduce a new way to evaluate the generated 3D motions. To that end, we rely on the assumption that a 3D motion is of high quality if and only if all 2D views of it are of high quality. Consequently, we suggest taking random projections of the 3D motions and comparing them with our 2D data. More specifically, we generate a set of 3D motions, with lengths sampled from the data distribution, then sample a single angle for every motion, with yaw drawn from $\mathcal{U}[0, 2\pi]$ and a constant pitch angle fitted for each dataset. We project the 3D motion to the sampled angle using perspective projection, from a constant distance (also fitted for each dataset), and get a set of 2D motions. Finally, we follow common evaluation metrics (Raab et al., 2022; Tevet et al., 2023) used for assessing unconditional generative models: *FID* measures the Fréchet inception distance between the tested distribution and the test data distribution; *Diversity* measures the variance of the generated motions in latent space; *Precision* measures the portion of the generated data that is covered by the test data; *Recall* measures the portion of the test data distribution that is covered by the measured distribution. These metrics are calculated in latent space.
Hence, we follow the setting suggested by Guo et al. (2020) and train a VAE-based evaluator for each dataset. This setting has become the de-facto standard for motion evaluation (Petrovich et al., 2022; Tevet et al., 2023; Guo et al., 2022; Liang et al., 2023). We evaluate over 1K random samples and repeat the process 10 times to calculate the standard deviation.

Table 2 shows that MAS results are comparable to the diffusion model in use, which marks a performance upper bound in 2D. In addition, MAS suffers from a mode-collapse without the 3D noise feature. Lastly, we evaluate a motion adaptation of DreamFusion (Poole et al., 2022) and show that it performs poorly. This adaptation optimizes the 3D motion $X$ directly through 10K iterations for each sample, each updating $X$ using SDS gradients of the same diffusion model used for MAS.

Table 2: We compare MAS to a motion adaptation of DreamFusion (Poole et al., 2022) according to the quality of 2D projections of the generated motions. The performance of the pre-trained 2D diffusion model used by both MAS and DreamFusion serves as an upper bound, yet MAS achieves comparable results. Our ablations show that MAS performs best with as few as 5 views (ours), and 3D noise is crucial for convergence. Gray marks mode-collapse (Recall < 10%); bold marks best results otherwise. '→' means results are better when the value is closer to the real distribution.

| | FID↓ | Diversity → | Precision ↑ | Recall ↑ |
|----------------------|------|-------------|-------------|----------|
| Ground-Truth | 1.05 ± .02 | 8.97 ± .05 | 0.73 ± .01 | 0.73 ± .01 |
| 2D Diffusion Model | 5.23 ± .13 | 9.70 ± .08 | 0.44 ± .02 | 0.78 ± .01 |
| MAS (Ours) | 5.38 ± .06 | 9.47 ± .06 | 0.50 ± .01 | 0.60 ± .01 |
| with 2 views (120°) | 6.87 ± .14 | 9.99 ± .06 | 0.35 ± .01 | 0.80 ± .01 |
| Stochastic | 6.34 ± .12 | 9.67 ± .07 | 0.41 ± .01 | 0.71 ± 6e-03 |
| DreamFusion (2022) | 66.38 ± 1.24 | 8.25 ± .16 | 0.33 ± .08 | 0.17 ± .13 |

Table 3 shows that the motions generated by MAS outperform state-of-the-art pose lifting methods, both the supervised MotionBERT (Zhu et al., 2023) and the unsupervised ElePose (Wandt et al., 2021). Although these methods are not generative per se, we consider motions lifted from 2D motions sampled from the training data as generated samples. Since we sample a uniform angle around the generated motions, we often sample angles that are similar to the lifted angle, which was given as an input to the lifters but not to MAS. When evaluating from the side view (angle $\sim \mathcal{U}\left(\frac{\pi}{4}, \frac{3\pi}{4}\right)$), we see a clear degradation of the lifting methods, while MAS keeps the same performance.

Table 3: Comparison with pose lifting. MAS outperforms state-of-the-art unsupervised lifting methods. Furthermore, lifting methods fall short when evaluated from the side view $\mathcal{U}\left(\frac{\pi}{4}, \frac{3\pi}{4}\right)$, while the performance of MAS remains stable. '→' means results are better when the value is closer to the real distribution (8.97 for Diversity); bold marks best results.

| | FID↓ (all angles) | Diversity → (all angles) | FID↓ (side view) | Diversity → (side view) |
|----------------------|------|-------------|-------------|----------|
| ElePose (2021) | 10.76 ± .46 | 9.72 ± .05 | 18.28 ± .33 | 8.98 ± .06 |
| MotionBERT (2023) | 30.22 ± .26 | 9.57 ± .09 | 36.89 ± .40 | 8.67 ± .08 |
| MAS (Ours) | 5.38 ± .06 | 9.47 ± .06 | 5.43 ± .11 | 9.49 ± .04 |

Figure 4 demonstrates the quality of MAS compared to DreamFusion, MotionBERT, and ElePose.
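For reference, the FID above is the Fréchet distance between Gaussian fits of the evaluator's latent codes; a minimal sketch, assuming the latents have already been extracted by the VAE-based evaluator:

```python
import numpy as np
from scipy import linalg

def fid(z_gen, z_test):
    """Frechet distance between Gaussian fits of two sets of latent codes
    (rows are samples), as used in the FID metric above."""
    mu1, mu2 = z_gen.mean(0), z_test.mean(0)
    s1 = np.cov(z_gen, rowvar=False)
    s2 = np.cov(z_test, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary residue
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))
```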
## 7 Conclusions, Limitations and Future Work

In this paper, we introduced MAS, a generative method designed for 3D motion synthesis. We showed that high-quality 3D motions can be sampled from a diffusion model trained on 2D data only. The essence of our method lies in its utilization of a multiview diffusion ancestral sampling process, where each denoising step contributes to forging a coherent 3D motion sequence. Remarkably, our experiments show that MAS excels with in-the-wild videos, enabling it to produce motions that are otherwise exceedingly challenging to obtain through conventional means. Our method could also be employed in additional domains such as multi-person interactions and other animal motions, and with recent developments in tracking of "any" object (Wang et al., 2023b), we wish to push the boundaries of data even further. Our method inherits the limitations of the 2D data it is using and thus cannot naively predict global position or apply textual control. We leave extending the data acquisition pipeline to support such features to future work. Finally, we hope the insights introduced in this paper can also be utilized in the text-to-3D field and other applications.

## References

Chaitanya Ahuja and Louis-Philippe Morency. Language2pose: Natural language grounded pose forecasting. In *2019 International Conference on 3D Vision (3DV)*, pp. 719–728. IEEE, 2019.

Samaneh Azadi, Akbar Shah, Thomas Hayes, Devi Parikh, and Sonal Gupta. Make-an-animation: Large-scale text-conditional 3d human motion generation, 2023.

Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J. Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In *Computer Vision – ECCV 2016*, Lecture Notes in Computer Science. Springer International Publishing, October 2016.

Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation, 2023.

Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. MoFusion: A framework for denoising-diffusion-based motion synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9760–9770, 2023.

Dylan Drover, Rohith MV, Ching-Hang Chen, Amit Agrawal, Ambrish Tyagi, and Cong Phuoc Huynh. Can 3d pose be learned from 2d projections alone?, 2018.

Hao-Shu Fang, Jiefeng Li, Hongyang Tang, Chao Xu, Haoyi Zhu, Yuliang Xiu, Yong-Lu Li, and Cewu Lu. Alphapose: Whole-body regional multi-person pose estimation and tracking in real-time. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.

Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and Li Cheng. Action2motion: Conditioned generation of 3d human motions. In *Proceedings of the 28th ACM International Conference on Multimedia*, pp. 2021–2029, 2020.

Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 5152–5161, June 2022.

Ayaan Haque, Matthew Tancik, Alexei Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023.

Amir Hertz, Kfir Aberman, and Daniel Cohen-Or.
Delta denoising score. *arXiv preprint arXiv:2304.07090*, 2023. Daniel Holden, Jun Saito, and Taku Komura. A deep learning framework for character motion synthesis and editing. *ACM Transactions on Graphics (TOG)*, 35(4):1–11, 2016. Yukun Huang, Jianan Wang, Yukai Shi, Xianbiao Qi, Zheng-Jun Zha, and Lei Zhang. Dreamtime: An improved optimization strategy for text-to-3d content creation, 2023. Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a foreign language. *arXiv preprint arXiv:2306.14795*, 2023. Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, and Siyu Tang. Gmd: Controllable human motion synthesis via guided diffusion models. *arXiv preprint arXiv:2305.12577*, 2023. Jihoon Kim, Jiseob Kim, and Sungjoon Choi. Flame: Free-form language-based motion synthesis & editing. *arXiv preprint arXiv:2209.00349*, 2022. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Muhammed Kocabas, Nikos Athanasiou, and Michael J Black. Vibe: Video inference for human body pose and shape estimation. In *CVPR*, 2020.
TYXtXLYHpR
In the related works, you distinguish your method from shapelet-based methods, stating that these are primarily used for data mining and classification tasks. However, if these shapelet methods are unsupervised (e.g., Karlsson, Isak, Panagiotis Papapetrou, and Henrik Boström.
Towards Transparent Time Series Forecasting

Krzysztof Kacprzyk
University of Cambridge
kk751@cam.ac.uk

Tennison Liu
University of Cambridge
t1522@cam.ac.uk

Mihaela van der Schaar
University of Cambridge
The Alan Turing Institute
mv472@cam.ac.uk

Abstract

Transparent machine learning (ML) models are essential for ensuring interpretability and trustworthiness in decision-making systems, particularly in high-stakes domains such as healthcare, finance, and criminal justice. While transparent machine learning models have been proposed for classification and regression, time series forecasting presents some unique challenges for ensuring transparency. In particular, currently used bottom-up approaches that focus on the values of the time series at specific time points (usually regularly spaced) do not provide a holistic understanding of the entire time series. This limits the applicability of ML in many critical areas. To open up these domains for ML, we propose a top-down framework of bi-level transparency, which involves understanding the higher-level trends and the lower-level properties of the predicted time series. Applying this framework, we develop TIMEVIEW, a transparent ML model for time series forecasting based on static features, complemented with an interactive visualization tool. Through a series of experiments, we demonstrate the efficacy and interpretability of our approach, paving the way for more transparent and reliable applications of ML in various domains.

1 Introduction

**Why do we need transparent models?** eXplainable Artificial Intelligence (XAI) methods are broadly divided into transparent models and post-hoc explanation techniques (Barredo Arrieta et al., 2020). Transparent (also called glass-box) models are crucial in many settings involving high-stakes decisions, such as healthcare or credit scoring (Rudin, 2019). As these models are interpretable by design, they are understandable by themselves. Such understanding, apart from being mandated by certain regulatory bodies (Goodman & Flaxman, 2017), is needed, for instance, to improve robustness, detect biases, evoke trust, or certify model compliance with legislation (Barredo Arrieta et al., 2020). Many transparent machine learning models that issue static predictions have been proposed, including Linear/Logistic Regression, Generalized Additive Models (Hastie & Tibshirani, 1986), and Decision Trees (Hu et al., 2019). By definition, these methods are not directly applicable to time series forecasting, where we want to predict a whole trajectory rather than a single label. Although they can be adapted, they exhibit poor performance (see Section 2).

**Challenges of time series forecasting: limitations of a bottom-up approach.** In contrast to a single-label output (as in classification or regression), understanding the change in a trajectory is more complicated, as a trajectory is an entire function (described by many values). As time series forecasting remains an under-studied field of XAI (Barredo Arrieta et al., 2020), current techniques usually resort to a bottom-up approach. This means they focus on the values of the trajectory at individual time points (usually regularly spaced). For instance, the importance scores in saliency methods are calculated for different prediction horizons (Leung et al., 2023). This may be sufficient when we are interested in a particular time point (e.g., 5-year survival rate), but we often want to comprehend the whole trajectory at once.
For instance, when administering a drug, we may be less interested in the concentration of the drug every few hours but rather in understanding the entire curve, including properties like the peak plasma concentration and the time when it is achieved (Han et al., 2018).

**Bi-level transparency for time series forecasting: a top-down approach.** We propose a top-down approach to trajectory comprehension and, consequently, two levels of transparency for time series forecasting: (level 1) understanding how the trend (the general shape of the trajectory) changes as we modify the input, and (level 2) understanding how the properties of the current trend (e.g., minimum value) change as we modify the input. To illustrate this, let us consider the following example:

*Forecasting a tumor volume trajectory from a patient's baseline covariates and drug dose.*

Understanding a model like this may include answering questions such as (Liao et al., 2020):

- **What if:** "What would happen to the model's prediction if a specific covariate changes?"
- **How to be that:** "How should the covariates be modified to get a different prediction?"
- **How to still be this:** "What range of drug dose values keeps the prediction the same?"

We characterize the difference between predicted trajectories on two levels, which enables us to answer concrete questions about each level, such as:

| Level 1 (trends) | Level 2 (properties) |
|------------------|----------------------|
| "Would the predicted tumor volume keep decreasing if we adjusted the treatment?" | "What feature changes would lower the minimum tumor volume?" |

We explain trends and properties in detail in Section 2 and then formalize them in Section 4. It is worth noting that answering such questions with the current bottom-up approaches may often be futile, since the notion of a "different prediction" (based on individual time points or norms such as $L^p$) may be non-interpretable (see Section 2) or simplistic. We demonstrate how our framework can enable answering such questions in Figure 1 and, more thoroughly, in Appendix E.1.

**Time series forecasting based on static features.** Our work focuses on understanding the change in the predicted trajectory. However, time series models take many types of inputs, including time series and exogenous features. To provide a clear exposition of our framework, we focus on one specific input type: static features. Time series forecasting based on static features has applications in many domains ranging from finance through medicine and pharmacology to physics (see Section 3). In Section 5, we introduce TIMEVIEW, a transparent ML model for time series forecasting based on static features. As with many transparent models (e.g., GAMs, decision trees), model visualization is crucial for its interpretability. In Section 7, we demonstrate a visualization tool based on interactive plots that allows for the examination of both the higher- and lower-level features of the predicted trajectories and how they change based on the input (Figure 1).

Figure 1: Snapshot of our dynamical visualization of TIMEVIEW. Our model adheres to bi-level transparency, a top-down approach that focuses on the trend of the trajectory (Level 1) and the properties of the particular trend, e.g., transition points (Level 2). The left panel shows how the trajectory trend changes when one of the features is perturbed. For instance, the tumor will increase if we lower the dose and decay if we increase the dose.
The right panel investigates the position ($y$-coordinate) of the second transition point (local minimum) as the initial tumor volume changes.

**Contributions.** We introduce bi-level transparency, a novel top-down framework for time series forecasting that allows for a holistic understanding of the entire trajectory through trends and properties (Section 2). We formalize it by introducing the notions of motifs and compositions (Section 4). Based on the new formalism, we develop TIMEVIEW, a Time series Interpretable Model with Effective VIsualization (Section 5). We demonstrate how its visualization aids model comprehension while exhibiting only a minor performance drop compared to black-box models (Section 7).

2 TRANSPARENCY FOR TIME SERIES FORECASTING

2.1 SETUP

**Time series forecasting.** A general ML model is a function $f$ mapping samples from the input space $\mathcal{X}$ to the output space $\mathcal{Y}$. We say that $f$ issues static predictions when $\mathcal{Y}$ is a subset of $\mathbb{R}$ (a regression model) or a finite set of labels $\{1, \ldots, K\}$ (a classification model).¹ In contrast, we define $f$ to be a time series forecasting model (or just forecasting model) when $\mathcal{Y}$ is a space of trajectories. A trajectory is a function $y : \mathcal{T} \rightarrow \mathbb{R}$, where $\mathcal{T} \subset \mathbb{R}$ is a set of time points. Although the conceptual framework in Section 2 is agnostic to the nature of $\mathcal{T}$, our work focuses on settings where $\mathcal{T}$ is an interval $[0, T] \subset \mathbb{R}$, where $T \in \mathbb{R}$ is a time horizon, and the underlying trajectory is continuous. Note, in practice, we only observe discrete samples of $y$, which may be irregular and noisy.

**Transparency.** We assume the following general definition of transparency: an ML model is transparent if we can understand the impact of the inputs on the prediction. In particular, we should understand how changing one (or a few) of the features would impact the output. This is crucial for counterfactual reasoning (e.g., "What would the model predict if the patient was female?") or detecting anomalies (e.g., "Why does the model assign a significantly higher risk score if the patient's age is changed from 64 to 65?").

**Comprehending the change in the output.** As discussed in Section 1, understanding the change in the output is crucial for answering important questions about the model (e.g., "What would happen to the model's prediction if a specific feature changes?"). In a static setting, when the prediction is a single label, understanding the change in the output is relatively straightforward, as only a few things can happen: in regression, the target variable can decrease, increase, or remain constant; in classification, the target variable can change from one option to another among a finite number of classes. In time series forecasting, when the prediction is a trajectory, understanding the change in the output is challenging because there are numerous ways a function can change (discussed further in Appendix E). Moreover, these changes need to be interpretable for humans.

2.2 BOTTOM-UP: CURRENT XAI APPROACH TO TRAJECTORY COMPREHENSION

As a trajectory is a function $y : \mathcal{T} \rightarrow \mathbb{R}$, current XAI techniques for time series forecasting focus on understanding the impact of the inputs on $y(t)$ for a particular $t \in \mathcal{T}$.
For instance, the values in saliency methods are calculated independently for different prediction horizons (Leung et al., 2023) and might later be aggregated. Inspired by the motivations of rough path theory (Lyons, 2014; Fermanian et al., 2023), we call the current comprehension strategy bottom-up.² It means the trajectory is understood by looking at its values at individual time points, and, subsequently, more information is gained by looking at more points. However, we argue that this strategy for trajectory understanding is not optimal in many scenarios. In particular, it is not a natural way for people to understand trajectories, and it is challenging to convey time-varying trends and global features by simply looking at individual time points in isolation.

**Inconsistent with the natural way people understand trajectories.** The standard representation of a trajectory $y : [0, T] \rightarrow \mathbb{R}$ is a line graph. Research on graph comprehension (Zacks & Tversky, 1999; Xi, 2010) suggests that people understand line graphs in terms of trends rather than individual values, for instance, "when $x$ increases, $y$ also increases". They also tend to focus on the minimum and maximum values and trend reversals (Carswell et al., 1993). Thus, understanding a (continuous) trajectory by individual values is unnatural for humans. See Appendix E for more details.

**Increased cognitive load.** As mentioned above, the bottom-up approach requires an increasing number of values to understand the trajectory better. This becomes problematic when we want to understand any change in the trajectory, as it places the cognitive burden on the human interpreter to piece together changes in trends from changes at individual time steps.

**Unsuitable for global features.** A bottom-up approach may be sufficient when we are interested in a particular time point (e.g., 5-year survival rate) or when there are only a few time points of interest. However, we often want to comprehend the whole trajectory at once. For instance, when administering a drug, we are interested in understanding the entire drug concentration curve, including properties like peak plasma concentration and the time when it is achieved (Han et al., 2018).

¹ Another example of a model issuing static predictions is a multi-output regression where $\mathcal{Y} \subset \mathbb{R}^K$.
² Note, "bottom-up" refers to how the trajectory is comprehended, not how the prediction is generated.

2.3 TOP-DOWN: NEW APPROACH TO TRAJECTORY COMPREHENSION

To address the shortcomings of the bottom-up approach, we propose a top-down approach to understanding a trajectory. It is motivated by the fact that humans tend to describe trajectories by referring to the trends and properties they exhibit rather than just the values they attain (Carswell et al., 1993). Consider the natural language descriptions of trajectories presented in Table 1. In all these examples, we have a trend, the general shape of the function (e.g., "increasing", "stays below"), and properties, the details of the particular trend (e.g., "for the last 10 years", "below 100mg/dl").

Table 1: When we describe a trajectory, we often refer to the trends and properties it exhibits. We use this observation in our definition of bi-level transparency, which ultimately informs the design of our method. The table shows three examples of descriptions of trajectories with their corresponding trends and properties.
| Description | Trend | Properties |
|--------------------------------------------------|-----------|---------------------|
| "The GDP has been steadily increasing for the last 10 years" | increasing | for the last 10 years |
| "The blood sugar level in non-diabetic patients should stay below 100mg/dl while fasting" | stay below | below 100mg/dl |
| "Tumor volume decreases, obtains a minimum after 6 months, and then increases" | decreases then increases | minimum at 6 months |

The top-down approach addresses the shortcomings of the bottom-up approach, i.e., it is more consistent with the natural way people understand trajectories, and it conveys time-varying trends and global features in an interpretable way. Moreover, it is also compatible with the scientific approach to analyzing various trajectories. For instance, while studying dynamical systems, we are often interested in understanding bifurcations—a qualitative change in the behavior of a system as a parameter changes (Blanchard et al., 2012). This corresponds to understanding the inputs where the trend of the trajectory changes.

**Bi-level transparency: understanding how the trends and properties change.** By using the top-down approach above, we do not need all the trajectory values to understand it. Instead, we can focus on the trends and properties of the trajectory and only access the exact values when necessary. This is how we can achieve an interpretable model: instead of tracking the individual values of the trajectory (as in bottom-up approaches), we track how the trends and properties of the trajectory change as we vary the input. Thus, we refine the definition of transparency and adapt it specifically for time series forecasting. We call it bi-level transparency. A time series forecasting model is (bi-level) transparent if the following holds.

• (Level 1) We can understand the impact of the input on the trends of the trajectory.
• (Level 2) We can understand the impact of the input on the properties of a given trend.

3 TIME SERIES FORECASTING FROM STATIC FEATURES

Bi-level transparency unweaves the "output" part of transparency into two separate objects: trends and properties. Thus, it provides a concrete answer to the question: what does it mean to understand the change of the output? However, time series models may take many types of inputs, including static features, information about the future (e.g., upcoming holiday dates), and other exogenous time series (Lim et al., 2021). To provide a clear exposition of our framework, develop formalism, and demonstrate a practical implementation, we focus on settings where the inputs are static features.

**Real-life settings.** Time series forecasting from static features is frequently encountered in medicine and pharmacology, where we are interested in predicting disease progression or drug concentration based on the patient's covariates. Static features can also include the dosage/strength of the treatment or even the time and type of intervention. If necessary, one or a few initial observations at pre-specified times can also be considered static features. More examples of such scenarios can be found in finance (predicting stock values from a company's static data), time-to-event problems (predicting the survival or the hazard function), or modeling any 1D dynamical system from its initial conditions. In some scientific or engineering domains, time can even be replaced by other continuous variables.
For instance, when modeling stress–strain or current–voltage curves.

**Problem formulation.** Let \( T \in \mathbb{R} \) be the *time horizon*. Each sample consists of static features \( x^{(d)} \in \mathbb{R}^M \), where \( M \in \mathbb{N} \) is the number of features, and a discretely sampled trajectory \( y^{(d)} \in \mathbb{R}^{N_d} \) at times \( t^{(d)} \in \mathbb{R}^{N_d} \), where \( N_d \in \mathbb{N} \) is the number of measurements for the \( d \)-th sample. We assume that \( y^{(d)} \) consists of noisy samples of some true underlying continuous trajectory \( y_*^{(d)} : [0, T] \to \mathbb{R} \). Given a dataset \( \{x^{(d)}, y^{(d)}, t^{(d)}\}_{d=1}^D \), the task is to find a model that matches static covariates \( x \in \mathbb{R}^M \) to a trajectory \( \hat{y} : [0, T] \to \mathbb{R} \) such that \( \hat{y} \) minimizes the expected value of \( \frac{1}{T} \int_0^T (\hat{y}(t) - y_*(t))^2 dt \) for all test samples. We denote the class of predicted trajectories as \( \hat{\mathcal{Y}} \).

4 MOTIFS AND COMPOSITIONS

In this section, we propose a way to formalize the notion of a trend by defining the composition of a trajectory. The composition is a sequence of motifs, where each motif describes the current "shape" of the trajectory on a specific interval. For instance, we can choose a set of three motifs: "increasing" (\( i \)), "decreasing" (\( d \)), and "constant" (\( c \)). Then, we can divide a trajectory into a few segments, so that each can be classified as being in one of these motifs throughout the interval. Thus, we can assign a sequence of motifs to this trajectory: a composition. For instance, a ReLU function on \([-1, 1]\) has a composition ("constant", "increasing"), or just \((c, i)\), whereas sin on the interval \([0, 2\pi]\) has a composition \((i, d, i)\). The motifs can be chosen based on the application and the required granularity. The points between motifs are called transition points, and their coordinates can be mapped to the properties of a trend (see Figure 2).

| Concepts | Mathematical objects |
|---------------------------|----------------------|
| Trend | Composition |
| Part of a trend | Motif |
| Properties of a trend | Transition points |

Figure 2: Correspondence between concepts in Section 2 and mathematical objects in Section 4.

**Notation.** We say \( I \) is an interval (of \( \mathbb{R} \)) if it is an open, closed, or half-closed interval; an interval has to contain more than one point. We denote the set of all intervals on \( \mathbb{R} \) as \( \mathcal{I} \). Let \( c \in \mathbb{R} \); we denote the shifted interval as \( I + c = \{x + c \mid x \in I\} \). Let \( I \subset \mathbb{R} \) be any interval; we call any function \( f : I \to \mathbb{R} \) an *interval function*, and we denote its domain as \( \text{dom}(f) \). We denote the set of all interval functions as \( \mathcal{F} \).

**Definition 1 (Motif).** A motif \( s \) is a binary relation between the set of interval functions \( \mathcal{F} \) and the set of intervals \( \mathcal{I} \) (i.e., \( s \subset \mathcal{F} \times \mathcal{I} = \{(f, I) \mid f \in \mathcal{F}, I \in \mathcal{I}\} \)). We denote \( (f, I) \in s \) as \( f|I \sim s \) and read it as "\( f \) on \( I \) has a motif \( s \)".
Each motif \( s \) needs to be:

- **well-defined**, i.e., for any \( f \in \mathcal{F} \) and any \( I \in \mathcal{I} \),
\[ f|I \sim s \implies I \subseteq \text{dom}(f) \quad (1) \]
- **translation-invariant**, i.e., for any \( I \in \mathcal{I} \) and any \( f \in \mathcal{F} \),
\[ f|I \sim s \iff f(\cdot - c)\,|\,(I + c) \sim s \quad \forall c \in \mathbb{R} \quad (2) \]

Now, we would like to assign a minimal sequence of motifs to a given trajectory: a composition.

**Definition 2 (Composition).** Let \( f : I \to \mathbb{R} \) be an interval function and \( S \) be a set of motifs. A *motif sequence* of \( f \) in \( S \) is a finite sequence of motifs \( (s_1, \ldots, s_d) \) for which there exists an interval partition³ \( \{I_1, \ldots, I_d\} \) of \( I \) such that \( f|I_j \sim s_j \ \forall j \in [d] \). A *composition* of \( f \) in \( S \) is the shortest motif sequence of \( f \) in \( S \). The points between the intervals are called the transition points. The set of all compositions for a given set of motifs \( S \) is denoted by \( C_S \). A set of motifs \( S \) is called compatible with a subset \( \mathcal{F}' \subset \mathcal{F} \) if for every \( f \in \mathcal{F}' \) there exists a unique composition, denoted \( C_S[f] \).

Compatibility between the set of motifs and the set of trajectories is crucial for an ML model that employs bi-level transparency, as we want to unambiguously assign a composition to every possible prediction and, in turn, to every feature vector. We call this assignment a composition map.

**Definition 3 (Composition map).** Let a set of motifs \( S \) be compatible with some subset \( \mathcal{F}' \subset \mathcal{F} \). Let \( g : \mathbb{R}^M \rightarrow \mathcal{F}' \) be an ML model for time series forecasting, where \( M \in \mathbb{N} \) is the number of static features. The composition map is the function \( M_S : \mathbb{R}^M \rightarrow C_S \) defined by \( M_S(x) = C_S[g(x)] \).

To understand a model \( g \), it is crucial to understand its composition map with respect to some meaningful set of motifs. We discuss examples of motifs and when they can be helpful in Appendix A. We define a particular set of motifs that we call dynamical motifs (see Table 2). They encode information about the trajectory's first and second derivatives. Moreover, the transition points between these motifs correspond to local minima, maxima, and inflection points. These are the exact properties used in the standard mathematical exercise of function sketching, whose goal is precisely to understand the function. These motifs form the backbone of TIMEVIEW, introduced in Section 5. Dynamical motifs are depicted in Table 2 and defined formally in Example 5 in Appendix A.

³ For a definition of interval partition, see Appendix A.

Table 2: We introduce a set of dynamical motifs that are often important to understand trajectories.

| Symbol | Name | Definition |
|--------|-----------------------------|-------------------------------------------------|
| \( s_{+0} \) | Straight line with positive slope | \( f(x) = ax + b,\ a > 0,\ b \in \mathbb{R} \) |
| \( s_{-0} \) | Straight line with negative slope | \( f(x) = ax + b,\ a < 0,\ b \in \mathbb{R} \) |
| \( s_{00} \) | Straight line with zero slope | \( f(x) = b,\ b \in \mathbb{R} \) |
| \( s_{++} \) | Increasing and strictly convex | \( f'(x) > 0,\ f''(x) > 0 \) |
| \( s_{+-} \) | Increasing and strictly concave | \( f'(x) > 0,\ f''(x) < 0 \) |
| \( s_{-+} \) | Decreasing and strictly convex | \( f'(x) < 0,\ f''(x) > 0 \) |
| \( s_{--} \) | Decreasing and strictly concave | \( f'(x) < 0,\ f''(x) < 0 \) |
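As a toy illustration of Definition 2, the sketch below extracts the composition of a sampled trajectory for the simple motif set $S = \{$increasing $i$, decreasing $d$, constant $c\}$ from the beginning of this section; the finite-difference labeling and the `atol` threshold are simplifications of the sketch, not part of the formalism.

```python
import numpy as np

def composition(ys, atol=1e-9):
    """Composition (Definition 2) of a sampled trajectory in the motif set
    S = {increasing 'i', decreasing 'd', constant 'c'}: label consecutive
    finite differences and merge equal neighbours into the shortest sequence."""
    diffs = np.diff(ys)
    labels = np.where(np.abs(diffs) <= atol, "c",
                      np.where(diffs > 0, "i", "d"))
    comp = [str(labels[0])]
    for s in labels[1:]:
        if str(s) != comp[-1]:
            comp.append(str(s))
    return comp

ts = np.linspace(0, 2 * np.pi, 200)
print(composition(np.sin(ts)))                 # sin on [0, 2*pi]: ['i', 'd', 'i']
print(composition(np.maximum(ts - np.pi, 0)))  # ReLU-like curve: ['c', 'i']
```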
5 TIMEVIEW

Based on our formalism in Section 4, we introduce the Time series Interpretable Model with Effective VIsualization (TIMEVIEW). This framework consists of two parts: a predictive model based on B-Spline basis functions, and an algorithm for calculating the composition map. This map facilitates the model visualization that complements our framework and is demonstrated in Section 7.

**Realizing bi-level transparency through dynamical motifs.** To realize bi-level transparency through dynamical motifs, we need to (1) understand the relation between the feature vectors \( x \) and the compositions of the predicted trajectories, and (2) understand the relation between the feature vectors \( x \) and the transition points of a given composition. To fulfill these conditions, we need to find a space of trajectories \( \hat{\mathcal{Y}} \) satisfying the following criteria.

1. The set of dynamical motifs \( S \) is compatible with the class of predicted trajectories \( \hat{\mathcal{Y}} \).
2. For every \( \hat{y} \in \hat{\mathcal{Y}} \) we can calculate its composition \( C_S[\hat{y}] \).

Cubic splines are a class of functions that satisfies both criteria mentioned above. We demonstrate that dynamical motifs are compatible with cubic splines in Appendix B. Moreover, it is easy to calculate the dynamical composition of a cubic spline, as it is a piece-wise function consisting of cubic polynomials connected at knots. We describe the exact procedure below and in Appendix C.

**B-Spline basis functions.** We describe cubic splines as linear combinations of B-Spline (De Boor, 1978) basis functions. Let \( \phi_b : [0, T] \rightarrow \mathbb{R} \) be the \( b \)-th B-Spline basis function of degree 3. Given a set of \( B \) basis functions \( \{\phi_b\}_{b \in [B]} \), we can express a cubic spline as a linear combination \( \hat{y}(t) = \sum_{b=1}^{B} c_b \phi_b(t) \), where \( c_b \in \mathbb{R}\ \forall b \in [B] \). Thus, each spline is described by a latent vector \( c \in \mathbb{R}^B \).

**Architecture.** To match a feature vector \( x \in \mathbb{R}^M \) to a vector \( c \in \mathbb{R}^B \) describing a time series, we use an encoder \( h : \mathbb{R}^M \rightarrow \mathbb{R}^B \). Ultimately, we define our model \( g : \mathbb{R}^M \rightarrow \hat{\mathcal{Y}} \) as

\[
g(x)(t) = \hat{y}_x(t) = \sum_{b=1}^{B} h(x)_b\, \phi_b(t) \quad (3)
\]
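A minimal sketch of this forward pass is shown below; `encoder` stands in for a trained MLP $h$ (an assumption of the sketch), and `BSpline.design_matrix` (SciPy ≥ 1.8) evaluates the basis matrix $\Phi_{jb} = \phi_b(t_j)$, which in practice can be precomputed once per dataset.

```python
import numpy as np
from scipy.interpolate import BSpline

def predict_trajectory(encoder, x, knots, ts):
    """Sketch of Eq. (3): y_hat(t) = sum_b h(x)_b * phi_b(t).

    knots is the full clamped knot vector of a degree-3 B-spline basis;
    ts are query times inside [knots[3], knots[-4]]."""
    c = encoder(x)                                       # latent vector in R^B
    Phi = BSpline.design_matrix(ts, knots, 3).toarray()  # Phi[j, b] = phi_b(ts[j])
    assert Phi.shape[1] == len(c)
    return Phi @ c                                       # y_hat evaluated at ts

# A clamped knot vector with B - 2 distinct knots yields B cubic basis
# functions (cf. footnote 4 below), e.g., for B = 8 on [0, T]:
T, B = 1.0, 8
knots = np.concatenate([[0.0] * 3, np.linspace(0.0, T, B - 2), [T] * 3])
```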
**Implementation.** We implement the encoder $h$ as a fully-connected neural network. We choose a set of knots⁴ for the B-Spline basis functions based on the training dataset using a heuristic algorithm described in Appendix C (note, the number of knots controls the explainability–performance trade-off). The values of $\phi_b$ at times $t^{(d)}$ ($\forall b \in [B]\ \forall d \in [D]$) can be efficiently precomputed before the training using the scipy library's BSpline class. We want to minimize the MSE loss between the predicted values of the trajectory $\hat{y}$ at points $t^{(d)}$ and the ground truth $y^{(d)}$. We also add a regularization loss $L_{1,2}$, so that the B-Spline coefficients (and thus the compositions) do not change too abruptly. The final objective is:

$$L = \frac{1}{D} \sum_{d=1}^{D} \left( \frac{1}{N_d} \sum_{j=1}^{N_d} \left( y^{(d)}_j - \sum_{b=1}^{B} h(x^{(d)})_b\, \phi_b(t^{(d)}_j) \right)^2 \right) + \alpha L_{1,2}(g)$$

We minimize it using gradient descent. The block diagram describing the training procedure can be seen in Figure 3. Implementation details, including the pseudocode, can be found in Appendix C. As with many transparent models (e.g., GAMs, Decision Trees), model visualization is crucial for interpretability. After TIMEVIEW is trained, we compute the composition map (see Definition 3) and demonstrate how we can visualize it (or a part of it) in Section 7. To compute the composition map, we need to perform composition extraction from a predicted trajectory, i.e., calculate $C_S[\hat{y}]$.

⁴ For B-Splines of degree 3, $B - 2$ knots produce $B$ basis functions.

**Composition extraction.** As described earlier, each trajectory is described by a latent vector $c \in \mathbb{R}^B$ and defined as a linear combination of B-Splines, $\hat{y}(t) = \sum_{b=1}^{B} c_b \phi_b(t)$. Each $\phi_b$ is a piece-wise polynomial defined over the intervals determined by the internal knots $(t_1, t_2, \ldots, t_{B-2})$, chosen by our heuristic algorithm (Appendix C). We can associate a cubic in the monomial basis $(t^3, t^2, t, 1)$ with each of these intervals for each basis function (this can be precomputed). We call these cubics $\psi_{b,k}$, where $k$ ranges from 1 to $B - 3$ (the number of intervals). Given a vector $c$, we can now calculate the cubic in the monomial basis for each interval; the cubic on the $k$th interval is just $\sum_{b=1}^{B} c_b \psi_{b,k}$. As it is a cubic polynomial, we can readily calculate its first and second derivatives and then assign a composition to the $k$th interval. We repeat this process for every other interval, connect all the compositions, and merge neighboring motifs if they are the same. Ultimately, we get a global composition for the whole $\hat{y}$. See Appendix C for the pseudocode and the block diagram description.
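A minimal sketch of the per-interval step of this procedure, assuming the monomial coefficients of $\sum_b c_b \psi_{b,k}$ on an interval $[u, v]$ have already been assembled; it splits the interval at the real roots of $f'$ and $f''$ (the candidate transition points) and labels each piece with a dynamical motif from Table 2. Merging across intervals follows the same equal-neighbour rule.

```python
import numpy as np

def interval_motifs(coeffs, u, v, eps=1e-12):
    """Dynamical motifs of one cubic f(t) = a3 t^3 + a2 t^2 + a1 t + a0 on [u, v].

    Splits [u, v] at the real roots of f' and f'' inside the interval and
    labels each piece by the signs of f' and f'' at its midpoint."""
    a3, a2, a1, _ = coeffs
    d1 = np.array([3 * a3, 2 * a2, a1])   # coefficients of f'
    d2 = np.array([6 * a3, 2 * a2])       # coefficients of f''
    cuts = [r.real for p in (d1, d2) for r in np.roots(p)
            if abs(r.imag) < eps and u < r.real < v]
    pts = sorted({u, v, *cuts})
    labels, key = [], {1: "+", -1: "-", 0: "0"}
    for lo, hi in zip(pts[:-1], pts[1:]):
        m = 0.5 * (lo + hi)
        s1 = int(np.sign(np.polyval(d1, m))) if abs(np.polyval(d1, m)) > eps else 0
        s2 = int(np.sign(np.polyval(d2, m))) if abs(np.polyval(d2, m)) > eps else 0
        labels.append("s" + key[s1] + key[s2])  # e.g., 's+-' or 's+0'
    comp = [labels[0]]                          # merge equal neighbours
    for s in labels[1:]:
        if s != comp[-1]:
            comp.append(s)
    return comp

print(interval_motifs((1, 0, 0, 0), -1, 1))  # t^3 on [-1, 1]: ['s+-', 's++']
```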
6 RELATED WORKS

We explain how our work intersects with related areas of ML. Refer to Appendix E for more details.

**Transparent models for static predictions.** Standard transparent methods for static predictions include linear/logistic regression, scoring systems (Ustun & Rudin, 2016), decision trees/rule lists, and generalized additive models (GAMs) (Hastie & Tibshirani, 1986; Lou et al., 2012). Such methods can often be used for time series forecasting by passing the time \( t \) as an additional feature. They often satisfy bi-level transparency but have poor performance. In particular, all trajectories predicted by linear regression and GAMs are parallel; thus, they cannot model different trends (Section 7). Decision Trees capture non-additive interactions, enabling flexible forecasting models. However, they require many splits to approximate the ground truth, leading to poor performance or incomprehensibility (Section 7).

**Closed-form expressions.** Symbolic Regression (Schmidt & Lipson, 2009; La Cava et al., 2021) aims to fit closed-form expressions to data, i.e., mathematical formulas composed of a finite number of variables, binary operators (+, −, ×, ÷), well-known functions (e.g., sin, exp, log), and constants, for instance, \( \sin(x^2) = e^{2.1y} \). Differential equations represent another category of mathematical expressions that draws significant interest in the scientific community. Numerous algorithms have been proposed for discovering Ordinary Differential Equations (ODEs) (Brunton et al., 2016; Qian et al., 2022) and Partial Differential Equations (Rudy et al., 2017; Long et al., 2019). Mathematical expressions may not always satisfy bi-level transparency. In fact, the reparametrization of equations to reflect key theoretical quantities is an active area of research (Preacher & Hancock, 2015).

**Feature importance for time series.** While our research focuses on transparent models, many saliency (or feature importance) methods have been developed to highlight which features the model is sensitive to (Ribeiro et al., 2016; Lundberg & Lee, 2017). Although these methods have been extended to time series inputs (Crabbé & Schaar, 2021; Leung et al., 2023), limited work has been done to extend them specifically to time series outputs. Current XAI techniques either assume the output is a scalar (Siddiqui et al., 2019) (e.g., time series classification (Hao & Cao, 2020)), treat the trajectory as a single object (Gao et al., 2023)—thus not showing how a feature changes the trajectory—or show a saliency map at each predicted point separately (Pan et al., 2020), thus allowing only for a bottom-up understanding of the predicted trajectory. The last category also includes many recently proposed methods with attention mechanisms (Alaa & van der Schaar, 2019; Lim et al., 2021). We contrast our framework with feature importance techniques in Appendix E.

**Shapelets and motifs.** As our method discusses the shape of the trajectory, it may seem related to shapelet-based methods (Ye & Keogh, 2009). However, these methods are usually used for data mining and classification tasks. They aim to find subsequences of a time series that represent the most important patterns of each class and can thus be used to distinguish between classes (Chen et al., 2022). Similarly, motif discovery identifies short repeating patterns in a time series (Torkamani & Lohweg, 2017), usually for insights into the problem or for classification tasks.

7 TIMEVIEW IN ACTION

**Answering questions.** Our interactive visualization tool for TIMEVIEW allows for answering questions such as "What if", "How to be that", and "How to still be this" from the XAI Question Bank (Liao et al., 2020), discussed in Section 1. As explained earlier, answering such questions with the current bottom-up approaches may often be futile, since the notion of a "different prediction" may be non-interpretable or simplistic. In contrast, TIMEVIEW allows the analysis of a trajectory change at two levels, i.e., the composition of the trajectory or the coordinates of the transition points.

**Visualizing perturbations.** We can visualize the effect of perturbing one or two features at a time using colorful bands (as in Figure 1) and colorful 2D contour plots (Figure 4).
In the left panel of Figure 7, we have a movable slider for each feature that changes the predicted trajectory in the center. The colors on the band below the slider signify the composition of the trajectory if the feature is in the corresponding range. This allows us to understand how the trend of the trajectory changes as we change each feature (level 1 of bi-level transparency). To understand the properties of this trend (level 2), we choose any of the transition points in the central plot, and we can analyze its position with respect to the chosen feature on the plot on the right. It currently shows how the \( y \)-coordinate of the second transition point (local minimum) increases as the initial tumor volume increases. Figure 4 shows how we can visualize the effect of changing two features at a time. Each color in the contour plot corresponds to a different composition, so it is clear how changing the features influences the composition of the trajectory. Please see Appendix E for a more in-depth discussion.
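The band under each slider can be computed directly from the composition map. Below is a minimal sketch of that computation, assuming a `composition_fn` that maps static features to a composition (e.g., the encoder followed by the extraction helper sketched above); the names are illustrative, not the released tool’s API:

```python
import numpy as np

def composition_band(composition_fn, x, feature_idx, grid):
    """Sketch of the colored band under a feature slider: sweep one feature
    over `grid` (all other features fixed), record the trajectory composition
    at each value, and merge contiguous values that share a composition."""
    segments = []  # list of [value_lo, value_hi, composition]
    for v in grid:
        x_pert = np.array(x, dtype=float)
        x_pert[feature_idx] = v
        comp = tuple(composition_fn(x_pert))
        if segments and segments[-1][2] == comp:
            segments[-1][1] = v            # extend the current band
        else:
            segments.append([v, v, comp])  # start a new band
    return segments
```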
Comparison with other methods. In the absence of time series methods fulfilling bi-level transparency, we adapt static transparent methods, such as linear regression, decision trees, and GAMs (Lou et al., 2012; Nori et al., 2019), to time series forecasting by treating time as a feature, denoted as Linear-T, DecisionTree-T, and GAM-T. We also compare with methods discovering closed-form expressions for trajectories, such as PySR for symbolic regression (Cranmer, 2020) and SINDy for ODE discovery (Brunton et al., 2016). We also include black-box models RNN and ∆t-RNN, and state-of-the-art tree-based models adapted to time series forecasting (XGB-T (Chen & Guestrin, 2016), LGBM-T (Ke et al., 2017), CatBoost-T (Prokhorenkova et al., 2018)). Experiments were conducted on four real-world datasets (Airfoil (Brooks et al., 1989), flchain (Dispenzieri et al., 2012), Stress-Strain (Aakash et al., 2019), and Tacrolimus (Woillard et al., 2011)) and three synthetic ones (Sine, Beta, and Tumor, the latter based on a model from (Wilkerson et al., 2017)). The synthetic datasets are constructed to contain trajectories exhibiting many different trends. Figures 1 and 4 show TIMEVIEW fitted to the Sine, Beta, and Tumor datasets. As shown in Table 3, TIMEVIEW outperforms the transparent methods and closed-form expressions on most datasets and achieves comparable performance to the black boxes. Details about the experiments can be found in Appendix D.

Table 3: Comparison between TIMEVIEW, other transparent methods, closed-form expressions, and black boxes. The numbers denote mean squared errors; the lower, the better. Boldface red denotes the best black-box results, boldface orange denotes the best closed-form expression results, and boldface green denotes the best transparent results. † denotes failure to converge. Note that RNN only works for regular time series.

| Method | Airfoil | flchain | Stress-Strain | Tacrolimus | Tumor | Sine | Beta |
|-----------------|---------|---------|---------------|------------|-------|------|------|
| **Black boxes** | | | | | | | |
| RNN | - | 0.26 ± 0.02 | - | - | 0.02 ± 0.01 | 0.02 ± 0.01 | 0.02 ± 0.02 |
| ∆t-RNN | 0.17 ± 0.02 | 0.27 ± 0.01 | 0.14 ± 0.01 | 0.41 ± 0.05 | 0.02 ± 0.01 | 0.02 ± 0.01 | 0.02 ± 0.02 |
| XGB-T | 0.09 ± 0.00 | 0.20 ± 0.00 | 0.02 ± 0.00 | 0.29 ± 0.01 | 0.01 ± 0.00 | 0.53 ± 0.00 | 0.07 ± 0.00 |
| LGBM-T | 0.11 ± 0.00 | 0.20 ± 0.00 | 0.08 ± 0.00 | 0.29 ± 0.00 | 0.01 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| CatBoost-T | 0.09 ± 0.01 | 0.21 ± 0.00 | 0.05 ± 0.00 | 0.37 ± 0.04 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| **Closed-form expressions** | | | | | | | |
| PySR | 0.26 ± 0.03 | 0.22 ± 0.01 | 0.48 ± 0.11 | 0.36 ± 0.04 | 0.10 ± 0.02 | 0.05 ± 0.04 | 0.24 ± 0.03 |
| SINDy | 0.61 ± 0.00 | 0.39 ± 0.00 | † | 1.11 ± 0.00 | 0.07 ± 0.00 | 1.30 ± 0.00 | 2.74 ± 0.00 |
| **Transparent models** | | | | | | | |
| Linear-T | 0.37 ± 0.00 | 0.34 ± 0.00 | 0.66 ± 0.00 | 0.57 ± 0.00 | 0.68 ± 0.00 | 0.99 ± 0.00 | 1.03 ± 0.00 |
| DecisionTree-T | 0.36 ± 0.00 | 0.21 ± 0.00 | 0.15 ± 0.00 | 0.31 ± 0.00 | 0.22 ± 0.00 | 0.10 ± 0.00 | 0.34 ± 0.00 |
| GAM-T | 0.28 ± 0.01 | 0.32 ± 0.00 | 0.09 ± 0.00 | 0.38 ± 0.00 | 0.54 ± 0.00 | 0.54 ± 0.00 | 0.69 ± 0.00 |
| TIMEVIEW | 0.13 ± 0.01 | 0.24 ± 0.02 | 0.04 ± 0.00 | 0.31 ± 0.03 | 0.00 ± 0.00 | 0.02 ± 0.00 | 0.04 ± 0.00 |

8 DISCUSSION AND CONCLUSION

Applications. We believe bi-level transparency and our mathematical framework can inspire future XAI methods. For instance, note that models adhering to our framework (like TIMEVIEW) provide an additional output next to the standard forecasted trajectory: the current composition and the coordinates of the transition points. Traditional XAI techniques for regression and classification can be applied to these additional outputs, instead of individual trajectory points, to gain more meaningful explanations. Thus, techniques such as feature importance methods (Lundberg & Lee, 2017), local surrogates (Ribeiro et al., 2016), and counterfactual explanations (Karimi et al., 2020) can now be extended to time series forecasting settings. These, in turn, can open up domains where the applicability of ML has been limited due to transparency concerns, including medicine, finance, and science.

Limitations and open challenges. TIMEVIEW is a particular application of bi-level transparency for time series forecasting from static features. We hope future works will extend it to settings where the input may contain the previous part of the trajectory or other exogenous time series (further discussion in Appendix E).

Ethics statement. In this paper, we present a novel conceptual framework for enhancing transparency in the domain of time series forecasting, accompanied by its practical implementation known as TIMEVIEW. A better understanding of machine learning models serves critical purposes such as model debugging and identifying and mitigating potential harmful biases. However, XAI techniques can also be misused to foster unwarranted trust in models or to merely achieve surface-level compliance with regulatory standards. As highlighted in our paper, domains such as medicine and pharmacology involve high-stakes scenarios. Therefore, prior to deploying our model in such contexts, a rigorous examination is imperative to ensure it does not endorse decisions that could prove detrimental to individuals’ well-being.

Reproducibility statement.
All mathematical definitions are provided in Section 4 and Appendix A. The proofs of theoretical results are shown in Appendix B. The implementation, including block diagrams and pseudocode, is discussed in Section 5 and in Appendix C. The experiment settings are discussed in Section 7 and Appendix D. The code to reproduce the results and for the visualization tool can be found at https://github.com/krzysztof-kacprzyk/TIMEVIEW and at the wider lab repository https://github.com/vanderschaarlab/TIMEVIEW. Acknowledgments. This work was supported by Roche and AstraZeneca. We want to thank Katarzyna Kobalczyk, Fergus Imrie, Andrew Rashbass, and anonymous reviewers for their useful comments and feedback on earlier versions of this work. REFERENCES B. S. Aakash, JohnPatrick Connors, and Michael D. Shields. Stress-strain data for aluminum 6061-T651 from 9 lots at 6 temperatures under uniaxial and plane strain tension. Data in Brief, 25:104085, August 2019. ISSN 2352-3409. doi: 10.1016/j.dib.2019.104085. Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019. Ahmed M Alaa and Mihaela van der Schaar. Attentive State-Space Modeling of Disease Progression. Advances in Neural Information Processing Systems, 32:11338–11348, 2019. Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, and Cynthia Rudin. Learning Certifiably Optimal Rule Lists for Categorical Data. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2018. Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115, June 2020. ISSN 1566-2535. doi: 10.1016/j.inffus.2019.12.012. L. Biggio, T. Bendinelli*, A. Neitz, A. Lucchi, and G. Parascandolo. Neural Symbolic Regression that Scales. In 38th International Conference on Machine Learning, July 2021. Paul Blanchard, Robert L. Devaney, and Glen R. Hall. Differential Equations. Cengage Learning, July 2012. ISBN 978-1-133-38808-1. Thomas F. Brooks, D. Stuart Pope, and Michael A. Marcolini. Airfoil self-noise and prediction, July 1989. Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, April 2016. ISSN 0027-8424, 1091-6490. doi: 10.1073/pnas.1517384113. C. Melody Carswell, Cathy Emery, and Andrea M. Lonon. Stimulus complexity and information integration in the spontaneous interpretations of line graphs. Applied Cognitive Psychology, 7(4):341–357, 1993. ISSN 1099-0720. doi: 10.1002/acp.2350070407.
FvK2noilxT
In Sec 4.1 Training dataset, why did the authors use different standard deviations to noise the MANO parameters for translation, rotation, and pose? Does the way the training sets are noised affect the learning?
GENEOH DIFFUSION: TOWARDS GENERALIZABLE HAND-OBJECT INTERACTION DENOISING VIA DENOISING DIFFUSION

Xueyi Liu$^{1,3}$  Li Yi$^{1,2,3}$

$^1$Tsinghua University  $^2$Shanghai AI Laboratory  $^3$Shanghai Qi Zhi Institute

Project website: meowuu7.github.io/GeneOH-Diffusion

ABSTRACT

In this work, we tackle the challenging problem of denoising hand-object interactions (HOI). Given an erroneous interaction sequence, the objective is to refine the incorrect hand trajectory to remove interaction artifacts for a perceptually realistic sequence. This challenge involves intricate interaction noise, including unnatural hand poses and incorrect hand-object relations, alongside the necessity for robust generalization to new interactions and diverse noise patterns. We tackle those challenges through a novel approach, GeneOH Diffusion, incorporating two key designs: an innovative contact-centric HOI representation named GeneOH and a new domain-generalizable denoising scheme. The contact-centric representation GeneOH informatively parameterizes the HOI process, facilitating enhanced generalization across various HOI scenarios. The new denoising scheme consists of a canonical denoising model trained to project noisy data samples from a whitened noise space to a clean data manifold and a “denoising via diffusion” strategy which can handle input trajectories with various noise patterns by first diffusing them to align with the whitened noise space and cleaning via the canonical denoiser. Extensive experiments on four benchmarks with significant domain variations demonstrate the superior effectiveness of our method. GeneOH Diffusion also shows promise for various downstream applications.

Figure 1: Trained only on limited data, GeneOH Diffusion can clean novel noisy interactions with new objects, hand motions, and unseen noise patterns (Fig. (a)), produces diverse refined trajectories with discrete manipulation modes (Fig. (b)), and is a practical tool for many applications (Fig. (c)).

1 INTRODUCTION

Interacting with objects is an essential part of our daily lives, and accurately tracking hands during these interactions has become crucial for various applications, such as gaming, virtual and augmented reality, robotics, and human-machine interaction. Yet, this task is highly complex and ill-posed due to numerous factors like the intricate dynamics involved and hand-object occlusions. Despite best efforts, existing tracking algorithms often struggle with producing plausible and realistic results. To better cater to the requirements of downstream tasks, noisy tracking results usually need to be refined.

Given a hand-object interaction (HOI) sequence with errors, HOI denoising aims to produce a natural interaction sequence free of artifacts such as penetrations. In this work, we assume the object poses are tracked accurately and focus on refining the hand trajectory, following (Zhou et al., 2022; Grady et al., 2021; Zhou et al., 2021b; Zhang et al., 2021). This setting is important, with many practical demands in applications such as cleaning synthesized motions (Tendulkar et al., 2023; Huang et al., 2023; Ghosh et al., 2023; Wu et al., 2022), refining motion-retargeted trajectories (Hecker et al., 2008; Tak & Ko, 2005; Aberman et al., 2019), and virtual object manipulations (Oh et al., 2019; Kato et al., 2000; Shaer et al., 2010). Early approaches relied on manually designed priors (Dewaele et al., 2004; Hackenberg et al., 2011), which, however, proved inadequate in handling intricate noise.
More recent endeavors have shifted towards learning denoising priors from data (Zhou et al., 2022; 2021b; Grady et al., 2021), yet the existing designs still fall short of providing a satisfactory solution.

Leveraging data priors for HOI denoising is challenged by several difficulties. First, the interaction noise is highly complex, covering unnatural hand poses, erroneous hand-object spatial relations, and inconsistent hand-object temporal relations. Second, hand movements, hand-object relations, and the noise pattern may vary dramatically across different HOI tracks. For instance, the noise pattern exhibited in hand trajectories estimated from videos differs markedly from that resulting from inaccurate capturing or annotations. A denoising model is often confronted with such out-of-domain data and is expected to handle them adeptly. However, such a distribution shift poses a substantial challenge for data-driven models. Lacking an effective solution, prior works often fail to clean such complex interaction noise and can hardly generalize to unseen erroneous interactions.

We propose **GeneOH Diffusion**, a powerful denoising method with strong generalizability and practical applicability (see Figure 1), to tackle the above difficulties. Our method resolves the challenges around two key ideas: 1) designing an effective HOI representation that can both informatively parameterize the interaction and facilitate generalization by encoding and canonicalizing vital HOI information in a coordinate system induced by the interaction region; 2) learning a canonical denoiser that projects noisy data from a whitened noise space to the data manifold for domain-generalizable denoising.

A satisfactory representation that parameterizes the high-dimensional HOI process for denoising should be able to represent the interaction process faithfully, highlight noise, and align different HOI tracks well to enhance generalization capabilities. Therefore, we introduce **GeneOH**, Generalized contact-centric Hand-Object spatial and temporal relations. GeneOH encodes the interaction informatively, encompassing the hand trajectory, hand-object spatial relations, and hand-object temporal relations. Furthermore, it adopts a contact-centric perspective and incorporates an innovative canonicalization strategy. This approach effectively reduces disparities between different sequences, promoting generalization across diverse HOI scenarios.

To enhance the denoising model’s generalization ability to novel noise distributions, our second effort centers on the denoising scheme side. We propose to learn a canonical denoising model that describes the mapping from a whitened noise space to the data manifold. The whitened noise space contains noisy data diffused from clean data in the training dataset via Gaussian noise at various noise scales. With the canonical denoiser, we then leverage a “denoising via diffusion” strategy to handle input trajectories with various noise patterns in a domain-generalizable manner. It first aligns the input to the whitened noise space by diffusing it via Gaussian noise. Subsequently, the diffused sample is cleaned by the canonical denoising model. To strike a balance between the denoising model’s generalization capability and the faithfulness of the denoised trajectory, we introduce a hyper-parameter that decides the scale of noise added during the diffusion process, ensuring the diffused sample remains faithful to the original input.
Furthermore, instead of learning to clean the interaction noise in a single stage, we devise a progressive denoising strategy where the input is sequentially refined via three stages, each of which concentrates on cleaning one specific component of GeneOH.

We conduct extensive experiments on three datasets: GRAB (Taheri et al., 2020), a high-quality MoCap dataset; HOI4D (Liu et al., 2022), a real-world interaction dataset with noise resulting from inaccurate depth sensing and imprecise vision estimations; and ARCTIC (Fan et al., 2023), a dataset featuring dynamic motions and changing contacts, showing the remarkable effectiveness and generalizability of our method. When only trained on GRAB, our denoiser can generalize to HOI4D with novel and difficult noise patterns and to ARCTIC with challenging interactions, surpassing prior arts by a significant margin, as demonstrated by comprehensive quantitative and qualitative comparisons. We will release our code to support future research. In summary, our contributions include:

• An HOI denoising framework with powerful spatial and temporal denoising capability and unprecedented generalizability to novel HOI scenarios;
• An HOI representation named GeneOH that can faithfully capture the HOI process, highlight unnatural artifacts, and align HOI tracks across different objects and interactions;
• An effective and domain-generalizable denoising method that can both generalize across different noise patterns and clean complex noise through a progressive denoising strategy.

2 RELATED WORKS

Hand-object interaction is an important topic for understanding human behaviors. Prior works in this direction mainly focus on data collection (Taheri et al., 2020; Hampali et al., 2020; Guzov et al., 2022; Fan et al., 2023; Kwon et al., 2021), reconstruction (Tiwari et al., 2022; Xie et al., 2022; Qu et al., 2023; Ye et al., 2023), interaction generation (Wu et al., 2022; Tendulkar et al., 2023; Zhang & Tang, 2022; Ghosh et al., 2023; Li et al., 2023), and motion refinement (Zhou et al., 2022; Grady et al., 2021; Zhou et al., 2021b; Núñez, 2022). The HOI denoising task aims to remove unnatural phenomena from HOI sequences with interaction noise. In real application scenarios, a denoising model frequently encounters out-of-domain interactions and is expected to generalize to them. This problem is therefore related to domain generalization, a general machine learning topic (Sicilia et al., 2023; Segu et al., 2023; Wang et al., 2023; Zhang et al., 2023; Jiang et al., 2022; Wang et al., 2022; Blanchard et al., 2011; Muandet et al., 2013; Dou et al., 2019), where a wide range of solutions have been proposed in the literature. Among them, leveraging domain invariance is a promising solution, and our work is related to this kind of approach at a high level. However, identifying the domain-invariant information for the HOI denoising task and encouraging the model to leverage such information for denoising remain tricky. We focus on designing invariant representations and learning a canonical denoiser for domain-generalizable denoising. Our work is also related to intriguing works that leverage data priors to solve inverse problems (Song et al., 2023; Mardani et al., 2023; Tumanyan et al., 2023; Meng et al., 2021; Chung et al., 2022).
For our task, we need to answer some fundamental questions regarding what generalizable denoising priors are, how to learn them from data, and how to leverage the priors to refine noisy inputs from different distributions. We illustrate our solution in the method section.

3 HAND-OBJECT INTERACTION DENOISING VIA DENOISING DIFFUSION

Given an erroneous hand-object interaction sequence with $K$ frames $(\hat{H}, O) = \{(\hat{H}_k, O_k)\}_{k=1}^K$, we assume the object pose trajectory $\{O_k\}_{k=1}^K$ is accurate following (Zhou et al., 2022; 2021b; Grady et al., 2021; Zhang et al., 2021) and aim at cleaning the noisy hand trajectory $\{\hat{H}_k\}_{k=1}^K$. This setting is of considerable importance, given its practical applicability in various domains (Tendulkar et al., 2023; Ghosh et al., 2023; Li et al., 2023; Wu et al., 2022; Hecker et al., 2008; Oh et al., 2019; Shaer et al., 2010). The cleaned hand trajectory should be free of unnatural hand poses, incorrect spatial penetrations, and inconsistent temporal hand-object relations. The hand trajectory should present visually consistent motions and adequate contact with the object to support manipulation. The problem is ill-posed in nature owing to the difficulties posed by complex interaction noise and the substantial domain gap across different interactions resulting from new objects, hand movements, and unseen noise patterns. We resolve the above difficulties by 1) designing a novel HOI representation that parameterizes the HOI process faithfully and can both simplify the distribution of complex HOI and foster the model generalization across different interactions (Section 3.1) and 2) devising an effective denoising scheme that can both clean complex noise through a progressive denoising strategy and generalize across different input noise patterns (Section 3.2).

3.1 GENEOH: GENERALIZED CONTACT-CENTRIC HAND-OBJECT SPATIAL AND TEMPORAL RELATIONS

Designing an effective and generalizable HOI denoising model requires a serious effort in the representation design. It involves striking a balance between expressive modeling of the interaction with objects and supporting the model’s generalization to new objects and interactions. The ideal HOI representation should accurately capture the interaction process, highlight any unusual phenomena like spatial penetrations, and facilitate alignment across diverse interaction sequences. We introduce GeneOH to achieve this. It integrates the hand trajectory, hand-object spatial relations, and hand-object temporal relations to represent the HOI process faithfully. An effective normalization strategy is further introduced to enhance alignment across diverse interactions. The hand trajectory and the object trajectory are compactly represented, in a contact-aware manner, as the trajectory of hand keypoints, denoted as \( J = \{J_k\}_{k=1}^K \), and the interaction region sequence \( P = \{P_k\}_{k=1}^K \). We will then detail the design of GeneOH.

**Generalized contact points.** The interaction region is established based on points sampled from the object surface close to the hand trajectory, referred to as “generalized contact points”. They are \( N_o \) points (denoted as \( P \in \mathbb{R}^{N_o \times 3} \)) sampled from the object surface points whose distance to the hand trajectory does not exceed a threshold value of \( r_c \) (set to 5mm). The sequence of these points across all frames is represented by \( P = \{P_k\}_{k=1}^K \), where \( P_k \) denotes the points at frame \( k \). Each \( P_k \) is associated with a 6D pose, consisting of the object’s orientation (or the orientation of the first part for articulated objects), denoted as \( R_k \), and the center of \( P_k \), denoted as \( t_k \).
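A minimal sketch of this selection step, assuming the object surface has already been densely sampled and everything lives in the same world frame (function and variable names are illustrative, not the released code):

```python
import torch

def generalized_contact_points(obj_pts, hand_joints, n_o=512, r_c=0.005):
    """Select object surface points whose minimum distance to the hand
    keypoint trajectory does not exceed r_c (5mm), then subsample N_o of
    them. obj_pts: (N, 3); hand_joints: (K, 21, 3); distances in meters."""
    dists = torch.cdist(obj_pts, hand_joints.reshape(-1, 3))  # (N, K*21)
    near = obj_pts[dists.min(dim=1).values <= r_c]            # near-hand surface points
    keep = torch.randperm(near.shape[0])[:n_o]                # random subsample
    return near[keep]
```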
**Canonicalized hand trajectories.** We include hand trajectories in our representation to effectively model hand movements. Specifically, we leverage hand keypoints to model the hand, as they offer a compact and expressive representation. We represent the hand trajectory as the sequence of 21 hand keypoints, denoted as \( J = \{J_k \in \mathbb{R}^{N_h \times 3}\}_{k=1}^K \), where \( N_h = 21 \). We further canonicalize the hand trajectory \( J \) using the poses of the generalized contact points to eliminate the influence of object poses, resulting in the canonicalized hand trajectory in GeneOH: \( \tilde{J} = \{\tilde{J}_k = (J_k - t_k)R_k^T\}_{k=1}^K \).

**Generalized contact-centric hand-object spatial relations.** We further introduce a hand-object spatial representation in GeneOH. The representation is based on hand keypoints and generalized contact points to inherit their merits. The spatial relation centered at each generalized contact point \( o_k \in P_k \) comprises the relative offset from \( o_k \) to each hand keypoint \( h_k \in J_k \), i.e., \( \{h_k - o_k | h_k \in J_k\} \), the object point normal \( n_k \), and the object point position \( o_k \). These statistics are subsequently canonicalized using the 6D pose of the generalized contact points to encourage cross-interaction alignment. Formally, the spatial representation centered at \( o_k \) is defined as:
\[ s_k^o = ((o_k - t_k)R_k^T, n_kR_k^T, \{(h_k - o_k)R_k^T | h_k \in J_k\}) \]
The spatial relation \( S \) is composed of \( s_k^o \) at each generalized contact point: \( S = \{s_k^o | o_k \in P_k\}_{k=1}^K \). By encoding object normals and hand-object relative offsets, \( S \) can reveal unnatural hand-object spatial relations such as penetrations.
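The two canonicalizations above are simple rigid transforms; a sketch in PyTorch (row-vector convention, shapes as used in this section, names ours):

```python
import torch

def canonicalize_trajectory(J, R, t):
    """Canonicalized hand trajectory: J~_k = (J_k - t_k) R_k^T.
    J: (K, 21, 3) keypoints; R: (K, 3, 3) orientations; t: (K, 3) centers."""
    return torch.matmul(J - t[:, None, :], R.transpose(-1, -2))

def spatial_relation(o_k, n_k, J_k, R_k, t_k):
    """s_k^o for one generalized contact point o_k with normal n_k:
    canonicalized point position, normal, and offsets to the 21 keypoints."""
    Rt = R_k.transpose(-1, -2)
    return (o_k - t_k) @ Rt, n_k @ Rt, (J_k - o_k) @ Rt
```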
**Generalized contact-centric hand-object temporal relations.** Considering the limitations of the above two representations in revealing temporal errors, such as incorrect manipulations resulting from inconsistent hand-object motions, we further introduce hand-object temporal relations to parameterize the HOI temporal information explicitly. We again take hand keypoints \( J \) to represent the hand shape and generalized contact points \( P \) for the object shape to take advantage of their good ability in supporting generalization. The temporal relations encode the relative velocity between each object point \( o_k \) and each hand keypoint \( h_k \) at frame \( k \) (\( v_{ho}^k = v_h^k - v_o^k \)), the Euclidean distance between each pair of points (\( d_{ho}^k = \|h_k - o_k\|_2 \)), and the object velocity \( v_o^k \), as illustrated in Figure 2. We further introduce two statistics by using the object point normal to decompose \( v_{ho}^k \) into \( v_{ho}^{k,\perp} \), orthogonal to the object tangent plane, and \( v_{ho}^{k,\parallel} \), lying in the object’s tangent plane, and encoding them with hand-object relative distances:
\[ e_{ho}^{k,\perp} = k_a\, e^{-k_s \cdot d_{ho}^k} \|v_{ho}^{k,\perp}\|_2 \quad \text{and} \quad e_{ho}^{k,\parallel} = k_b\, e^{-k_s \cdot d_{ho}^k} \|v_{ho}^{k,\parallel}\|_2 \]
Here, \( k_s, k_a, \) and \( k_b \) are positive hyper-parameters, and the term \( e^{-k_s \cdot d_{ho}^k} \) is negatively related to the distance between the hand and object points. This canonicalization and encoding strategy aims to encourage the model to learn different denoising strategies for the two types of relative velocities, enhance cross-interaction generalization by factoring out object poses, and emphasize the relative movement between very close hand-object point pairs. The temporal representation \( T \) is defined by combining the above statistics of each hand-object point pair across all frames:
\[ T = \{v_o^k, \{d_{ho}^k, v_{ho}^k, e_{ho}^{k,\perp}, e_{ho}^{k,\parallel} \mid h_k \in J_k\} \mid o_k \in P_k\}_{k=1}^{K-1} \]
It reveals temporal errors by encoding object velocities, hand-object distances, and relative velocities.

Figure 3: The progressive HOI denoising gradually cleans the input noisy trajectory through three stages. Each stage concentrates on refining the trajectory by denoising a specific part of GeneOH via a canonical denoiser through the “denoising via diffusion” strategy.

**The GeneOH representation.** The overall representation, GeneOH, comprises the above three components, defined formally as GeneOH = {J, S, T}. Figure 2 illustrates the design. It faithfully captures the interaction process, can reveal noise by encoding corresponding statistics, and benefits generalization by employing carefully designed canonicalization strategies. Looking back at previous works, TOCH (Zhou et al., 2022) does not explicitly parameterize the hand-object temporal relations or hand shapes and does not carefully consider spatial canonicalization to facilitate generalization, which limits its denoising capability and may lead to the loss of high-frequency hand pose details. ManipNet (Zhang et al., 2021) does not encode temporal relations and does not incorporate contact-centric canonicalization, rendering it inadequate for capturing the interaction process and less effective for generalization purposes.

3.2 GeneOH Diffusion: Progressive HOI Denoising via Denoising Diffusion

While GeneOH excels in encoding the interaction process faithfully, highlighting errors to facilitate denoising, and reducing the disparities among various interaction sequences, designing an effective denoising model is still challenged by complex interaction noise, even from a distribution unseen during training. Previous methods typically employ pattern-specific denoising models trained to map noisy data restricted to certain patterns to the clean data manifold (Zhou et al., 2022; 2021b). However, these methods are susceptible to overfitting, resulting in conceptually incorrect results when faced with interactions with unseen noise patterns, as evidenced in our experiments.

Algorithm 1 Denoising via Diffusion
Input: forward diffusion function Diffuse(·, t), denoising model denoise(·, t), input noisy sample x̂, diffusion steps t_diff.
Output: denoised data x.
1: function DENOISE(x̂_{t_diff}, t_diff)
2:   for t from t_diff to 1 do
3:     x̂_{t−1} ∼ denoise(x̂_t, t)
4:   return x̂_0
5: x̂_{t_diff} ← Diffuse(x̂, t_diff)
6: return x ← DENOISE(x̂_{t_diff}, t_diff)

To ease the challenge posed by novel interaction noise, we propose a new denoising paradigm that learns a canonical denoising model and leverages it for domain-generalizable denoising. It describes the mapping from noisy data at various noise scales from a whitened noise space to the data manifold.
The whitened noise space is populated with noisy data samples diffused from the clean data via a diffusion process which gradually adds Gaussian noise to the data according to a variance schedule, in a similar flavor to the forward diffusion process in diffusion-based generative models (Song et al., 2020; Ho et al., 2020; Rombach et al., 2022; Dhariwal & Nichol, 2021). With the canonical denoiser, we then leverage a “denoising via diffusion” strategy to handle input trajectories with various noise patterns in a generalizable manner. It first diffuses the input trajectory x̂ via the diffusion process to another sample x̃ that resides closer to the whitened noise space. Then the model projects the diffused sample x̃ to the data manifold. To balance the generalization ability of the denoising and the fidelity of the denoised result to the input, the diffused sample x̃ needs to remain faithful to the input x̂. We therefore introduce a diffusion timestep t_diff that decides how many diffusion steps are added. The process is visually depicted in the right part of Figure 3. Details are outlined in Algorithm 1. We also implement the denoising model’s function and the training as those of the score functions in diffusion-based generative models. It is a multi-step stochastic denoiser that eliminates the noise of the input gradually, step by step. This way, the denoiser can deal with noise at different scales flexibly and can give multiple solutions to the ill-posed, ambiguous denoising problem.
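To make the strategy concrete, below is a minimal PyTorch sketch of Algorithm 1 under standard DDPM assumptions; the `denoiser(x, t)` interface (sampling x_{t−1} given x_t) and the precomputed `alphas_cumprod` schedule are our assumptions, not the released implementation:

```python
import torch

def denoise_via_diffusion(x_hat, denoiser, alphas_cumprod, t_diff):
    """Denoising via diffusion: diffuse the noisy input t_diff steps toward
    the whitened noise space, then run the learned reverse process back to
    the data manifold. A smaller t_diff keeps the result more faithful to
    the input; a larger one generalizes better to unseen noise patterns."""
    # Forward diffusion marginal: q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)
    abar = alphas_cumprod[t_diff]
    x_t = abar.sqrt() * x_hat + (1.0 - abar).sqrt() * torch.randn_like(x_hat)
    # Reverse process: multi-step stochastic denoising down to step 0
    for t in range(t_diff, 0, -1):
        x_t = denoiser(x_t, t)
    return x_t
```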
Based on the domain-generalizable denoising strategy, designing a single data-driven model to clean heterogeneous interaction noise in one stage is still not feasible. The interaction noise contains various kinds of noise at non-uniform scales stemming from different causes. Thus, the corresponding noise-to-data mapping is very high-dimensional and very challenging to learn from limited data. A promising solution to tackle this complexity is taking a progressive approach and learning multiple specialists, each concentrating on cleaning a specific type of noisy information. However, the multi-stage formulation brings new difficulties. It necessitates careful consideration of the information to be cleaned at each stage to prevent the current stage from compromising the naturalness achieved in previous stages. Fortunately, our design of the GeneOH representation facilitates a solution to this issue. HOI information can be decomposed into three relatively homogeneous parts: $\mathcal{J}$, $\mathcal{S}$, and $\mathcal{T}$. Furthermore, their relations ensure that sequentially refining the hand trajectory by denoising its $\mathcal{J}$, $\mathcal{S}$, and $\mathcal{T}$ representations across three stages avoids the undermining problem. A formal proof of this property is provided in the Appendix A.2.

**Progressive HOI denoising.** We design a three-stage denoising approach (outlined in Figure 3), each stage dedicated to cleaning one aspect of the representation: $\mathcal{J}$, $\mathcal{S}$, and $\mathcal{T}$, respectively. In each stage, a canonical denoising model is learned for the corresponding representation, and the denoising is carried out using the “denoising via diffusion” strategy. Given the input GeneOH$^{\text{input}} = \{\hat{\mathcal{J}}^{\text{input}}, \hat{\mathcal{S}}^{\text{input}}, \hat{\mathcal{T}}^{\text{input}}\}$, the first denoising stage, named **MotionDiff**, denoises the noisy canonical hand trajectory $\hat{\mathcal{J}}^{\text{input}}$ to $\mathcal{J}^{\text{stage}_1}$. The one-stage-denoised hand trajectory $\mathcal{J}^{\text{stage}_1}$ can be easily computed by de-canonicalizing the denoised canonical trajectory using the object poses, and GeneOH$^{\text{input}}$ is updated accordingly into GeneOH$^{\text{stage}_1} = \{\mathcal{J}^{\text{stage}_1}, \mathcal{S}^{\text{stage}_1}, \hat{\mathcal{T}}^{\text{stage}_1}\}$. Then the second stage, named **SpatialDiff**, denoises the noisy spatial relation $\mathcal{S}^{\text{stage}_1}$ to $\mathcal{S}^{\text{stage}_2}$. The two-stage-denoised hand trajectory $\mathcal{J}^{\text{stage}_2}$ can be transformed from the hand-object relative offsets in $\mathcal{S}^{\text{stage}_2}$: $\mathcal{J}^{\text{stage}_2} = \text{Average}\{(h_k - o_k) + o_k \mid o_k \in \mathcal{P}_k\}$. Following this, GeneOH$^{\text{stage}_1}$ is updated to GeneOH$^{\text{stage}_2} = \{\mathcal{J}^{\text{stage}_2}, \mathcal{S}^{\text{stage}_2}, \hat{\mathcal{T}}^{\text{stage}_2}\}$. Finally, the last stage, named **TemporalDiff**, denoises $\hat{\mathcal{T}}^{\text{stage}_2}$ to $\mathcal{T}^{\text{stage}_3}$. Since temporal information such as relative velocities is redundantly encoded in $\mathcal{T}$, we compute the three-stage-denoised hand trajectory $\mathcal{J}^{\text{stage}_3}$ by optimizing $\mathcal{J}^{\text{stage}_2}$ so that its induced temporal representation aligns with $\mathcal{T}^{\text{stage}_3}$. We take $\mathcal{J}^{\text{stage}_3}$ as the final denoising output, denoted as $\mathcal{J}$. Each stage does not undermine the naturalness achieved after the previous stages, as proved in the Appendix A.2.

**Fitting for a hand mesh trajectory.** With the denoised trajectory $\mathcal{J}$ and the object trajectory, a parameterized hand sequence represented via MANO parameters $\{r_k, t_k, \beta_k, \theta_k\}_{k=1}^{K}$ is optimized to fit $\mathcal{J}$ well. Details are illustrated in the Appendix A.3.

## 4 Experiments

We conduct extensive experiments to demonstrate the effectiveness of our method. We train all models on the same training dataset and introduce four test sets with different levels of domain shift to assess their denoising and generalization ability (see Section 4.2). Moreover, we demonstrate the ability of our denoising method to produce multiple reasonable solutions for a single input in Section 4.3. At last, we show various applications that we can support (Section 4.4). Another series of experiments using a different training set is presented in the Appendix B.1.

### 4.1 Experimental Settings

**Training datasets.** All models are trained on the GRAB dataset (Taheri et al., 2020). We follow the cross-object splitting strategy used in TOCH (Zhou et al., 2022) and train models on the training set. Our denoising model only requires ground-truth sequences for training. For models that require noisy counterparts, we perturb each sequence by adding Gaussian noise to the hand MANO translation, rotation, and pose parameters, with standard deviations set to 0.01, 0.1, and 0.5, respectively.
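A minimal sketch of this perturbation, assuming MANO parameters stored as NumPy arrays (shapes and the helper name are illustrative); the per-group standard deviations differ because translation (meters), global rotation, and pose (axis-angle radians) live on very different scales:

```python
import numpy as np

def perturb_mano_params(trans, rot, pose, sigma_t=0.01, sigma_r=0.1, sigma_p=0.5):
    """Add Gaussian noise to the MANO translation, rotation, and pose
    parameters with per-group standard deviations (0.01, 0.1, 0.5).
    Shapes follow common MANO conventions, e.g., trans (K, 3), rot (K, 3),
    pose (K, 45) for a K-frame sequence."""
    rng = np.random.default_rng()
    return (
        trans + rng.normal(0.0, sigma_t, trans.shape),
        rot + rng.normal(0.0, sigma_r, rot.shape),
        pose + rng.normal(0.0, sigma_p, pose.shape),
    )
```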
**Evaluation datasets.** We evaluate our model and baselines on four distinct test sets, namely the GRAB test set with Gaussian noise, the GRAB (Beta) test set with noise sampled from a Beta distribution ($B(8, 2)$), the HOI4D dataset (Liu et al., 2022) with real noise patterns resulting from depth sensing errors and inaccurate pose estimation algorithms, and the ARCTIC dataset (Fan et al., 2023) with Gaussian noise but containing challenging bimanual and dynamic interactions with changing contacts. Noisy trajectories with synthetic noise are created by adding noise sampled from the corresponding distributions to the MANO parameters.

**Metrics.** We introduce two sets of evaluation metrics. The first set focuses on assessing the model’s ability to recover GT trajectories from noisy inputs following previous works (Zhou et al., 2022), including Mean Per-Joint/Vertex Position Error (MPJPE/MPVPE), measuring the average distance between the denoised hand joints or vertices and the corresponding GT positions, and Contact IoU (C-IoU), assessing the similarity between the contact map induced by the denoised trajectory and the GT. The second set quantifies the quality of the denoised results, including Solid Intersection Volume (IV) and Penetration Depth, measuring penetrations; Proximity Error, evaluating the difference of the hand-object proximity between the denoised trajectory and the GT; and HO Motion Consistency, assessing the hand-object motion consistency. Detailed calculations are presented in the Appendix C.2.
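For instance, MPJPE and MPVPE reduce to the same computation over joints or mesh vertices; a sketch (shapes illustrative):

```python
import torch

def mpjpe(pred, gt):
    """Mean Per-Joint (or Per-Vertex) Position Error: the average Euclidean
    distance between predicted and GT points over all frames and points.
    pred, gt: (K, N, 3) tensors (N = 21 joints or MANO mesh vertices)."""
    return (pred - gt).norm(dim=-1).mean()
```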
Figure 4: Qualitative comparisons. Please refer to our website and video for animated results.

**Baselines.** We compare our model with the prior art on the HOI denoising problem, TOCH (Zhou et al., 2022). A variant named “TOCH (w/ MixStyle)” is further created by combining TOCH with a general domain generalization method, MixStyle (Zhou et al., 2021a). Another variant, “TOCH (w/ Aug.)”, where TOCH is trained on the training sets of GRAB and GRAB (Beta), is further introduced to enhance its robustness towards unseen noise patterns.

**Evaluation settings.** When evaluating our model, we select the trajectory that is closest to the input noisy trajectory from 100 randomly sampled denoised trajectories using seeds from 0 to 99. For deterministic denoising models, we report the performance of a single run. Since our model can give multiple solutions for a single input, we additionally report in the Appendix the performance of our model in the form of averages with standard deviations on the second metric set measuring quality.

### 4.2 HOI Denoising

We evaluated our model and compared it with previous works on four test sets: GRAB, GRAB (Beta), HOI4D, and ARCTIC. In the GRAB test set, all objects were unseen during training, resulting in a shift in the interaction distribution. In the GRAB (Beta) test set, the object shapes, interaction patterns, and noise patterns differ from those in the training set. The HOI4D dataset includes interaction sequences with novel objects and unobserved interactions, along with real noise caused by inaccurate sensing and vision estimations. The ARCTIC dataset contains challenging bimanual dexterous HOI sequences with dynamic contacts.

Table 1: Quantitative evaluations and comparisons to baselines. **Bold red** numbers denote the best values and *italic blue* values the second-best ones. “GT” stands for “Ground Truth”.

| Dataset | Method | MPJPE (mm, ↓) | MPVPE (mm, ↓) | C-IoU (%, ↑) | IV (cm³, ↓) | Penetration Depth (mm, ↓) | Proximity Error (mm, ↓) | HO Motion Consistency (mm², ↓) |
|---------|--------|---------------|---------------|--------------|-------------|---------------------------|-------------------------|--------------------------------|
| | GT | - | - | - | 0.50 | - | - | 0.51 |
| | Input | 23.16 | 22.78 | 1.01 | 4.48 | 5.25 | 13.29 | 881.23 |
| GRAB | TOCH | 12.38 | 12.14 | 23.31 | 2.09 | 2.17 | 3.12 | 20.37 |
| | TOCH (w/ MixStyle) | 13.36 | 13.03 | 23.70 | 2.28 | 2.62 | 3.10 | 21.29 |
| | TOCH (w/ Aug.) | 12.23 | 11.89 | 22.71 | 1.94 | 2.04 | 3.16 | 22.58 |
| | Ours | 9.28 | 9.22 | 25.27 | 1.23 | 1.74 | 2.53 | 0.57 |
| | Input | 17.65 | 17.40 | 13.21 | 2.19 | 4.77 | 5.83 | 27.58 |
| GRAB (Beta) | TOCH | 24.10 | 22.90 | 16.32 | 2.33 | 2.77 | 5.60 | 25.05 |
| | TOCH (w/ MixStyle) | 22.79 | 21.19 | 16.28 | 2.01 | 2.63 | 4.65 | 17.37 |
| | TOCH (w/ Aug.) | 11.65 | 10.47 | 24.81 | 1.52 | 1.86 | 3.07 | 13.09 |
| | Ours | 9.09 | 8.98 | 26.76 | 1.19 | 1.69 | 2.74 | 0.52 |
| | Input | - | - | - | 2.26 | 2.47 | - | 46.45 |
| HOI4D | TOCH | - | - | - | 4.09 | 4.46 | - | 35.93 |
| | TOCH (w/ MixStyle) | - | - | - | 4.31 | 4.96 | - | 25.67 |
| | TOCH (w/ Aug.) | - | - | - | 4.20 | 4.51 | - | 25.85 |
| | Ours | - | - | - | 1.99 | 2.15 | - | 9.81 |
| | GT | - | - | - | 0.33 | 0.92 | 0 | 0.41 |
| | Input | 25.51 | 24.84 | 1.68 | 2.28 | 4.89 | 15.21 | 931.69 |
| ARCTIC | TOCH | 14.34 | 14.07 | 20.32 | 1.84 | 2.01 | 4.31 | 18.50 |
| | TOCH (w/ MixStyle) | 13.82 | 13.58 | 21.70 | 1.92 | 2.13 | 4.25 | 18.02 |
| | TOCH (w/ Aug.) | 14.18 | 13.90 | 20.10 | 1.75 | 1.98 | 5.64 | 22.57 |
| | Ours | 11.57 | 11.09 | 23.49 | 1.35 | 1.93 | 2.71 | 0.92 |

Table 1 and Figures 4 and 5 summarize the results, demonstrating the superiority of our method in recovering GT sequences and producing high-quality results compared to previous baseline methods. We include more results in the Appendix B.1 and on our website and video.

**Performance on challenging noisy interactions.** As shown in Figure 4, the perturbed noisy trajectories exhibit obvious problems such as unnatural hand poses, large and difficult penetrations such as penetrating the thin mug handle, and unrealistic manipulations caused by incorrect contacts and inconsistent hand-object motions. Our method can produce visually appealing interaction sequences from noisy inputs effectively. Besides, it has no difficulty in handling tricky shapes such as the mug handle and scissor rings, which are very easy to penetrate. However, TOCH cannot perform well: its results still exhibit obvious penetrations (the last frame) and hand motions that are insufficient to manipulate the mug. Furthermore, our method is not challenged by difficult and dynamic motions with changing contacts, as demonstrated by the results on the ARCTIC dataset.

**Results on noisy interactions with unseen noise patterns.** In Figure 4, we demonstrate our method’s robustness against new noise patterns, including previously unseen synthetic noise and novel real noise. Our approach effectively cleans such noise, producing visually appealing and motion-aware results with accurate contacts. In contrast, TOCH fails in these scenarios, as it exhibits obvious penetrations (as seen in the middle example) and results in stiff hand trajectories without proper contacts to manipulate the object (as seen in the rightmost example).

### 4.3 Stochastic HOI Denoising

Figure 5 illustrates our ability to provide multiple plausible denoised results for a single noisy input. Notably, we observe discrete manipulation modes among these results. For instance, in the leftmost example of Figure 5, our model generates different hand poses to address the unnatural phenomenon in the second frame, where two fingers penetrate the camera. Similarly, in the rightmost example, our results offer two distinct ways to rotate the scissors by a certain angle.
4.4 Applications

**Cleaning hand trajectory estimations.** As a denoising model, our approach can effectively refine hand trajectory estimations derived from image sequence observations. Figure 6 provides examples of applying our model to estimations obtained from ArcticNet-LSTM (Fan et al., 2023).

**Refining noisy retargeted hand motions.** In the right part of Figure 6, we showcase the application of our denoising model in cleaning noisy retargeted hand trajectories. Our model excels at resolving penetrations present in the sequence resulting from direct retargeting. In contrast, TOCH’s result still suffers from noticeable penetrations.

Figure 6: Applications on refining noisy hand trajectories estimated from videos (left) and cleaning retargeted hand trajectories (right).

5 ABLATION STUDY

**Generalized contact-centric parameterizations.** GeneOH leverages generalized contact points to normalize the hand-object relations. To assess the effectiveness of this design, we create an ablated model named “Ours (w/o Canon.)”, which uses points sampled from the entire object surface for parameterization. From Table 2, we can observe that our design of parameterizing around the interaction region successfully improves the model’s generalization ability towards unseen interactions.

**Denoising via diffusion.** To further investigate the impact of the “denoising via diffusion” strategy on enhancing the model’s generalization ability, we ablate it by replacing the denoising model with an autoencoder structure. The results are summarized in Table 2. Besides, the comparisons between “Ours (w/o Diffusion)” and TOCH highlight the superiority of our representation GeneOH as well.

**Hand-object spatial and temporal denoising.** We propose a progressive denoising strategy composed of three stages to clean the complex interaction noise. This multi-stage approach is crucial, as a single denoising stage would fail to produce reasonable results in the presence of complex interaction noise. To validate the effectiveness of the stage-wise denoising, we created two ablated versions: a) “Ours (w/o TemporalDiff)” by removing the temporal denoising module, and b) “Ours (w/o SpatialDiff)” by removing both the temporal and spatial denoising modules. Figure 7 and Table 2 demonstrate their effectiveness in removing unnatural hand-object penetrations and enforcing consistent hand-object motions. More quantitative and qualitative results for the ablation studies are included in the Appendix B.2.

6 CONCLUSION AND LIMITATIONS

In this work, we propose GeneOH Diffusion to tackle the generalizable HOI denoising problem. We resolve the challenge by 1) designing an informative HOI representation that is friendly for generalization, and 2) learning a canonical denoising model for domain-generalizable denoising. Experiments demonstrate our high denoising capability and generalization ability.

**Limitations.** The main limitation lies in the assumption of accurate object pose trajectories. It may not hold if the HOI sequences are estimated from in-the-wild videos. Refining object poses and hand poses at the same time is a valuable and practical research direction.

Table 2: Ablation studies.

| Method | IV (cm³, ↓) | Penetration Depth (mm, ↓) | HO Motion Consistency (mm², ↓) |
|-------------------------|-------------|---------------------------|--------------------------------|
| Input | 2.26 | 2.47 | 46.35 |
| Ours (w/o SpatialDiff) | 2.94 | 3.41 | 31.67 |
| Ours (w/o TemporalDiff) | 1.72 | 1.90 | 34.25 |
| Ours (w/o Diffusion) | 3.16 | 3.83 | 18.65 |
| Ours (w/o Canon.) | 2.36 | 3.57 | 13.26 |
| Ours | 1.99 | 2.15 | 9.81 |
Figure 7: Effectiveness of the SpatialDiff and TemporalDiff stages.

REFERENCES

Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or. Learning character-agnostic motion for motion retargeting in 2d. *arXiv preprint arXiv:1905.01680*, 2019.

Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. *Advances in neural information processing systems*, 24, 2011.

Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. *arXiv preprint arXiv:2206.00941*, 2022.

Guillaume Dewaele, Frédéric Devernay, and Radu Horaud. Hand motion from 3d point trajectories and a smooth surface model. In *European Conference on Computer Vision*, pp. 495–507. Springer, 2004.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *Advances in Neural Information Processing Systems*, 34:8780–8794, 2021.

Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via model-agnostic learning of semantic features. *Advances in Neural Information Processing Systems*, 32, 2019.

Zicong Fan, Omid Taheri, Dimitrios Tzionas, Muhammed Kocabas, Manuel Kaufmann, Michael J. Black, and Otmar Hilliges. ARCTIC: A dataset for dexterous bimanual hand-object manipulation. In *Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2023.

Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Christian Theobalt, and Philipp Slusallek. Imos: Intent-driven full-body motion synthesis for human-object interactions. In *Computer Graphics Forum*, volume 42, pp. 1–12. Wiley Online Library, 2023.

Patrick Grady, Chengcheng Tang, Christopher D Twigg, Minh Vo, Samarth Brahmbhatt, and Charles C Kemp. Contactopt: Optimizing contact to improve grasps. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 1471–1481, 2021.

Vladimir Guzov, Torsten Sattler, and Gerard Pons-Moll. Visually plausible human-object interaction capture from wearable sensors. *arXiv preprint arXiv:2205.02830*, 2022.

Georg Hackenberg, Rod McCall, and Wolfgang Broll. Lightweight palm and finger tracking for real-time 3d gesture control. In *2011 IEEE Virtual Reality Conference*, pp. 19–26. IEEE, 2011.

Shreyas Hampali, Mahdi Rad, Markus Oberweger, and Vincent Lepetit. Honnotate: A method for 3d annotation of hand and object poses. In *CVPR*, 2020.

Chris Hecker, Bernd Raabe, Ryan W Enslow, John DeWeese, Jordan Maynard, and Kees van Prooijen. Real-time motion retargeting to highly varied user-created morphologies. *ACM Transactions on Graphics (TOG)*, 27(3):1–11, 2008.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in Neural Information Processing Systems*, 33:6840–6851, 2020.

Siyuan Huang, Zan Wang, Puhao Li, Baoxiong Jia, Tengyu Liu, Yixin Zhu, Wei Liang, and Song-Chun Zhu. Diffusion-based generation, optimization, and planning in 3d scenes. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 16750–16761, 2023.

Junguang Jiang, Yang Shu, Jianmin Wang, and Mingsheng Long. Transferability in deep learning: A survey. *arXiv preprint arXiv:2201.05867*, 2022.

Hirokazu Kato, Mark Billinghurst, Ivan Poupyrev, Kenji Imamoto, and Keihachiro Tachibana. Virtual object manipulation on a table-top AR environment.
In *Proceedings IEEE and ACM International Symposium on Augmented Reality (ISAR 2000)*, pp. 111–119. IEEE, 2000.
otHZ8JAIgh
Since both PID and PIB rely on sampling from distributions, it does seem that the performance will indeed be affected by which samples are chosen or how many of them are sampled. The discussion around this point needs to be made explicit.
Prototypical Information Bottlenecking and Disentangling for Multimodal Cancer Survival Prediction

Yilan Zhang\(^{1,2}\), Yingxue Xu\(^{1}\), Jianqi Chen\(^{2}\), Fengying Xie\(^{*2}\), Hao Chen\(^{*1}\)

\(^1\)The Hong Kong University of Science and Technology, \(^2\)Beihang University

yxueb@connect.ust.hk, jhc@cse.ust.hk, \{zhangyilan, cjqchenjianqi, xfy\_73\}@buaa.edu.cn

Abstract

Multimodal learning significantly benefits cancer survival prediction, especially the integration of pathological images and genomic data. Despite the advantages of multimodal learning for cancer survival prediction, massive redundancy in multimodal data prevents it from extracting discriminative and compact information: (1) An extensive amount of intra-modal task-unrelated information blurs discriminability, especially for gigapixel whole slide images (WSIs) with many patches in pathology and thousands of pathways in genomic data, leading to an “intra-modal redundancy” issue. (2) Duplicated information among modalities dominates the representation of multimodal data, which makes modality-specific information prone to being ignored, resulting in an “inter-modal redundancy” issue. To address these, we propose a new framework, Prototypical Information Bottlenecking and Disentangling (PIBD), consisting of a Prototypical Information Bottleneck (PIB) module for intra-modal redundancy and a Prototypical Information Disentanglement (PID) module for inter-modal redundancy. Specifically, a variant of the information bottleneck, PIB, is proposed to model prototypes approximating a bunch of instances for different risk levels, which can be used for the selection of discriminative instances within a modality. The PID module decouples entangled multimodal data into compact distinct components, modality-common and modality-specific knowledge, under the guidance of the joint prototypical distribution. Extensive experiments on five cancer benchmark datasets demonstrated our superiority over other methods. The code is released.\footnote{https://github.com/zylbuaa/PIBD.git}

1 Introduction

Cancer survival analysis \cite{cox1975regression, jenkins2005prognostic, salerno2023multimodal} aims to estimate the death risk of patients for prognosis, in which multimodal learning by integrating both histological information and genomic molecular profiles can benefit the prognosis of a majority of cancer types \cite{chen2020multimodal, chen2022multimodal, chen2021multimodal, jaume2023multimodal, xu2023multimodal}. These modalities offer diverse perspectives for patient stratification and informing therapeutic decision-making \cite{zuo2022multimodal}. For example, histological images give visual phenotypic information about the tumor microenvironment, e.g., the organization of cells \cite{jackson2020multimodal}, for different gradings of cancer, while genomics data provides global landscapes \cite{gyorffy2021integrating} for various molecular subtypings of cancer. They collaboratively contribute to different survival outcomes.

Nevertheless, a large quantity of redundancy in multimodal data poses significant challenges to effective fusion. The primary question at hand is: How can we capture the discriminative information from a single modality by eliminating its redundancy, referred to as the “intra-modal redundancy” issue? The label for a WSI consisting of numerous patches is typically provided at the WSI level, leading to weak supervision for survival prediction.
In the absence of precise annotations, such as patch-wise labeling for cancerous regions in WSIs, both task-related and irrelevant information become intermingled in the model’s input, resulting in information redundancy \cite{hosseini2023prototypical}. Specifically, the region of interest, e.g., the tumor cells highly related to risk assessment, only occupies a small portion of gigapixel WSIs with high resolutions of about $100,000 \times 100,000$ pixels (Zhu et al., 2017). For this fine-grained visual recognition, although certain multiple-instance learning (MIL) methods (Ilse et al., 2018; Li et al., 2021; Yao et al., 2020) have provided some promising solutions, they do not enforce constraints to remove redundant information and thus struggle to obtain discriminative representations. A similar redundancy issue emerges in the genomic modality. Research (Jaume et al., 2023; Chen et al., 2021) indicates that biological pathway-based gene groups, characterized by known interactions in unique cellular functions, offer more semantic correspondence with pathology features. However, these pathways can yield hundreds to thousands of groups, and only a few specific pathways exhibit a strong correlation with patient prognosis (e.g., immune-related pathways are significant for bladder cancer prognosis prediction (Jiang et al., 2021a)).

Another concern is: How can we capture compact yet comprehensive knowledge from the dominant overlapping information in multimodal data, referred to as the “inter-modal redundancy” issue? The redundancy stemming from this duplicated information can complicate knowledge extraction. Therefore, extracting independent factors by disentangling can enhance feature effectiveness while discarding superfluous information. The knowledge (Liang et al., 2023) can be split into distinct components: modality-specific knowledge and modality-common knowledge. The former contains information unique to a single modality, while the latter encapsulates common information and exhibits consistency across modalities. To obtain effective knowledge from multimodal redundancy, existing efforts (Chen et al., 2021; Xu & Chen, 2023) focus on integrating common information, emphasizing the inherent consistency through alignment. However, common information often dominates the alignment and integration of multimodal information, leading to the suppression of modality-specific information and thereby disregarding a wealth of distinctive perspectives.

In this work, we propose a new multimodal survival prediction framework, Prototypical Information Bottlenecking and Disentangling (PIBD), consisting of a Prototypical Information Bottleneck (PIB) module for “intra-modal redundancy” and a Prototypical Information Disentanglement (PID) module for “inter-modal redundancy”. First, the Information Bottleneck (IB) provides a promising solution to compress unnecessary redundancy out of the input while maximizing discriminative information about the task targets. However, IB may suffer from the high-dimensional computational challenges posed by the massive patches of a gigapixel WSI and hundreds of pathways. Instead, we propose a new IB variant, PIB, that models prototypes approximating a bunch of instances (e.g., patches of pathology or pathways of genomics) for different risk levels, which can guide the selection of discriminative instances within a modality. Secondly, PID removes inter-modal redundancy by comprehensively decomposing entangled multimodal features into ideally independent modality-common and modality-specific knowledge.
To do this, we reuse the joint prototypical distributions modeled by the aforementioned PIB to guide the extraction of common knowledge. Simultaneously, we enforce the model to learn knowledge different from the joint prototypical distribution, which serves as guidance for capturing modality-specific knowledge as well. It is worth noting that the proposed method can be extended to more multimodal problems with modalities of bag structure. The contributions are as follows: (1) Inspired by information theory for mitigating redundancy, we propose a new multimodal cancer survival framework, PIBD, addressing both “intra-modal” and “inter-modal” redundancy challenges. (2) We design a new IB variant, PIB, that models prototypes for selecting discriminative information to reduce intra-modal redundancy, while PID addresses inter-modal redundancy by decoupling multimodal data into distinct components with the guidance of the joint prototypical distribution. (3) Extensive experiments on five cancer benchmark datasets demonstrate the superiority of our approach over state-of-the-art methods.

2 RELATED WORKS

2.1 SURVIVAL PREDICTION FROM SINGLE MODALITY

Predicting survival risk is vital for understanding cancer progression. Recent advances in digital pathology (Evans et al., 2018) and high-throughput sequencing (Christinat & Krek, 2015) technologies have led to vibrant research in single-modal survival prediction using WSIs and genomics data, respectively. To handle gigapixel images, multiple-instance learning (MIL) defines a “bag” as a collection of multiple instances (i.e., image patches) and provides effective ways to learn global representations for WSIs. MIL methods focus on aggregations of instance-level predictions (Campanella et al., 2019; Feng & Zhou, 2017; Hou et al., 2016) or features (Ilse et al., 2018). For the former, bag predictions can simply be fused by pooling the probability values of instances, while the latter employs various strategies for obtaining the global features, e.g., clustering embeddings (Yao et al., 2020), modeling patch correlations with graphs (Guan et al., 2022), assigning attention weights (Ilse et al., 2018; Li et al., 2021), and learning long-range interactions with transformers (Shao et al., 2021). Furthermore, genomic data provides crucial molecular information essential for survival prediction as well. Typically represented as $1 \times 1$ measurements, genomic features can be extracted using simple neural networks, e.g., MLP (Haykin, 1998) and SNN (Klambauer et al., 2017). Although these single-modality-based methods have achieved remarkable improvements in feature extraction, they do not impose constraints on removing redundant information to capture discriminative features.

2.2 Survival Prediction from Multiple Modalities

In clinical practice, patients are usually collected with comprehensive multimodal data such as genomics (Klambauer et al., 2017), pathology (Zhu et al., 2017; Liu et al., 2022; Chen et al., 2022a), radiology (Jiang et al., 2021b; Yao et al., 2021), etc. for diagnosis and prognosis; thus learning multimodal interactions (Zhang et al., 2023) has become an important motivation for many studies. These methods are broadly categorized into tensor-based and attention-based fusion techniques (Zhang et al., 2020). Some tensor-based fusions, like concatenation (Mobadersany et al., 2018) and weighted sum (Huang et al., 2020), are simple with few parameters.
Alternatively, other tensor-based fusions use bilinear pooling to create a joint representation space by computing the outer product of features, e.g., the Kronecker product (Wang et al., 2021) and factorized bilinear pooling (Li et al., 2022). However, these methods are typically used in early or late fusion stages, making the inter-modal interactions (Chen et al., 2022b) prone to being neglected. Recently, attention-based fusion methods have focused on learning cross-modal correlations through co-attention mechanisms (Chen et al., 2021; Zhou & Chen, 2023). For instance, MCAT (Chen et al., 2021) proposed a gene-guided co-attention, HMCAT (Li et al., 2023b) designed a radiology-guided co-attention, MOTCat (Xu & Chen, 2023) introduced optimal transport (OT) to model global structure consistency, and SurvPath (Jaume et al., 2023) utilized cross-attention to model dense interactions between pathways and histologic patches. Although some approaches can partially alleviate redundancy through alignment, they are prone to losing modality-specific information.

2.3 Multimodal Learning with Information Theory

Recently, information theory has attracted increasing attention within the multimodal learning community due to its ability to provide measures for quantifying information (Dai et al., 2023; Liang et al., 2023; Hjelm et al., 2018). Specifically, approaches based on the information bottleneck (IB) principle (Tishby et al., 2000; Alemi et al., 2016) have emerged as effective strategies for compressing raw information while retaining task-relevant knowledge, which has found utility across multi-view (Federici et al., 2020; Lee & Van der Schaar, 2021) and multi-modal learning (Mai et al., 2022). Additionally, another kind of method centered on information disentanglement has been harnessed to extract targeted knowledge (Sanchez et al., 2020; Cheng et al., 2022; Chen et al., 2023), facilitating the learning of more compact representations. We introduce this direction into multimodal cancer survival analysis for the first time; inspired by information theory for mitigating redundancy, we propose a new framework, PIBD, that provides an information-theoretic solution to the massive redundancy issues in multimodal data.

3 Method

3.1 Overall Framework and Problem Formulation

Given the $i$-th patient's multimodal data, including pathology data $x_h^{(i)}$ and genomic data $x_g^{(i)}$, we aim to predict the patient's survival outcome by estimating a hazard function $f_{\text{hazard}}^{(i)}(t)$ that represents the risk probability of death at the time point $t$. Figure 1 displays the overall framework of our PIBD. We start by extracting unimodal representations for pathology and genomics data. Following the common setting for pathological WSIs and genomic pathways in previous works (Chen et al., 2021; Jaume et al., 2023), we formulate $x_h^{(i)}$ and $x_g^{(i)}$ as “bags” of instances based on multiple instance learning (MIL) for the $i$-th patient, denoted as $x_h^{(i)} = \{x_{h,j}^{(i)} \in \mathbb{R}^d\}_{j=1}^{M_h}$ and $x_g^{(i)} = \{x_{g,j}^{(i)} \in \mathbb{R}^d\}_{j=1}^{M_g}$, respectively, where $M_h$ is the number of patches of a WSI and $M_g$ is the number of biological pathways. To address “intra-modal redundancy,” we propose the Prototypical Information Bottleneck (PIB), detailed in Section 3.2, to select discriminative instances for each modality. Subsequently, to reduce “inter-modal redundancy”, we propose Prototypical Information Disentanglement (PID), explained in Section 3.3.
PID decomposes the multimodal data into an independent modality-common representation $C$ and modality-specific representations, denoted as $S_h$ and $S_g$ for the histological and genomic modalities, respectively.

Figure 1: Framework of PIBD. Patient data from pathology and genomics are initially structured into bags. The Prototypical Information Bottleneck (PIB) selects discriminative features to reduce “intra-modal redundancy”. Subsequently, the Prototypical Information Disentanglement (PID) module decouples the specific and common information to tackle “inter-modal redundancy”.

Survival prediction estimates the risk probability of an outcome event before a specific time. However, the outcome is not always observed, resulting in right-censored data. We denote \( c \in \{0, 1\} \) as the censorship status (\( c = 0 \) means an observed death, \( c = 1 \) means an unknown outcome), and the discrete survival time \( t \in \{1, 2, ..., N_t\} \) corresponds to a specific risk band. For a final multimodal feature \( H^{(i)} \) obtained from the pathology-genomics tuple \((x_{h}^{(i)}, x_{g}^{(i)}, t^{(i)}, c^{(i)})\) of the \( i \)-th patient, we use the NLL loss (Zadeh & Schmid, 2020) as the survival loss function, following previous works (Chen et al., 2021; Xu & Chen, 2023):

\[
L_{\text{surv}}(\{H^{(i)}, t^{(i)}, c^{(i)}\}_{i=1}^{N_D}) = -\sum_{i=1}^{N_D} \Big[ c^{(i)} \log f_{\text{surv}}^{(i)}(t^{(i)}|H^{(i)}) + (1 - c^{(i)}) \log f_{\text{surv}}^{(i)}(t^{(i)}-1|H^{(i)}) + (1 - c^{(i)}) \log f_{\text{hazard}}^{(i)}(t^{(i)}|H^{(i)}) \Big] \tag{1}
\]

where \( N_D \) is the number of samples in the training set, \( f_{\text{hazard}}^{(i)}(t|H^{(i)}) = P(T = t|T \geq t, H^{(i)}) \) is the hazard function representing the death probability, and \( f_{\text{surv}}^{(i)}(t|H^{(i)}) = \prod_{k=1}^{t}(1 - f_{\text{hazard}}^{(i)}(k|H^{(i)})) \) is the survival function, viewed as the survival probability up to time point \( t \). To simplify, we let \( y \) represent the patient labels \((t, c)\), resulting in \( 2N_t \) labels.

3.2 Prototypical Information Bottleneck

To tackle the “intra-modal redundancy”, we introduce the information bottleneck and propose a new variant called the Prototypical Information Bottleneck (PIB).

Preliminary of Information Bottleneck. The IB introduces a new representation variable \( Z \) that is maximally expressive about the target \( Y \) while compressing the original information from the input \( X \). Thus, the objective function to be maximized is given in (Tishby et al., 2000) as:

\[
R_{IB} = I(Z, Y) - \beta I(Z, X) \tag{2}
\]

where \( I(\cdot, \cdot) \) represents the mutual information (MI), which measures the dependence between two variables. The hyperparameter \( \beta \geq 0 \) acts as a Lagrange multiplier controlling the trade-off, where higher \( \beta \) values lead to more compressed representations. However, the computation of MI is intractable; VIB (Alemi et al., 2016) therefore transformed Eq. (2) into maximizing a variational lower bound. By inverting the objective function of the variational lower bound, it minimizes the loss function (the derivation can be found in Appendix B.2.1):

\[
J_{IB} = \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}_{z \sim p(z|x_n)}[-\log q_\theta(y_n|z)] + \beta KL[p(z|x_n), r(z)] \tag{3}
\]

where \( N \) denotes the sample size, \( q_\theta(y|z) \) is a variational approximation of the intractable likelihood \( p(y|z) \), \( p(z|x) \) is the posterior distribution over \( z \), and \( r(z) \) approximates the prior probability \( p(z) \).
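For concreteness, the discrete-time NLL in Eq. (1) can be computed directly from per-bin hazard predictions. Below is a minimal PyTorch-style sketch under the conventions stated above (\( c = 0 \) for observed deaths); the function and variable names are illustrative, not the authors' released code.

```python
import torch

def nll_survival_loss(hazard_logits, t, c, eps=1e-7):
    """Discrete-time survival NLL of Eq. (1) (minimal sketch; names illustrative).

    hazard_logits: (B, N_t) raw scores; a sigmoid yields f_hazard(k|H) per time bin.
    t: (B,) long tensor of ground-truth time bins in [0, N_t - 1].
    c: (B,) censorship status (c = 0: observed death, c = 1: censored).
    """
    hazards = torch.sigmoid(hazard_logits)                             # f_hazard(k|H)
    surv = torch.cumprod(1.0 - hazards, dim=1)                         # f_surv(k|H)
    surv_pad = torch.cat([torch.ones_like(surv[:, :1]), surv], dim=1)  # pad so f_surv(t-1) exists at t = 0
    t = t.unsqueeze(1)
    s_t = torch.gather(surv_pad, 1, t + 1).clamp_min(eps)              # f_surv(t|H)
    s_prev = torch.gather(surv_pad, 1, t).clamp_min(eps)               # f_surv(t-1|H)
    h_t = torch.gather(hazards, 1, t).clamp_min(eps)                   # f_hazard(t|H)
    c = c.unsqueeze(1).float()
    # censored (c = 1): log f_surv(t); uncensored (c = 0): log f_surv(t-1) + log f_hazard(t)
    loss = -(c * torch.log(s_t) + (1.0 - c) * (torch.log(s_prev) + torch.log(h_t)))
    return loss.mean()
```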
In practice, \( r(z) \) is commonly assumed to be a spherical Gaussian (Alemi et al., 2016), and the posterior distribution \( p(z|x) \) can be variationally approximated as:

\[
p(z|x) \approx q_\theta(z|x) = \mathcal{N}(z; f^{\mu}_E(x), f^{\Sigma}_E(x)) \tag{4}
\]

where \( f_E \) is an MLP encoder that predicts both the mean \( \mu \) and the covariance matrix \( \Sigma \).

**Prototypical Information Bottleneck.** IB seems to provide a hopeful solution to reduce intra-modal redundancy. However, in our task, the modality data \( \mathbf{x} \) is organized as a “bag” containing numerous instances. To learn a compact bag via IB, one potential solution is to directly employ the variational approximation \( q_\theta(z|x) \) of Eq. (4) in VIB to learn a representation for each instance \( x \in \mathbf{x} \) in the bag. However, the drawbacks of this solution are two-fold. First, it is challenging to derive the overall distribution \( p(z|\mathbf{x}) \) of the entire bag \( \mathbf{x} \) from such a large number of individual instance distributions, leading to a high-dimensional computational challenge. That is, the posterior distribution \( p(z|\mathbf{x}) \) with respect to the high-dimensional \( \mathbf{x} \) in the second term of Eq. (3) is intractable. Second, since the distribution of each instance is individually learned, it is difficult to capture bag-level information for representing a compact bag.

Therefore, we propose the **Prototypical Information Bottleneck (PIB)** to directly approximate the bag-level distribution \( p(z|\mathbf{x}) \) with a parametric distribution \( p(\hat{z}) \) represented by a group of prototypes, denoted as \( P = \{ \mathcal{N}(\hat{z}; \mu_y, \Sigma_y) \}_{y=1}^{2N_t} \) (including scenarios with censored and uncensored data). To capture discriminative information about the task target, each prototype is supposed to represent a conditional probability distribution \( p(\hat{z}|y) = \mathcal{N}(\hat{z}; \mu_y, \Sigma_y) \) for its corresponding risk band \( y \). Then, instances \( z \) of a bag are expected to approach the \( \hat{z} \) with the same label \( y \). Hence, the objective of the variational approximation in Eq. (4) becomes:

\[
p(z|\mathbf{x}) = p(z|\mathbf{x}, y) \approx p(\hat{z}|y) \tag{5}
\]

To achieve this objective, we maximize the similarity between \( p(\hat{z}) \) and the spatial distributions of the latent features \( z = f_E(x) \) of a bunch of instances, where an MLP is utilized as a representation encoder \( f_E(\cdot) \) to map the input \( x \) to latent features \( z \). As a result, we only need to optimize the parametric prototypes \( \hat{z} \) and \( f_E(\cdot) \) for a bag \( \mathbf{x} \), instead of modeling \( p(z|x) \) for each instance of the bag. In detail, to align the distribution of latent features \( z \) and parametric prototypes \( \hat{z} \), we first sample some features from the various prototypes via Monte Carlo sampling (to simplify the mathematical notation, we assume sampling once from each prototype). Then, we attempt to maximize the similarities between the positive prototype \( \hat{z}_+ \) (with the true label) and the most related instances, while minimizing the similarities between these instances and the other, negative prototypes \( \hat{z}_- \). For example, given the \( i \)-th patient's data, we have the bag features \( z^{(i)} = f_E(\mathbf{x}^{(i)}) = \{ z_m^{(i)} \}_{m=1}^{M} \) and the features \( \hat{z}^{(i)} = \{ \hat{z}_n^{(i)} \}_{n=1}^{2N_t} \) sampled from the prototypes, where \( M \) is the number of instances in a bag \( \mathbf{x}^{(i)} \) and \( 2N_t \) is the number of prototypes.
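The parametric prototypes just described can be implemented as learnable Gaussian parameters with reparameterized Monte Carlo sampling. A minimal sketch, assuming diagonal covariances (all names are illustrative):

```python
import torch
import torch.nn as nn

class GaussianPrototypes(nn.Module):
    """2*N_t learnable Gaussians N(mu_y, Sigma_y), one per (time-bin, censorship) label.

    Minimal sketch with diagonal covariance; sampling uses the reparameterization
    trick so the prototype parameters receive gradients.
    """
    def __init__(self, num_prototypes, dim):
        super().__init__()
        self.mu = nn.Parameter(0.02 * torch.randn(num_prototypes, dim))
        self.log_var = nn.Parameter(torch.zeros(num_prototypes, dim))

    def sample(self, n_samples=1):
        # z_hat ~ N(mu_y, diag(exp(log_var_y))) for every prototype y.
        std = torch.exp(0.5 * self.log_var)
        eps = torch.randn(n_samples, *self.mu.shape, device=self.mu.device)
        return self.mu + eps * std   # shape: (n_samples, 2*N_t, dim)
```

Calling `sample(1)` reproduces the one-draw-per-prototype simplification assumed in the text.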
Then, we measure the similarity between each prototype \( \hat{z}_n^{(i)} \) and the bag \( z^{(i)} \) as:

\[
\text{Sim}(\hat{z}_n^{(i)}, z^{(i)}) = \frac{1}{M} \sum_{m=1}^{M} d(\hat{z}_n^{(i)}, z_m^{(i)}) \tag{6}
\]

where \( d(\cdot) \) can be any similarity measure; we use cosine similarity in our experiments. To eliminate redundant instances unrelated to risk prediction, we select a portion of instances with higher similarity scores in a bag, while the discarded instances do not contribute to the learning process. During training, since we have access to the true label, the objective of approximating \( p(z|\mathbf{x}, y) \) with the prototypes \( p(\hat{z}|y) \) in Eq. (5) can be achieved by pulling these most related instances closer to the positive (+) prototype while pushing them away from the negative (−) ones, formulated as:

\[
L_{\text{pro}} = \frac{1}{N_D} \sum_{i=1}^{N_D} \Big[ -\text{Sim}(\hat{z}_+^{(i)}, \tilde{z}_+^{(i)}) + \frac{1}{2N_t - 1} \sum_{n=1}^{2N_t - 1} \text{Sim}(\hat{z}_{-,n}^{(i)}, \tilde{z}_{-,n}^{(i)}) \Big] \tag{7}
\]

where \( \tilde{z}_n^{(i)} = \{ z_j^{(i)} : \forall 1 \leq j \leq M_{Irr},\ d(\hat{z}_n^{(i)}, z_j^{(i)}) \geq d(\hat{z}_n^{(i)}, z_{j+1}^{(i)}) \} \) represents the retained instances containing task-related discriminative information with higher similarities. The retained number \( M_{Irr} \) is determined by the hyperparameter \( Irr \), the information retention rate, which controls the proportion of redundancy removal achieved by the prototypes.

To review the objective of IB, we substitute the prototypes \( \hat{Z} \) into the IB objective function in Eq. (2). After obtaining the approximation \( p(\hat{z}|y) \) of \( p(z|\mathbf{x}, y) \), i.e., \( p(z|\mathbf{x}) \), in Eq. (5), we can conduct a derivation similar to that from Eq. (2) to Eq. (3) (details can be found in Appendix B.2.2) to obtain the objective loss function of PIB to be minimized:

\[
J_{PIB} = \frac{1}{2N_t} \sum_{n=1}^{2N_t} \mathbb{E}_{\hat{z} \sim p(\hat{z}|y_n)}[-\log q_\theta(y_n|\hat{z})] + \beta KL[p(\hat{z}|y_n), r(z)] \tag{8}
\]

where the first term is a cross-entropy loss for learning discriminative features. Since we are dealing with a survival prediction task whose labels contain survival time and censoring status, we use the task loss NLL in Eq. (1) as an alternative for the first term. Finally, combining the approximation term \( L_{\text{pro}} \), we obtain the total loss function for PIB to be minimized as follows:

\[
L_{\text{PIB}} = \frac{1}{2N_t} \sum_{n=1}^{2N_t} \left\{ \alpha L_{\text{surv}}(\hat{z}_n, t_n, c_n) + \beta KL[\mathcal{N}(z; \mu_n, \Sigma_n), r(z)] \right\} + \gamma L_{\text{pro}} \tag{9}
\]

where \( \mathcal{N}(z; \mu_n, \Sigma_n) = p(z|y_n) \), and \( \alpha, \beta, \gamma \) are hyperparameters that control the impact of the respective terms. As a result, the modeled PIB can guide the extraction of discriminative features and the removal of redundant information for each modality organized as bags.

Figure 2: Disentangled Transformer. Self-attention is employed to model the intra-modal interactions, while a token sampled from the joint prototypical distribution is used to guide common information extraction through cross-attention.

3.3 Prototypical Information Disentanglement

After eliminating redundancy from the unimodal sources, we propose a Prototypical Information Disentanglement (PID) module to decouple the shared and specific representations, addressing the “inter-modal redundancy”.
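Before detailing PID, the PIB selection step of Eqs. (6)–(7) can be made concrete. The following is a minimal PyTorch-style sketch for a single bag with cosine similarity; `irr` denotes the information retention rate, and all names are illustrative rather than the authors' code:

```python
import torch
import torch.nn.functional as F

def pib_select_and_loss(z_bag, z_hat, pos_idx, irr=0.5):
    """PIB instance selection and prototype loss, sketching Eqs. (6)-(7).

    z_bag: (M, d) latent features of one bag; z_hat: (P, d) prototype samples;
    pos_idx: index of the positive prototype (true label y); irr in (0, 1].
    """
    sims = F.cosine_similarity(z_hat.unsqueeze(1), z_bag.unsqueeze(0), dim=-1)  # (P, M)
    m_irr = max(1, int(irr * z_bag.size(0)))
    top_sims, top_ids = sims.topk(m_irr, dim=1)   # per prototype, keep the most similar instances
    per_proto = top_sims.mean(dim=1)              # Sim over the retained instances of Eq. (7)
    neg_mask = torch.ones_like(per_proto, dtype=torch.bool)
    neg_mask[pos_idx] = False
    l_pro = -per_proto[pos_idx] + per_proto[neg_mask].mean()  # pull to positive, push from negatives
    selected = z_bag[top_ids[pos_idx]]            # discriminative instances passed on to PID
    return l_pro, selected
```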
Suppose the instances selected by PIB are \( z_h^{(i)} \) and \( z_g^{(i)} \); we hope to decompose the entangled multimodal data into ideally independent modality-common features \( C^{(i)} \) and modality-specific features \( S_h^{(i)}, S_g^{(i)} \). To achieve this, we reuse the joint prototypical distributions modeled by PIB for extracting common knowledge. These common features can further be used as guidance for learning modality-specific knowledge by enforcing the specific knowledge to be independent from the shared features. Thus, we minimize the mutual information (MI) between common and specific factors to preserve modality-specific information. Consequently, our objective is to ensure the independence of the specific representations within each modality as well as the independence between common and specific features. The loss function of PID can be formally expressed as:

\[
L_{\text{PID}} = I(S, C) + I(S_h, S_g), \quad \text{where } S = \text{Cat}(S_h, S_g) \tag{10}
\]

where \( S \) denotes all specific representations, obtained by concatenating (\( \text{Cat}(\cdot) \)) the features \( S_h, S_g \) from each modality. As MI is intractable, we introduce an upper bound, CLUB (Cheng et al., 2020), to accomplish the MI minimization in Eq. (10) (details about CLUB can be found in Appendix B.3).

To implement the above loss, we design a disentangled layer called the disentangled transformer, shown in Figure 2. This transformer models various interactions within the inputs, thereby obtaining the features \( S_h, S_g \), and \( C \) required in Eq. (10). We initially extract the common information guided by the joint prototypical distribution, denoted as the joint posterior distribution \( p(z|x_h, x_g) \), which is defined by the product-of-experts (PoE) (Cao & Fleet, 2014), an idea of combining several distributions (“experts”) by multiplying them. Since we have previously obtained the positive prototypes in PIB, which approximate the distributions of the patient’s risk band, \( p(z|x_h, x_g) \) can be formulated as:

\[
p(z|x_h, x_g) \propto p(z)p(z|x_h)p(z|x_g) \tag{11}
\]

where \( p(z) \) is the prior distribution, and \( p(z|x_h) \approx \mathcal{N}(z; \mu^+_h, \Sigma^+_h) \) and \( p(z|x_g) \approx \mathcal{N}(z; \mu^+_g, \Sigma^+_g) \) are approximated by the distributions of the positive prototypes. We assume the prior distribution \( p(z) \) is a spherical Gaussian \( \mathcal{N}(z; \mu_0, \Sigma_0) \); it can then be shown that the product of Gaussian distributions is also a Gaussian \( p(z|x_h, x_g) = \mathcal{N}(z; \mu_c, \Sigma_c) \):

\[
\Sigma_c = \Big(\Sigma_0^{-1} + \sum_{i \in \{h,g\}} \Sigma_i^{-1}\Big)^{-1}, \quad \mu_c = \Big(\mu_0 \Sigma_0^{-1} + \sum_{i \in \{h,g\}} \mu_i \Sigma_i^{-1}\Big) \Sigma_c \tag{12}
\]

Hence, we sample from \( p(z|x_h, x_g) \) to obtain a guiding token for shared information extraction. The modality-common representations \( C \) are then extracted by the cross-attention within the disentangled transformer. Moreover, for the modality-specific information, self-attention encodes pathway-to-pathway and patch-to-patch interactions, and their mean representations become \( S_h, S_g \). Thus, under the constraint of Eq. (10), we can simultaneously extract compact features that contain both specific and common information.
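Since all distributions involved are Gaussian, the PoE fusion in Eq. (12) reduces to precision-weighted averaging. A minimal sketch for the diagonal-covariance case (names illustrative):

```python
import torch

def product_of_experts(mus, vars_):
    """Fuse expert Gaussians N(mu_i, diag(var_i)) with a standard-normal prior, per Eq. (12).

    mus, vars_: (K, d) stacked expert means/variances (here K = 2: pathology, genomics).
    Returns the mean and variance of the joint posterior p(z | x_h, x_g).
    """
    prior_mu = torch.zeros_like(mus[0])
    prior_var = torch.ones_like(vars_[0])
    precisions = torch.cat([(1.0 / prior_var).unsqueeze(0), 1.0 / vars_], dim=0)  # (K+1, d)
    means = torch.cat([prior_mu.unsqueeze(0), mus], dim=0)                        # (K+1, d)
    var_c = 1.0 / precisions.sum(dim=0)             # Sigma_c = (Sigma_0^-1 + sum_i Sigma_i^-1)^-1
    mu_c = var_c * (means * precisions).sum(dim=0)  # mu_c = Sigma_c * sum(Sigma^-1 mu), incl. prior
    return mu_c, var_c
```

A token sampled from \( \mathcal{N}(\mu_c, \text{diag}(\Sigma_c)) \) then serves as the query for the cross-attention in Figure 2.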
**Overall Loss.** The final loss of PIBD is as follows, where \( L_{PIB}^h \) and \( L_{PIB}^g \) represent the PIB loss formulated in Eq. (9) for the pathology and genomics modalities, respectively:

\[
L = L_{\text{surv}} + L_{PIB}^h + L_{PIB}^g + \lambda L_{PID} \tag{13}
\]

where \( \lambda \) is a weight factor that controls the impact of its loss term, as do \( \alpha, \beta, \gamma \) in Eq. (9). Note that the proposed method can be extended to more multimodal data of the bag structure.

**Inference.** The inference process differs from training mainly in how we find the positive prototype. During training, with known labels, we can directly obtain the joint prototypical distribution for PID. At inference, however, we need to identify the positive one from the set of prototypes. To achieve this, we first select the instances with higher similarity scores, calculated against all prototypes as in Eq. (7). These selected instances are considered relevant instances. Among the prototypes, the one with the highest proportion of relevant instances is considered positive. Hyperparameters such as the number of samples and the information retention rate remain consistent with the training process.

4 EXPERIMENT

4.1 DATASET AND IMPLEMENTATION DETAILS

We conduct extensive experiments on five public cancer datasets from TCGA\(^2\): Breast Invasive Carcinoma (BRCA), Bladder Urothelial Carcinoma (BLCA), Colon and Rectum Adenocarcinoma (COADREAD), Stomach Adenocarcinoma (STAD), and Head and Neck Squamous Cell Carcinoma (HNSC). We follow the work of Jaume et al. (2023) to collect the biological pathways as genomics data. 5-fold cross-validation is employed for each dataset. The models are evaluated using the concordance index (C-index) (Harrell Jr et al., 1996) and its standard deviation (std) to quantify the performance of correctly ranking the predicted patient risk scores. We also visualize the Kaplan-Meier (KM) (Kaplan & Meier, 1958) curves, which show the survival probability of different risk groups. The details of the dataset and experimental implementation can be found in Appendix C.1.

4.2 COMPARISONS WITH STATE-OF-THE-ARTS

We compare our method with three groups of SOTA methods: (1) **Unimodal methods.** For pathway data, we adopt MLP (Haykin, 1998), SNN (Klambauer et al., 2017), and SNNTrans (Klambauer et al., 2017; Shao et al., 2021) as the genomic baselines. For histology, we compare with the SOTA MIL methods ABMIL (Ilse et al., 2018), AMISL (Yao et al., 2020), TransMIL (Shao et al., 2021), and CLAM (Lu et al., 2021). (2) **Multimodal methods.** Four SOTA methods are compared in this group: Porpoise (Chen et al., 2022b), MCAT (Chen et al., 2021), MOTCat (Xu & Chen, 2023), and SurvPath (Jaume et al., 2023), where we adopt two late-fusion approaches, concatenation (Cat) and Kronecker product (KP), for both Porpoise and MCAT. Besides, a prediction-level combination using a CoxPH (Cox, 1972) model of the risk scores from the best-performing genomics and histology methods is also conducted. (3) **Information theory-based methods.** As our work provides an information theory perspective on multimodal cancer survival prediction, we also compare with information theory-based methods in the multi-view, multi-modal, and task-specific fine-tuning domains, including CLAM-SB-FT (Li et al., 2023a), MIB (Federici et al., 2020), DeepIMV (Lee & Van der Schaar, 2021), and L-MIB (Mai et al., 2022). Note that although CLAM-SB-FT is an IB-based method for WSIs, it is designed within a fine-tuning framework and has not been studied in multimodal survival prediction.
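As a reference for the evaluation protocol, the C-index counts the fraction of comparable patient pairs whose predicted risks are correctly ordered under right-censoring. A minimal NumPy sketch of the standard Harrell-style estimator (names illustrative):

```python
import numpy as np

def concordance_index(risk, time, censor):
    """Fraction of comparable pairs (i, j) with correctly ordered risk predictions.

    risk: predicted risk scores; time: event/censoring times;
    censor: 1 if censored, 0 if death observed. A pair is comparable only
    when the patient with the earlier time had an observed death.
    """
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        if censor[i] == 1:          # a censored patient cannot anchor a comparison
            continue
        for j in range(n):
            if time[i] < time[j]:   # i's death precedes j's event/censoring time
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / max(comparable, 1)
```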
**Comparison.** From the results in Table 1, we can observe that PIBD achieves the best overall performance across the five cancer datasets. Compared with the unimodal methods (†), most multimodal methods (‡), including ours, show a higher overall C-index, indicating that the information from both modalities offers valuable perspectives and contributions to survival prediction. Note that among the multimodal methods, the proposed PIBD achieves superior performance on 4 out of 5 benchmarks and outperforms the second-best method by 1.6% in overall C-index, revealing the importance of addressing intra-modal and inter-modal redundancy. Then, from the comparison with the IB-based methods, our method achieves superior performance on all cancer datasets, with 0.5%-4.9% performance gains. PIBD, which fully considers the characteristics of bag structure under weak supervision and is designed for multimodal cancer survival prediction, demonstrates its superiority.

---
\(^2\)https://portal.gdc.cancer.gov/

Table 1: C-index (mean ± std) over five cancer datasets. g. and h. refer to genomic modality and histological modality, respectively. The best results and the second-best results are highlighted in **bold** and in underline. A method marked with the subscript † falls into the unimodal group, ‡ into the multimodal group, and ⋆ into the information theory-based group.

| Model | Modality | BRCA (N=869) | BLCA (N=359) | COADREAD (N=296) | HNSC (N=392) | STAD (N=317) | Overall |
|----------------|----------|--------------|--------------|------------------|--------------|--------------|---------|
| †MLP | g. | 0.622 ± 0.079 | 0.530 ± 0.077 | 0.712 ± 0.114 | 0.520 ± 0.064 | 0.497 ± 0.031 | 0.576 |
| †SNN | g. | 0.621 ± 0.073 | 0.521 ± 0.070 | 0.711 ± 0.162 | 0.514 ± 0.076 | 0.483 ± 0.047 | 0.570 |
| †SNNTrans | g. | 0.679 ± 0.053 | 0.583 ± 0.060 | 0.739 ± 0.124 | 0.570 ± 0.035 | 0.547 ± 0.041 | 0.622 |
| †ABMIL | h. | 0.672 ± 0.051 | 0.624 ± 0.059 | 0.730 ± 0.151 | 0.624 ± 0.042 | 0.636 ± 0.043 | 0.657 |
| †AMISL | h. | 0.681 ± 0.036 | 0.627 ± 0.032 | 0.710 ± 0.091 | 0.607 ± 0.048 | 0.553 ± 0.012 | 0.636 |
| †TransMIL | h. | 0.663 ± 0.053 | 0.617 ± 0.045 | 0.747 ± 0.151 | 0.619 ± 0.062 | 0.660 ± 0.072 | 0.661 |
| †CLAM-SB | h. | 0.675 ± 0.074 | 0.643 ± 0.044 | 0.717 ± 0.172 | 0.630 ± 0.048 | 0.616 ± 0.078 | 0.656 |
| †CLAM-MB | h. | 0.696 ± 0.098 | 0.623 ± 0.045 | 0.721 ± 0.159 | 0.620 ± 0.034 | 0.648 ± 0.050 | 0.662 |
| ‡SNNTrans+CLAM-MB | g.+h. | 0.699 ± 0.064 | 0.625 ± 0.060 | 0.716 ± 0.160 | 0.638 ± 0.066 | 0.629 ± 0.065 | 0.661 |
| ‡Porpoise(Cat) | g.+h. | 0.668 ± 0.070 | 0.617 ± 0.056 | 0.738 ± 0.151 | 0.614 ± 0.058 | 0.660 ± 0.106 | 0.660 |
| ‡Porpoise(KP) | g.+h. | 0.691 ± 0.038 | 0.619 ± 0.055 | 0.721 ± 0.157 | 0.630 ± 0.040 | 0.661 ± 0.085 | 0.664 |
| ‡MCAT(Cat) | g.+h. | 0.685 ± 0.109 | 0.640 ± 0.076 | 0.724 ± 0.137 | 0.564 ± 0.840 | 0.625 ± 0.118 | 0.647 |
| ‡MCAT(KP) | g.+h. | 0.727 ± 0.027 | 0.644 ± 0.062 | 0.709 ± 0.162 | 0.618 ± 0.093 | 0.643 ± 0.075 | 0.668 |
| ‡MOTCat | g.+h. | 0.727 ± 0.027 | 0.659 ± 0.069 | 0.742 ± 0.124 | 0.656 ± 0.041 | 0.621 ± 0.065 | 0.681 |
| ‡SurvPath | g.+h. | 0.724 ± 0.094 | 0.660 ± 0.054 | 0.758 ± 0.143 | 0.606 ± 0.080 | 0.667 ± 0.035 | 0.683 |
| ⋆CLAM-SB-FT | h. | 0.606 ± 0.110 | 0.633 ± 0.065 | 0.725 ± 0.150 | 0.620 ± 0.084 | 0.654 ± 0.051 | 0.648 |
| ⋆MIB | g.+h. | 0.602 ± 0.112 | 0.573 ± 0.036 | 0.711 ± 0.182 | 0.555 ± 0.055 | 0.588 ± 0.057 | 0.606 |
| ⋆DeepIMV | g.+h. | 0.659 ± 0.089 | 0.638 ± 0.054 | 0.749 ± 0.145 | 0.604 ± 0.061 | 0.597 ± 0.047 | 0.649 |
| ⋆L-MIB | g.+h. | 0.687 ± 0.071 | 0.662 ± 0.093 | 0.720 ± 0.167 | 0.615 ± 0.085 | 0.634 ± 0.060 | 0.664 |
| ⋆PIBD | g.+h. | 0.736 ± 0.072 | 0.667 ± 0.061 | 0.768 ± 0.124 | 0.640 ± 0.039 | 0.684 ± 0.035 | 0.699 |

Figure 3: Kaplan-Meier curves of predicted high-risk (red) and low-risk (green) groups.
A P-value < 0.05 indicates statistical significance, and the shaded regions represent the confidence intervals. The median survival months are reported in the format “high-risk: mean (std) / low-risk: mean (std)”.

**Kaplan-Meier analysis.** We further evaluate our method using statistical analysis; the Kaplan-Meier curves are presented in Figure 3. Patients are separated into high-risk and low-risk groups based on predicted risk scores, with the median value of each validation set serving as the cut-off. Subsequently, we utilize the log-rank test to compute p-values, which assess the statistical significance of the differences between these groups, and the median survival months are also reported for each group. Our approach demonstrates significantly improved discrimination between the two groups when compared to the second-best method, SurvPath. This effect is particularly pronounced on the BRCA, COADREAD, and HNSC datasets, with substantial margins.

4.3 ABLATION STUDY

**Component validation.** In Table 2, we ablate the designs mentioned in Sections 3.2 and 3.3, which are proposed for “intra-modal redundancy” and “inter-modal redundancy”. For ablating PIB, we established two baselines: one involves direct average pooling (AP) on the original features, and the other employs a non-disentangled TransMIL encoder as a strong baseline. We incorporate PIB into both baselines to assess the effectiveness of the prototypical features selected by PIB. As shown in the first four rows of Table 2, the addition of PIB outperforms the baselines with a higher C-index. This suggests that learning multiple distinctive prototypes in PIB and employing them to filter task-related features can effectively mitigate redundant features within each modality. For ablating PID, we conduct a comparison between our PIBD and the baseline using the non-disentangled TransMIL with PIB.

Table 2: Ablation study assessing C-index (mean ± std).

| Variants | PIB | PID | BRCA | BLCA | COADREAD | HNSC | STAD | Overall |
|--------------|-----|-----|--------|--------|----------|--------|--------|---------|
| AP | | | 0.684 ± 0.044 | 0.619 ± 0.090 | 0.713 ± 0.161 | 0.567 ± 0.073 | 0.609 ± 0.048 | 0.638 |
| PIB(AP) | ✓ | | 0.705 ± 0.108 | 0.593 ± 0.038 | 0.753 ± 0.143 | 0.623 ± 0.107 | 0.613 ± 0.071 | 0.657 |
| TransMIL | | | 0.672 ± 0.088 | 0.636 ± 0.059 | 0.750 ± 0.133 | 0.591 ± 0.080 | 0.662 ± 0.090 | 0.662 |
| PIB(TransMIL)| ✓ | | 0.696 ± 0.069 | 0.648 ± 0.074 | 0.757 ± 0.176 | 0.615 ± 0.062 | 0.643 ± 0.074 | 0.672 |
| PIBD | ✓ | ✓ | 0.736 ± 0.072 | 0.667 ± 0.061 | 0.768 ± 0.124 | 0.640 ± 0.039 | 0.684 ± 0.035 | 0.699 |

Table 3: Interventions in PIB. We conduct interventions by either removing the positive prototype or randomly deleting one of the negative prototypes.

| Intervention | BLCA | COADREAD | STAD |
|--------------|--------|----------|--------|
| Positive | 0.401 ± 0.086 | 0.471 ± 0.196 | 0.384 ± 0.110 |
| Negative | 0.645 ± 0.067 | 0.731 ± 0.106 | 0.672 ± 0.055 |
| w/o Intervention | 0.667 ± 0.061 | 0.768 ± 0.124 | 0.684 ± 0.035 |

Figure 4: Visualization of prototypes.
The last two rows demonstrate that disentangling shared and specific information from multimodal data effectively eliminates inter-modal redundancy, preventing the loss of modality-specific information during the fusion process and significantly enhancing the model’s performance. Moreover, we conduct more quantitative studies on parameter settings, presented in Appendix C.2.

**Interpretability of PIB.** To validate that the learned prototypes in PIB have modeled discriminative underlying distributions for different risk bands, we randomly sample each prototype 2,000 times. Subsequently, we reduce the obtained high-dimensional vectors to a two-dimensional plane using t-SNE (Van der Maaten & Hinton, 2008). As illustrated in Figure 4, the distributions exhibit excellent separability. Furthermore, inspired by the intervention in (Sarkar et al., 2022), we conduct interventions during the inference process, shown in Table 3, and the results demonstrate a significant disparity. It can be seen that interventions on the positive prototypes lead to a dramatic decrease in the C-index (all below 0.5), signifying a complete loss of predictive ability. Intervening on the positive prototypes further passes a wrong guidance signal, with an incorrect prototypical distribution, to the following disentanglement module PID, leading to worse performance. Conversely, when randomly removing a negative prototype, there is only a slight decline in the C-index, which further underscores the effective modeling of discriminative risk-level distributions in PIB. Visualizations of the similarity scores for both modalities are presented in Appendix D.

5 CONCLUSION

In this work, we explore multimodal cancer survival prediction inspired by information theory and propose a new framework called PIBD aimed at addressing both the “intra-modal redundancy” and “inter-modal redundancy” challenges. First, we propose a Prototypical Information Bottleneck (PIB) that reduces redundancy while preserving task-related information. PIB models prototypes of various risk bands, allowing us to select discriminative features from massive instances and alleviating “intra-modal redundancy”. Furthermore, to address “inter-modal redundancy”, we propose a Prototypical Information Disentanglement (PID) to decouple independent modality-common and modality-specific features with the guidance of the joint prototypical distribution. These compact features offer distinct perspectives and knowledge, effectively enhancing the network’s performance. Moreover, to handle the high-dimensional computational challenges inherent in our task, PIB models prototypes approximating a bunch of instances by maximizing the cosine similarities within true labels. During this approximation, the choice of an appropriate similarity metric can contribute to better aligning spatial distributions, which warrants further investigation in future research.

6 ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (No. 62202403), the Shenzhen Science and Technology Innovation Committee Funding (Project No. SGDX20210823103201011), and the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. R6003-22 and C4024-22GF).

REFERENCES

Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. In *International Conference on Learning Representations*, 2016.
Gabriele Campanella, Matthew G Hanna, Luke Geneslaw, Allen Miraflor, Vitor Werneck Krauss Silva, Klaus J Busam, Edi Brogi, Victor E Reuter, David S Klimstra, and Thomas J Fuchs. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. *Nature medicine*, 25(8):1301–1309, 2019. Yanshuai Cao and David J Fleet. Generalized product of experts for automatic and principled fusion of gaussian process predictions. *arXiv preprint arXiv:1410.7827*, 2014. Richard J Chen, Ming Y Lu, Jingwen Wang, Drew FK Williamson, Scott J Rodig, Neal I Lindeman, and Faisal Mahmood. Pathomic fusion: an integrated framework for fusing histopathology and genomic features for cancer diagnosis and prognosis. *IEEE Transactions on Medical Imaging*, 41(4):757–770, 2020. Richard J Chen, Ming Y Lu, Wei-Hung Weng, Tiffany Y Chen, Drew FK Williamson, Trevor Manz, Maha Shady, and Faisal Mahmood. Multimodal co-attention transformer for survival prediction in gigapixel whole slide images. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 4015–4025, 2021. Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 16144–16155, 2022a. Richard J Chen, Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Jana Lipkova, Zahra Noor, Muhammad Shaban, Maha Shady, Mane Williams, Bumjin Joo, et al. Pan-cancer integrative histology-genomic analysis via multimodal deep learning. *Cancer Cell*, 40(8):865–878, 2022b. Yuanyuan Chen, Yongsheng Pan, Yong Xia, and Yixuan Yuan. Disentangle first, then distill: A unified framework for missing modality imputation and alzheimer’s disease diagnosis. *IEEE Transactions on Medical Imaging*, 2023. Mingyuan Cheng, Xinru Liao, Quan Liu, Bin Ma, Jian Xu, and Bo Zheng. Learning disentangled representations for counterfactual regression via mutual information minimization. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 1802–1806, 2022. Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, and Lawrence Carin. Club: A contrastive log-ratio upper bound of mutual information. In *International conference on machine learning*, pp. 1779–1788. PMLR, 2020. Yann Christinat and Wilhelm Krek. Integrated genomic analysis identifies subclasses and prognosis signatures of kidney cancer. *Oncotarget*, 6, 03 2015. doi: 10.18632/oncotarget.3294. David R Cox. Regression models and life-tables. *Journal of the Royal Statistical Society: Series B (Methodological)*, 34(2):187–202, 1972. David R Cox. Partial likelihood. *Biometrika*, 62(2):269–276, 1975. Yinglong Dai, Zheng Yan, Jiangchang Cheng, Xiaojun Duan, and Guojun Wang. Analysis of multimodal data fusion from an information theory perspective. *Information Sciences*, 623:164–183, 2023.
9NKRfhKgzI
I'm having difficulty understanding the reason for conditioning on u when formalizing Goal 2 and Goal 3. Since the first goal's objective already minimizes I(z, u), why does the method need to maximize/minimize the conditional mutual information instead of just the mutual information?
Adversarially Robust and Privacy-Preserving Representation Learning via Information Theory

Anonymous authors
Paper under double-blind review

Abstract

Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference). Existing defenses propose different strategies to individually defend against the security attack or the privacy attack, and combining them yields suboptimal performance. In this paper, we aim to mitigate both the security and privacy attacks while simultaneously maintaining the utility of the learning task. We achieve this goal by proposing a representation learning framework based on information theory, i.e., learning information-theoretic representations that are robust to adversarial examples and attribute inference adversaries, as well as effective for learning tasks. We also derive novel theoretical results, e.g., the inherent tradeoff between adversarial robustness/utility and attribute privacy, and guaranteed attribute privacy leakage against attribute inference adversaries.

1 Introduction

Machine learning (ML) has achieved remarkable breakthroughs in many research fields, including but not limited to computer vision, speech, and natural language processing. However, recent works show that current ML designs are vulnerable to both security and privacy attacks, e.g., adversarial examples and private attribute inference. Adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015; Carlini & Wagner, 2017) are typically generated by carefully adding imperceptible perturbations to natural data, and they remain a serious problem that prevents the deployment of modern ML models in safety-critical applications such as autonomous driving (Eykholt et al., 2018) and medical imaging (Bortsova et al., 2021). In addition, many real-world applications involve data that contain sensitive/private information, such as race, gender, income, and age. Applying ML to these applications poses a great challenge, since private attributes can often be accurately inferred (Jia et al., 2017; Aono et al., 2017; Melis et al., 2019).

To mitigate adversarial examples and attribute inference attacks, many defenses have been proposed, but they mainly follow two separate lines and use different techniques. For instance, the state-of-the-art defenses against adversarial examples are based on adversarial training (Madry et al., 2018; Zhang et al., 2019; Wang et al., 2019), which solves a min-max optimization problem. In contrast, the representative defenses against inference attacks are based on differential privacy (Abadi et al., 2016), which is a statistical method (more details on defenses against adversarial examples and attribute inference attacks are in Section 2). Some works (Song et al., 2019b,a) show that adversarially robust models alone can even leak more private information (also verified in our Section 5.2). In addition, we observe that combining the state-of-the-art defenses against adversarial examples and attribute inference attacks produces suboptimal performance (see results in Section 5.3). In this paper, we focus on the following research questions: 1) Can we design an adversarially robust and attribute-privacy-protecting model that simultaneously maintains the utility of (unknown) downstream tasks? 2) Further, can we theoretically understand the relationships among adversarial robustness, utility, and attribute privacy?
To achieve this goal, we propose an information-theoretic defense framework through the lens of representation learning, termed ARPRL. Representation learning is very pertinent in today’s context given the rise of foundation/large ML models. In particular, instead of training large models from scratch, which requires huge computational resources and is time-consuming, sharing learnt representations saves the community substantial time and cost. Our ARPRL is partly inspired by Zhu et al. (2020); Zhou et al. (2022), which show that defenses based on adversarially robust representations outperform the de facto adversarial training based methods; ours is the first work to non-trivially generalize this idea to learning data representations that are robust to both adversarial examples and attribute inference adversaries. More specifically, we formulate learning representations via three mutual information (MI) objectives: one for adversarial robustness, one for attribute privacy protection, and one for utility preservation. We point out that our ARPRL is task-agnostic, meaning the learnt representations do not need to know the target task at hand and can be used for any downstream task. However, directly solving the MI objectives is challenging, as calculating the MI between two arbitrary variables is often infeasible (Peng et al., 2019). To address this, we are motivated by MI neural estimation (Alemi et al., 2017; Belghazi et al., 2018), which converts the intractable MI calculations into tractable variational MI bounds. We then parameterize each bound with a neural network, and finally train the neural networks to approximate the true MI. Based on our designed MI objectives, we can derive novel theoretical results. For instance, we obtain an inherent tradeoff between adversarial robustness and attribute privacy, as well as between utility and attribute privacy. These tradeoffs are also verified through experimental evaluations on multiple benchmark datasets. Our key contributions can be summarized below:

- This is the first work to advocate learning both robust and privacy-preserving ML models from the representation learning perspective.
- We formulate learning adversarially robust and privacy-preserving representations via information theory—an elegant yet powerful tool.
- Under the information-theoretic framework, we derive novel theoretical results: the tradeoff among adversarial robustness, utility, and attribute privacy, and guaranteed attribute privacy leakage.

2 RELATED WORK

Defenses against adversarial examples. Many efforts have been made to improve the adversarial robustness of ML models against adversarial examples (Goodfellow et al., 2015; Kurakin et al., 2017; Pang et al., 2019; Wong & Kolter, 2018; Mao et al., 2019; Cohen et al., 2019; Zhai et al., 2020; Wong et al., 2020). Among them, adversarial training based defenses (Madry et al., 2018; Zhang et al., 2019; Wang et al., 2019; Dong et al., 2020; Zhou et al., 2021) have become the mainstream defense and achieved state-of-the-art effectiveness. At a high level, adversarial training augments training data with adversarial examples (generated, e.g., via the L-BFGS attack (Szegedy et al., 2013), the FGSM attack (Goodfellow et al., 2015), the CW attack (Carlini & Wagner, 2017), the PGD attack (Madry et al., 2018), or AutoAttack (Croce & Hein, 2020)) and uses a min-max formulation to train the target ML model (Madry et al., 2018). However, as pointed out by Zhu et al. (2020); Zhou et al.
(2022), the dependence between the output of the target model and the input/adversarial examples has not been well studied, which means the ability of adversarial training has not been fully exploited. To improve this, Zhu et al. (2020); Zhou et al. (2022) propose to learn adversarially robust representations via mutual information, which is shown to outperform the state-of-the-art adversarial training based defenses. Our ARPRL is inspired by them while being a nontrivial generalization that learns both robust and privacy-preserving representations.

Defenses against inference attacks. Existing privacy-preserving methods against inference attacks can be roughly classified into adversarial learning (Oh et al., 2017; Wu et al., 2018; Pittaluga et al., 2019; Liu et al., 2019), differential privacy (Shokri & Shmatikov, 2015; Abadi et al., 2016), and information obfuscation (Bertran et al., 2019; Hamm, 2017; Osia et al., 2020b; Roy & Boddeti, 2019; Zhao et al., 2020; Osia et al., 2020a; Li et al., 2021). Adversarial learning methods are mainly inspired by GANs (Goodfellow et al., 2014); they learn obfuscated features from the training data so that private information cannot be inferred from a learnt model. However, these methods need to know the primary task in advance and lack formal privacy guarantees. Differential privacy methods have formal privacy guarantees, but they incur high utility losses. Information obfuscation methods aim to maximize the utility under the constraint of bounding the information leakage, but almost all of them are empirical and task-dependent. The only exception is Zhao et al. (2020), which has guaranteed information leakage; however, this work requires stronger assumptions (e.g., a conditional independence assumption between variables). Our work can be seen as a combination of information obfuscation and adversarial learning to learn both robust and privacy-preserving representations. It provides privacy leakage guarantees as well as inherent tradeoffs between robustness/utility and privacy.

3 PRELIMINARIES AND PROBLEM SETUP

Notations. We use $s$, $\mathbf{s}$, and $\mathcal{S}$ to denote a (random) scalar, vector, and space, respectively. Given a data sample $x \in \mathcal{X}$, we denote its label as $y \in \mathcal{Y}$ and its private attribute as $u \in \mathcal{U}$, where $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{U}$ are the input data space, label space, and attribute space, respectively. An $l_p$ ball centered at a data sample $x$ with radius $\epsilon$ is defined as $\mathcal{B}_p(x, \epsilon) = \{x' \in \mathcal{X} : \|x' - x\|_p \leq \epsilon\}$. The joint distribution of $x$, $y$, and $u$ is denoted as $\mathcal{D}$. We further denote $f : \mathcal{X} \rightarrow \mathcal{Z}$ as the representation learner that maps $x \in \mathcal{X}$ to its representation $z \in \mathcal{Z}$, where $\mathcal{Z}$ is the representation space. Moreover, we let $C : \mathcal{Z} \rightarrow \mathcal{Y}$ be the primary task classifier, which predicts the data label $y$ based on the learnt data representation $z$, and $A : \mathcal{Z} \rightarrow \mathcal{U}$ be the attribute inference classifier, which infers the private attribute $u$ based on the representation $z$. The composition of two functions $f$ and $g$ is denoted as $(g \circ f)(x) = g(f(x))$. We use $[m]$ to denote the set $\{1, 2, \cdots, m\}$ and $|\cdot|$ to denote its cardinality.

Mutual information (MI) and entropy.
In information theory, MI is a measure of the shared information between two random variables, and it offers a quantifiable metric for the amount of information leakage about one variable given the other. Let \((x, z)\) be a pair of random variables with values over the space \(\mathcal{X} \times \mathcal{Z}\). Then the MI of \(x\) and \(z\) is defined as

\[
I(x; z) = \int_{\mathcal{Z}} \int_{\mathcal{X}} p(x, z) \log \frac{p(x, z)}{p(x)p(z)} dx\, dz. \tag{1}
\]

Intuitively, \(I(x; z)\) tells us how well one can predict \(z\) from \(x\) (and \(x\) from \(z\), since it is symmetric). By definition, \(I(x; z) = 0\) if \(x\) and \(z\) are independent, i.e., \(x \perp z\). On the other hand, when \(x\) and \(z\) are identical, \(I(x; x) = H(x) = \int_{\mathcal{X}} -p(x) \log p(x) dx\), which is the entropy of \(x\).

**Adversarial example/perturbation, adversarial risk, and representation vulnerability** (Zhu et al., 2020). Let \(\mathcal{X}\) and \(\mathcal{Y}\) be the data space and label space, respectively, and \(\epsilon\) the \(l_p\) perturbation budget. For any classifier \(C : \mathcal{X} \rightarrow \mathcal{Y}\), the adversarial risk of \(C\) with respect to \(\epsilon\) is defined as:

\[
\text{AdvRisk}_\epsilon(C) = \Pr[\exists x' \in \mathcal{B}_p(x, \epsilon), \text{ s.t. } C(x') \neq y] = \sup_{x' \in \mathcal{B}_p(x, \epsilon)} \Pr[C(x') \neq y], \tag{2}
\]

where \(x'\) is called an adversarial example and \(\delta = x' - x\) is the adversarial perturbation with \(\|\delta\|_p \leq \epsilon\). Formally, adversarial risk captures the vulnerability of a classifier to adversarial perturbations. When \(\epsilon = 0\), adversarial risk reduces to the standard risk, i.e., \(\text{AdvRisk}_0(C) = \text{Risk}(C) = \Pr(C(x) \neq y)\).

Motivated by the empirical and theoretical difficulties of robust learning with adversarial examples, Zhu et al. (2020); Zhou et al. (2022) target learning adversarially robust representations based on MI. They introduced the term representation vulnerability: given a representation learner \(f : \mathcal{X} \rightarrow \mathcal{Z}\) and an \(l_p\) perturbation budget \(\epsilon\), the representation vulnerability of \(f\) with respect to \(\epsilon\) is defined as

\[
\text{RV}_\epsilon(f) = \max_{x' \in \mathcal{B}_p(x, \epsilon)} [I(x; z) - I(x'; z')], \tag{3}
\]

where \(z = f(x)\) and \(z' = f(x')\) are the learnt representations for \(x\) and \(x'\), respectively. We note that larger/smaller \(\text{RV}_\epsilon(f)\) values imply the representation is less/more robust to adversarial perturbations. Further, Zhu et al. (2020) linked adversarial robustness and representation vulnerability through the following theorem:

**Theorem 1** (Zhu et al. (2020)). Consider all the primary task classifiers \(\mathcal{C} = \{C : \mathcal{Z} \rightarrow \mathcal{Y}\}\). Given the perturbation budget \(\epsilon\), for any representation learner \(f : \mathcal{X} \rightarrow \mathcal{Z}\),

\[
\inf_{C \in \mathcal{C}} \text{AdvRisk}_\epsilon(C \circ f) \geq 1 - \left(I(x; z) - \text{RV}_\epsilon(f) + \log 2\right) / \log |\mathcal{Y}|. \tag{4}
\]

The theorem states that a smaller representation vulnerability implies a smaller adversarial risk, which means better adversarial robustness, and vice versa. Finally, \(f\) is called \((\epsilon, \tau)\)-robust if \(\text{RV}_\epsilon(f) \leq \tau\).

**Attribute inference attacks and advantage.** Without loss of generality, we assume the attribute space \(\mathcal{U}\) is binary. Let \(\mathcal{A}\) be the set of all binary attribute inference classifiers that take data representations \(z = f(x)\) as input and infer the private attribute \(u\), i.e., \(\mathcal{A} = \{A : \mathcal{Z} \rightarrow \mathcal{U} = \{0, 1\}\}\).
Then, we formally define the attribute inference advantage of the worst-case attribute inference adversary with respect to the joint distribution \(\mathcal{D} = \{x, y, u\}\) as:

\[
\text{Adv}_\mathcal{D}(\mathcal{A}) = \max_{A \in \mathcal{A}} |\Pr_\mathcal{D}(A(z) = a|u = a) - \Pr_\mathcal{D}(A(z) = a|u = 1 - a)|, \quad \forall a \in \{0, 1\}. \tag{5}
\]

We can observe that if \(\text{Adv}_\mathcal{D}(\mathcal{A}) = 1\), an adversary can completely infer the private attribute from the learnt representations. In contrast, if \(\text{Adv}_\mathcal{D}(\mathcal{A}) = 0\), an adversary obtains random-guessing inference performance. To protect the private attribute, we aim to obtain a small \(\text{Adv}_\mathcal{D}\).

**Threat model and problem setup.** We focus on a classification task under the adversarial setting. We consider an attacker whose goal is to perform both attribute inference and adversarial example attacks. We assume the attacker does not have access to the internal representation learner (i.e., \(f\)), but instead can obtain and arbitrarily use the shared data representations.\(^1\) The attacker is also assumed to have some background knowledge (e.g., it may even know the underlying data distribution). As the defense is task-agnostic, the defender does not know the learning task. Our goal is to learn task-agnostic representations that are adversarially robust, protect attribute privacy, and maintain the utility of (unknown) downstream tasks. Formally, given \(\{x, y, u\}\) from an underlying distribution \(\mathcal{D}\) and a perturbation budget \(\epsilon\), we aim to obtain a representation learner \(f\) such that the representation vulnerability \(\text{RV}_\epsilon(f)\) is small, the attribute inference advantage \(\text{Adv}_\mathcal{D}(\mathcal{A})\) is small, and the risk \(\text{Risk}(C)\) is small.

---

\(^1\)This is practical when the representation learner is deployed as an API: end-users obtain representations by querying the API with their data, but do not know the details of the representation learner. Note that many companies have deployed representation learners as APIs to provide machine learning services, e.g., Amazon’s AWS Marketplace (AWS Marketplace), OpenAI’s Embedding API (chat), and Clarifai’s General Embedding API (Clarifai).

4 DESIGN OF ARPRL

In this section, we design our adversarially robust and privacy-preserving representation learning method, termed ARPRL, inspired by information theory.

4.1 FORMULATING ARPRL VIA MI OBJECTIVES

Given a data sample \( x \) with private attribute \( u \) sampled from a distribution \( \mathcal{D} \), and a perturbation budget \( \epsilon \), our purpose is to convert \( x \) into a representation \( z = f(x) \) that satisfies the following three goals:

- **Goal 1: Privacy protection.** \( z \) contains as little information as possible about the private attribute \( u \). Ideally, when \( z \) includes no information about \( u \), i.e., \( z \perp u \), it is impossible to infer \( u \) from \( z \).
- **Goal 2: Utility preservation.** \( z \) should be useful for many downstream tasks. To achieve this goal, we require that \( z \) include as much information about the data \( x \) as possible, while excluding the private attribute \( u \). Ideally, when \( z \) retains the most information about \( x \), a model trained on \( z \) will have the same performance as a model trained on the raw \( x \) (though we do not know the downstream task), thus preserving utility.
- **Goal 3: Adversarial robustness.** \( z \) should not be sensitive to adversarial perturbations of the data \( x \), implying a small representation vulnerability.
We propose to formalize the above goals via MI. Formally, we quantify the goals as below:

\[
\text{Formalizing Goal 1:} \quad \min_f I(z; u); \tag{6}
\]

\[
\text{Formalizing Goal 2:} \quad \max_f I(x; z|u); \tag{7}
\]

\[
\text{Formalizing Goal 3:} \quad \min_f \{ RV_\epsilon(f|u) = \max_{x' \in \mathcal{B}_p(x, \epsilon)} [I(x; z|u) - I(x'; z'|u)] \}. \tag{8}
\]

where 1) we minimize \( I(z; u) \) to maximally reduce the correlation between \( z \) and the private attribute \( u \); 2) \( I(x; z|u) \) is the MI between \( x \) and \( z \) given \( u \); we maximize this MI to keep as much of the raw information in \( x \) as possible in \( z \) while removing the information \( x \) contains about the private \( u \); and 3) \( RV_\epsilon(f|u) \) is the representation vulnerability of \( f \) conditioned on \( u \) with respect to \( \epsilon \); minimizing it learns adversarially robust representations that exclude information about the private \( u \). Note that the \( I(x; z|u) \) in Equation (8) can be merged with that in Equation (7). Hence, Equation (8) can be reduced to the min-max optimization problem below:

\[
\max_f \min_{x' \in \mathcal{B}_p(x, \epsilon)} I(x'; z'|u). \tag{9}
\]

**Objective function of ARPRL.** Combining the above equations, we obtain the MI objective function for learning adversarially robust and privacy-preserving representations:

\[
\max_f [-\alpha I(z; u) + \beta \min_{x' \in \mathcal{B}_p(x, \epsilon)} I(x'; z'|u) + (1 - \alpha - \beta) I(x; z|u)], \tag{10}
\]

where \( \alpha, \beta \in [0, 1] \) are tradeoff hyperparameters. That is, a larger/smaller \( \alpha \) indicates stronger/weaker attribute privacy protection, and a larger/smaller \( \beta \) indicates stronger/weaker robustness against adversarial perturbations.

4.2 ESTIMATING MI VIA TRACTABLE VARIATIONAL BOUNDS

The key challenge in solving Equation (10) is that calculating the MI between two arbitrary random variables is likely to be infeasible (Peng et al., 2019). To address this, we are inspired by existing MI neural estimation methods (Alemi et al., 2017; Belghazi et al., 2018; Oord et al., 2018; Poole et al., 2019; Hjelm et al., 2019; Cheng et al., 2020), which convert the intractable exact MI calculations into tractable variational MI bounds. We then parameterize each variational MI bound with a neural network, and train the neural networks to approximate the true MI. We clarify that we do not design novel MI neural estimators, but adopt existing ones in service of our customized MI terms for learning adversarially robust and privacy-preserving representations.

**Minimizing the upper-bound MI in Equation (6) for privacy protection.** We propose to adapt the variational upper bound CLUB proposed in (Cheng et al., 2020). Specifically,

\[
I(z; u) \leq I_{vCLUB}(z; u) = \mathbb{E}_{p(z,u)}[\log q_\Psi(u|z)] - \mathbb{E}_{p(z)p(u)}[\log q_\Psi(u|z)], \tag{11}
\]

where \( q_\Psi(u|z) \) is an auxiliary posterior distribution of \( p(u|z) \) that needs to satisfy the condition:

\[
KL(p(z,u)||q_\Psi(z,u)) \leq KL(p(z)p(u)||q_\Psi(z,u)). \tag{12}
\]

To achieve this, we need to minimize:

\[
\min_\Psi KL(p(z,u)||q_\Psi(z,u)) = \min_\Psi KL(p(u|z)||q_\Psi(u|z)) = \min_\Psi \mathbb{E}_{p(z,u)}[\log p(u|z)] - \mathbb{E}_{p(z,u)}[\log q_\Psi(u|z)] \iff \max_\Psi \mathbb{E}_{p(z,u)}[\log q_\Psi(u|z)],
\]

where we use the fact that \( \mathbb{E}_{p(z,u)}[\log p(u|z)] \) is irrelevant to \( \Psi \).
Finally, our Goal 1 for privacy protection is reformulated as solving the min-max objective function:
$$\min_{f} \min_{\Psi} I_{vCLUB}(z; u) \iff \min_{f} \max_{\Psi} \mathbb{E}_{p(z,u)}[\log q_{\Psi}(u|z)].$$ (13)

Remark. We note that Equation (13) can be interpreted as an adversarial game between: (1) an adversary $q_{\Psi}$ (i.e., an attribute inference classifier) who aims to infer the private attribute $u$ from the representation $z$; and (2) a defender (i.e., the representation learner $f$) who aims to protect the private attribute $u$ from being inferred.

Maximizing the lower-bound MI in Equation (7) for utility preservation. We adopt the MI estimator proposed in Nowozin et al. (2016) to estimate the lower bound of the MI in Equation (7). Specifically,
$$I(x; z|u) = H(x|u) - H(x|z, u)$$
$$= H(x|u) + \mathbb{E}_{p(x,z,u)}[\log p(x|z, u)]$$
$$= H(x|u) + \mathbb{E}_{p(x,z,u)}[\log q_{\Omega}(x|z, u)] + \mathbb{E}_{p(z,u)}[KL(p(\cdot|z, u)||q_{\Omega}(\cdot|z, u))]$$
$$\geq H(x|u) + \mathbb{E}_{p(x,z,u)}[\log q_{\Omega}(x|z, u)],$$ (14)
where $q_{\Omega}$ is an arbitrary auxiliary posterior distribution that aims to maintain the information of $x$ in the representation $z$ conditioned on the private $u$. Since $H(x|u)$ is a constant (it does not depend on $f$), our Goal 2 can be rewritten as the below max-max objective function:
$$\max_{f} I(x; z|u) \iff \max_{f,\Omega} \mathbb{E}_{p(x,z,u)}[\log q_{\Omega}(x|z, u)].$$ (15)

Remark. We note that Equation (15) can be interpreted as a cooperative game between the representation learner $f$ and $q_{\Omega}$, which aim to preserve the utility collaboratively.

Maximizing the worst-case MI in Equation (9) for adversarial robustness. To solve Equation (9), one needs to first find the perturbed data $x' \in B_p(x, \epsilon)$ that minimizes the MI $I(x'; z'|u)$, and then maximize this MI by training the representation learner $f$. As claimed in Zhu et al. (2020); Zhou et al. (2022), minimizing the MI on the worst-case perturbed data is computationally challenging. An approximate solution (Zhou et al., 2022) first performs a strong white-box attack, e.g., the projected gradient descent (PGD) attack (Madry et al., 2018), to generate a set of adversarial examples, and then selects the adversarial example that has the smallest MI. Assume the strongest adversarial example is $x^a = \arg\min_{x' \in B_p(x, \epsilon)} I(x'; z'|u)$. The next step is to maximize the MI $\max_f I(x^a; z^a|u)$. Zhu et al. (2020) used the MI Neural Estimation (MINE) method (Belghazi et al., 2018) to estimate this MI. Specifically,
$$I(x^a; z^a|u) \geq I_{\Lambda}(x^a; z^a|u) = \mathbb{E}_{p(x^a,z^a,u)}[t_{\Lambda}(x^a, z^a, u)] - \log \mathbb{E}_{p(x^a)p(z^a)p(u)}[\exp(t_{\Lambda}(x^a, z^a, u))],$$ (16)
where $t_{\Lambda}: X \times Z \times \{0, 1\} \rightarrow \mathbb{R}$ can be any family of neural networks parameterized with $\Lambda$. More details about calculating this MI are given in Section 4.3.

Objective function of ARPRL. By using the above MI bounds, the objective function of ARPRL is as follows:
$$\max_f (-\alpha \max_{\Psi} \mathbb{E}_{p(x,u)}[\log q_{\Psi}(u|f(x))] + \beta \max_{\Lambda} I_{\Lambda}(x^a; z^a|u) + (1 - \alpha - \beta) \max_{\Omega} \mathbb{E}_{p(x,u)}[\log q_{\Omega}(x|f(x), u)]),$$ (17)
where $\alpha, \beta \in [0, 1]$ trade off privacy against utility and robustness against utility, respectively, as discussed after Equation (10).
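For concreteness, a minimal PyTorch sketch of the conditional MINE bound in Equation (16) is shown below. The statistics network \(t_\Lambda\) is a simple MLP here (an assumption), and product-of-marginals samples are approximated by independently shuffling \(z^a\) and \(u\) within the batch.

```python
import math
import torch
import torch.nn as nn

class MINEConditional(nn.Module):
    """MINE lower bound on I(x^a; z^a | u) from Eq. (16)."""
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 128):
        super().__init__()
        # t_lambda over (x^a, z^a, u); the MLP architecture is an assumption.
        self.t = nn.Sequential(nn.Linear(x_dim + z_dim + 1, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, x_adv, z_adv, u):
        u_col = u.float().unsqueeze(-1)
        joint = self.t(torch.cat([x_adv, z_adv, u_col], dim=-1)).squeeze(-1)
        # Shuffle z^a and u independently to mimic p(x^a) p(z^a) p(u).
        z_perm = z_adv[torch.randperm(z_adv.size(0))]
        u_perm = u_col[torch.randperm(u_col.size(0))]
        marg = self.t(torch.cat([x_adv, z_perm, u_perm], dim=-1)).squeeze(-1)
        # E[t] - log E[exp(t)], with the log-mean-exp computed stably.
        return joint.mean() - (torch.logsumexp(marg, dim=0) - math.log(marg.numel()))
```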
4.3 Implementation in Practice via Training Parameterized Neural Networks

In practice, Equation (17) is solved by training four neural networks, i.e., the representation learner $f_{\Theta}$ (parameterized by $\Theta$), the privacy-protection network $g_{\Psi}$ associated with the auxiliary distribution $q_{\Psi}$, the robustness network $t_{\Lambda}$ associated with the MINE estimator, and the utility-preservation network $h_{\Omega}$ associated with the auxiliary distribution $q_{\Omega}$, on a set of training data. Suppose we have collected a set of samples $\{(x_j, y_j, u_j)\}$ from the dataset distribution $D$. We can then approximate each term in Equation (17). Specifically, we approximate the expectation associated with the privacy-protection network $g_{\Psi}$ as
$$\mathbb{E}_{p(u,x)} \log q_{\Psi}(u|f(x)) \approx - \sum_j CE(u_j, g_{\Psi}(f(x_j))),$$
where $CE(\cdot)$ denotes the cross-entropy loss function.

Figure 1: Overview of ARPRL.

Further, we approximate the expectation associated with the utility-preservation network \( h_\Omega \) via the Jensen-Shannon (JS) MI estimator \(\hat{I}\) (Hjelm et al., 2019). That is,
\[
\mathbb{E}_{p(x,u)} \log q_\Omega(x|f(x), u) \approx \hat{I}^{(JS)}_{\Theta,\Omega}(x; f(x), u) = \mathbb{E}_{p(x,u)} [-\text{sp}(-h_\Omega(x, f(x), u))] - \mathbb{E}_{p(\tilde{x})p(x,u)} [\text{sp}(h_\Omega(\tilde{x}, f(x), u))],
\]
where \( \tilde{x} \) is an independent and random sample from the same distribution as \( x \), and the expectations can be replaced by the samples \( \{x_j, \tilde{x}_j, u_j\} \). \( \text{sp}(z) = \log(1 + \exp(z)) \) is the softplus function.

Regarding the MI related to the robustness network \( t_\Lambda \), we can adopt the methods proposed in Zhu et al. (2020); Zhou et al. (2022). For instance, Zhu et al. (2020) proposed to avoid searching the whole ball and to restrict the search space to the set of empirical distributions with, e.g., \( m \) samples: \( S_m(\epsilon) = \left\{ \frac{1}{m} \sum_{i=1}^m \delta_{x_i'} : x_i' \in B_p(x_i, \epsilon), \forall i \in [m] \right\} \). It then estimates the MI \( \min_{x' \in S_m(\epsilon)} I(x'; f(x')|u) \) as
\[
\min_{x'} I^{(m)}_\Lambda(x'; f(x')|u) \quad \text{s.t. } x' \in S_m(\epsilon), \tag{18}
\]
where \( I^{(m)}_\Lambda(x'; f(x')|u) = \frac{1}{m} \sum_{i=1}^m t_\Lambda(x_i', f(x_i'), u_i) - \log[\frac{1}{m} \sum_{i=1}^m e^{t_\Lambda(x_i', f(\tilde{x}_i'), u_i)}] \), and \( \{\tilde{x}_i'\} \) are independent and random samples that have the same distribution as \( \{x_i'\} \). Zhu et al. (2020) propose an alternating minimization algorithm to solve Equation (18). Specifically, it alternately performs gradient ascent on \( \Lambda \) to maximize \( I^{(m)}_\Lambda(x'; f(x')|u) \) given \( S_m(\epsilon) \), and then searches for the set of worst-case perturbations \( \{x_i' : i \in [m]\} \) given \( \Lambda \) based on, e.g., projected gradient descent. More details on solving Equation (18) can be found in Zhu et al. (2020). Figure 1 overviews our ARPRL, and Algorithm 1 in the Appendix details the training of ARPRL.
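To illustrate the utility term above, a minimal sketch of the JS estimator follows. Here \(h_\Omega\) is any scoring network over \((x, f(x), u)\) (its architecture is not specified by the paper and is left abstract), and the independent sample \(\tilde{x}\) is obtained by shuffling the batch.

```python
import torch
import torch.nn.functional as F

def js_mi_lower_bound(h_omega, x, z, u):
    """JS-based estimate of the utility term (Hjelm et al., 2019).

    h_omega(x, z, u) -> per-sample scores; z = f(x). The second expectation
    pairs z with an independent sample x_tilde (a shuffled copy of the batch).
    """
    x_tilde = x[torch.randperm(x.size(0))]
    pos = -F.softplus(-h_omega(x, z, u)).mean()       # E[-sp(-h(x, f(x), u))]
    neg = F.softplus(h_omega(x_tilde, z, u)).mean()   # E[sp(h(x~, f(x), u))]
    return pos - neg
```

Maximizing this quantity over both \(\Theta\) and \(\Omega\) implements the cooperative game of Equation (15).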
### 4.4 Theoretical Results

We mainly consider binary private attributes and binary classification; we leave generalizing our results to multi-value attributes and multi-class classification as future work.\(^2\) All proofs are in Appendix A.

**Robustness vs. Representation Vulnerability.** We first show the relationship between the adversarial risk (or robustness) and the representation vulnerability in ARPRL.

**Theorem 2.** Let all binary task classifiers be \( \mathcal{C} = \{C : Z \rightarrow Y\} \). Then for any representation learner \( f : X \rightarrow Z \), we have
\[
\inf_{C \in \mathcal{C}} \text{AdvRisk}_\epsilon(C \circ f) \geq \frac{1}{\log 2} (\text{RV}_\epsilon(f|u) - I(x; z|u)). \tag{19}
\]

**Remark.** Similar to Theorem 1, Theorem 2 shows that a smaller representation vulnerability \( \text{RV}_\epsilon(f|u) \) indicates a smaller adversarial risk, which means better robustness. In addition, a larger MI \( I(x; z|u) \) (Goal 2 for utility preservation) produces a smaller adversarial risk, also implying better robustness.

**Utility vs. Privacy Tradeoff.** The following theorem shows the tradeoff between utility and privacy:

**Theorem 3.** Let \( z = f(x) \) have a bounded norm \( R \) (i.e., \( \max_{z \in Z} \|z\| \leq R \)), and let \( \mathcal{A} \) be the set of all binary inference classifiers that take \( z \) as input. Assume the task classifier \( C \) is \( C_L \)-Lipschitz, i.e., \( \|C\|_L \leq C_L \). Then, we have the below relationship between the standard risk and the advantage:
\[
\text{Risk}(C \circ f) \geq \Delta_{y|u} - 2R \cdot C_L \cdot \text{Adv}_D(\mathcal{A}), \tag{20}
\]
where \( \Delta_{y|u} = |\Pr_D(y = 1|u = 0) - \Pr_D(y = 1|u = 1)| \) is a dataset-dependent constant.

**Remark.** Theorem 3 says that any task classifier using the learnt representations incurs a risk on at least one private attribute value. Specifically, the smaller the advantage \( \text{Adv}_D(\mathcal{A}) \) (meaning less attribute privacy is leaked), the larger the lower bound on the risk, and vice versa. Note that the lower bound is independent of the adversary, meaning it covers the worst-case attribute inference adversary. Hence, Equation (20) reflects an inherent tradeoff between utility preservation and attribute privacy leakage.

**Robustness vs. Privacy Tradeoff.** Let \( D' \) be a joint distribution over the adversarially perturbed input \( x' \), the sensitive attribute \( u \), and the label \( y \). By assuming the representation space is bounded by \( R \), the perturbed representations also satisfy \( \max_{z' \in Z} \|z'\| \leq R \), where \( z' = f(x') \). Following Equation (5), we have an associated adversary advantage \( \text{Adv}_{D'}(\mathcal{A}) \) with respect to the joint distribution \( D' \). Similarly, \( \text{Adv}_{D'}(\mathcal{A}) = 1 \) means an adversary can completely infer the private attribute \( u \) through the learnt adversarially perturbed representations \( z' \), and \( \text{Adv}_{D'}(\mathcal{A}) = 0 \) implies an adversary only obtains a random-guessing inference performance. Then we have the following theorem:

---

\(^2\) Zhao et al. (2020) also provides theoretical results on privacy protection against attribute inference attacks. The differences between their theoretical results and ours are discussed in Appendix A.4.

Figure 2: 2D representations learnt by ARPRL. (a) Raw data; (b) only robust representations (privacy acc: 99%, robust acc: 88%, test acc: 99%); and (c) robust + privacy-preserving representations (privacy acc: 55%, robust acc: 75%, test acc: 85%). Red vs. blue: binary private attribute values; cross × vs. circle ○: binary task labels.

**Theorem 4.** Let \( z' = f(x') \) be the learnt representation for \( x' \in B_p(x, \epsilon) \) with a bounded norm \( R \) (i.e., \( \max_{z' \in Z} \|z'\| \leq R \)), and let \( \mathcal{A} \) be the set of all binary inference classifiers. Under a \( C_L \)-Lipschitz task classifier \( C \), we have the below relationship between the adversarial risk and the advantage:
\[
\text{AdvRisk}_\epsilon(C \circ f) \geq \Delta_{y|u} - 2R \cdot C_L \cdot \text{Adv}_{D'}(\mathcal{A}). \tag{21}
\]
**Remark.** Likewise, Theorem 4 states that any task classifier using adversarially learnt representations has to incur an adversarial risk on at least one private attribute value. Moreover, the lower bound covers the worst-case adversary. Equation (21) hence reflects an inherent tradeoff between adversarial robustness and privacy.

**Guaranteed Attribute Privacy Leakage.** The attribute inference accuracy induced by the worst-case adversary is bounded in the following theorem:

**Theorem 5.** Let \( z \) be the representation learnt by Equation (17). For any attribute inference adversary \( A \in \mathcal{A} = \{ A : Z \rightarrow U = \{0, 1\} \} \), we have \( \Pr(A(z) = u) \leq 1 - \frac{H(u|z)}{2 \log_2(6/H(u|z))} \).

**Remark.** Theorem 5 shows that when the conditional entropy \( H(u|z) \) is larger, the inference accuracy induced by any adversary is smaller, i.e., less attribute privacy is leaked. From another perspective, as \( H(u|z) = H(u) - I(u; z) \), achieving the largest \( H(u|z) \) implies minimizing \( I(u; z) \) (note that \( H(u) \) is a constant); this is exactly what our Goal 1 aims to achieve.

## 5 EVALUATIONS

We evaluate ARPRL on both synthetic and real-world datasets. The results on the synthetic dataset are for visualization and for verifying the tradeoffs.

### 5.1 EXPERIMENTAL SETUP

We train the neural networks via Stochastic Gradient Descent (SGD), where the local batch size is 100 and we use 10 local epochs and 50 global epochs on all datasets. The learning rate in SGD is set to \( 1e^{-3} \). The detailed network architecture is shown in Table B.2 in Appendix B.2. The hyperparameters used in the adversarially robust network follow Zhu et al. (2020). We also discuss how to choose the hyperparameters \( \alpha \) and \( \beta \) on the real-world datasets in Appendix B.3. Without loss of generality, we consider the most challenging \( l_\infty \) perturbation. Following Zhu et al. (2020), we use the PGD attack (Madry et al., 2018) both for generating adversarial perturbations in the estimation of the worst-case MI and for evaluating model robustness.\(^3\) We implement ARPRL in PyTorch and use the NSF Chameleon Cloud GPUs (Keahey et al., 2020) (CentOS7-CUDA 11 with Nvidia Rtx 6000) to train the model. We evaluate ARPRL on three metrics: utility preservation, adversarial robustness, and privacy protection. Our source code will be publicly available upon paper acceptance.

---

\(^3\) Note that our goal in this paper is not to design the best adversarial attack, i.e., generating the optimal adversarial perturbation. Hence, the achieved adversarial robustness might not be optimal. We also test CelebA against the CW attack (Carlini & Wagner, 2017), and the robust accuracy is 85%, which is close to the 87% obtained with the PGD attack.

### 5.2 RESULTS ON A TOY EXAMPLE

We generate two 2D circles with centers \((0.0, 0.0)\) and \((1.0, 0.0)\), respectively, and radius 0.25; the data points lie on the circumferences. Each circle indicates a class and has 5,000 samples, where 80% of the samples are used for training and the remaining 20% for testing. We define the binary private attribute value of each data point as whether its \( y \)-value is above or below the \( x \)-axis. The network architectures are shown in Table B.2 in the Appendix. We use an \( l_\infty \) perturbation budget \( \epsilon = 0.01 \) and 10 PGD attack steps with step size 0.1. We visualize the learnt representations via 2D t-SNE (Van der Maaten & Hinton, 2008) in Figure 2.
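For reproducibility, a minimal sketch of the toy data generation described above is given below. The random seed and the use of NumPy are assumptions, while the centers, radius, sample counts, private attribute, and 80/20 split follow the setup in Section 5.2.

```python
import numpy as np

def make_two_circles(n_per_class=5000, radius=0.25, seed=0):
    """Toy data of Section 5.2: two circles centered at (0, 0) and (1, 0).

    The class label y is the circle index; the binary private attribute u is
    whether the point lies above the x-axis. Returns an 80/20 train/test split.
    """
    rng = np.random.default_rng(seed)
    centers = np.array([[0.0, 0.0], [1.0, 0.0]])
    xs, ys = [], []
    for label, c in enumerate(centers):
        theta = rng.uniform(0.0, 2.0 * np.pi, size=n_per_class)
        xs.append(c + radius * np.stack([np.cos(theta), np.sin(theta)], axis=1))
        ys.append(np.full(n_per_class, label))
    x, y = np.concatenate(xs), np.concatenate(ys)
    u = (x[:, 1] > 0.0).astype(int)        # private attribute: above the x-axis?
    idx = rng.permutation(len(x))
    n_train = int(0.8 * len(x))
    tr, te = idx[:n_train], idx[n_train:]
    return (x[tr], y[tr], u[tr]), (x[te], y[te], u[te])
```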
We can see that: by learning only robust representations, the 2-class data can be well separated, but their private attribute values can also be completely separated, i.e., almost 100% privacy leakage. In contrast, by learning both robust and privacy-preserving representations, the 2-class data can be separated while their private attributes are mixed: only 55% inference accuracy, where the optimal random-guessing inference accuracy is 50%. We also notice a tradeoff among robustness/utility and attribute privacy, as demonstrated in our theorems. That is, a more robust/accurate model leaks more attribute privacy, and vice versa.

5.3 Results on the Real-World Datasets

Datasets and setup. We use three real-world datasets from different applications: the widely used CelebA (Liu et al., 2015) image dataset (150K training images and 50K for testing) to study attribute privacy protection (Li et al., 2021), the Loans dataset (Hardt et al., 2016), and the Adult Income dataset (Dua & Graff, 2017). For the CelebA dataset, we treat the binary 'gender' as the private attribute and detecting 'gray hair' as the primary (binary classification) task, following Li et al. (2021); Osia et al. (2020b). For the Loans dataset, the primary task is to accurately predict the affordability of the person asking for the loan while protecting their race. Finally, for the Adult Income dataset, predicting whether the income of a person is above $50,000 is the primary task, and the private attributes are the gender and the marital status. For \( l_\infty \) perturbations, we set the budget \( \epsilon = 0.01 \) for Loans and Adult Income, and 0.1 for CelebA. We use 10 PGD attack steps with step size 0.1.

Results. Table 1 shows the results on the three datasets, where we report the robust accuracy (under the \( l_\infty \) attack), the normal test accuracy, and the attribute inference accuracy (as well as the gap to random guessing). We have the following observations: 1) When \( \alpha = 0 \), ARPRL only focuses on learning robust representations (similar to Zhu et al., 2020) and obtains the best robust accuracy. However, the inference accuracy is rather high, indicating a serious privacy leakage. 2) Increasing \( \alpha \) progressively better protects the attribute privacy, i.e., the inference accuracy is gradually reduced and finally becomes close to random guessing (note that different datasets have different random-guessing values). 3) \( \alpha \) and \( \beta \) together control the tradeoff among robustness, utility, and privacy. In particular, better privacy protection (i.e., a larger \( \alpha \)) implies a smaller test accuracy, indicating a utility-privacy tradeoff, as validated in Theorem 3. Similarly, better privacy protection also implies a smaller robust accuracy, indicating a robustness-privacy tradeoff, as validated in Theorem 4.

Visualization. We further visualize the learnt representations via t-SNE in Figure 3. We can see that: when only focusing on learning robust representations, both the data with different labels and the data with different attribute values can be well separated. On the other hand, when learning both robust and privacy-preserving representations, the data with different labels can be separated, but they are mixed in terms of the attribute values, meaning the privacy of the attribute values is protected to some extent.

Runtime. We only report the runtime on the largest dataset, CelebA (150K training images).
On our platform, it took about 5 minutes per epoch (about 15 hours in total) to learn the robust and privacy-preserving representations for each hyperparameter setting. The computational bottleneck mainly comes from training the robust representations (where we adapt the source code from Zhu et al. (2020)), which occupies 60% of the training time (e.g., 3 minutes out of 5 minutes in each epoch). Training the other neural networks is much faster.

Table 1: Test accuracy, robust accuracy, vs. inference accuracy (and gap w.r.t. the optimal random guessing) on the considered three datasets and private attributes. Note that some datasets are unbalanced, so the random-guessing values differ. A larger $\alpha$ means more privacy protection, while a larger $\beta$ means more robustness against adversarial perturbations. $\alpha = 0$ means no privacy protection and focuses only on robust representation learning, as in Zhu et al. (2020); Zhou et al. (2022).

**CelebA** (private attr.: gender (binary), budget $\epsilon = 0.1$)

| $\alpha$ | $\beta$ | Rob. Acc | Test Acc | Infer. Acc (gap) |
|------|------|------|------|------|
| 0 | 0.50 | 0.87 | 0.91 | 0.81 (0.31) |
| 0.1 | 0.45 | 0.84 | 0.88 | 0.75 (0.25) |
| 0.5 | 0.25 | 0.79 | 0.85 | 0.62 (0.12) |
| 0.9 | 0.05 | 0.71 | 0.81 | 0.57 (0.07) |

**Loans** (private attr.: race (binary), budget $\epsilon = 0.01$)

| $\alpha$ | $\beta$ | Rob. Acc | Test Acc | Infer. Acc (gap) |
|------|------|------|------|------|
| 0 | 0.50 | 0.45 | 0.74 | 0.92 (0.22) |
| 0.05 | 0.475 | 0.42 | 0.69 | 0.75 (0.05) |
| 0.10 | 0.45 | 0.40 | 0.68 | 0.72 (0.02) |
| 0.15 | 0.425 | 0.39 | 0.66 | 0.71 (0.01) |

**Adult Income** (private attr.: gender (binary), budget $\epsilon = 0.01$)

| $\alpha$ | $\beta$ | Rob. Acc | Test Acc | Infer. Acc (gap) |
|------|------|------|------|------|
| 0 | 0.5 | 0.63 | 0.68 | 0.88 (0.33) |
| 0.05 | 0.475 | 0.57 | 0.67 | 0.72 (0.17) |
| 0.10 | 0.45 | 0.55 | 0.65 | 0.59 (0.04) |
| 0.20 | 0.4 | 0.53 | 0.63 | 0.55 (0.00) |

**Adult Income** (private attr.: marital status (7 values), budget $\epsilon = 0.01$)

| $\alpha$ | $\beta$ | Rob. Acc | Test Acc | Infer. Acc (gap) |
|------|------|------|------|------|
| 0 | 0.5 | 0.56 | 0.71 | 0.70 (0.14) |
| 0.001 | 0.495 | 0.55 | 0.65 | 0.60 (0.04) |
| 0.005 | 0.49 | 0.52 | 0.60 | 0.59 (0.03) |
| 0.01 | 0.45 | 0.47 | 0.59 | 0.57 (0.01) |

5.4 Comparing with the State-of-the-Arts

Comparing with task-known privacy-protection baselines. We compare ARPRL with two recent task-known methods for attribute privacy protection on CelebA: DPFE (Osia et al., 2020b), which also uses mutual information but in a different way, and Deepobfuscator (Li et al., 2021), which is an adversarial-training-based defense. Specifically, we ensure the three methods have the same test accuracy of 0.88 and compare the attribute inference accuracy. For a fair comparison, we do not consider adversarial robustness in ARPRL here. The attribute inference accuracies of DPFE and Deepobfuscator are 0.79 and 0.70, respectively, while ARPRL's is 0.71. First, DPFE performs much worse because it assumes the distribution of the learnt representation to be Gaussian (which could be inaccurate), whereas Deepobfuscator and ARPRL make no assumption on the distribution. Second, Deepobfuscator performs slightly better than ARPRL: while both involve adversarial training, Deepobfuscator uses the task labels, whereas ARPRL is task-agnostic and hence slightly sacrifices privacy.

Comparing with task-known adversarial robustness baselines.
We compare ARPRL with the state-of-the-art task-known adversarial-training-based TRADES (Zhang et al., 2019) on CelebA, under the same adversarial perturbation and without privacy protection (i.e., $\alpha = 0$). Task-agnostic ARPRL attains a robust accuracy of 0.87, slightly worse than TRADES's 0.89. However, when ARPRL also uses the task labels during training, its robust accuracy increases to 0.91. This again verifies that defenses based on adversarially robust representations outperform the classic adversarial-training-based method.

Comparing with task-known TRADES + Deepobfuscator for both robustness and privacy protection. A natural solution to achieve both robustness and privacy protection is to combine the state-of-the-art methods that are individually adversarially robust or privacy-preserving. Here, we test TRADES + Deepobfuscator on CelebA. By tuning the tradeoff hyperparameters, we obtain the best utility, privacy, and robustness tradeoff of TRADES + Deepobfuscator as (Robust Acc, Test Acc, Infer. Acc) = (0.79, 0.84, 0.65). In contrast, the best tradeoff of ARPRL in Table 1 is (Robust Acc, Test Acc, Infer. Acc) = (0.79, 0.85, 0.62), which is slightly better, even though both TRADES and Deepobfuscator use the task labels while ARPRL does not. These results imply that simply combining state-of-the-art robust and privacy-preserving methods is not the best option. Instead, ARPRL learns both robust and privacy-preserving representations under the same information-theoretic framework.

6 Conclusion and Future Work

In this paper, we aim to ensure that machine learning models are robust against adversarial examples and protect sensitive attributes in the data. We achieve this goal by proposing ARPRL, which learns adversarially robust, privacy-preserving, and utility-preserving representations under a unified information-theoretic framework. We also derive theoretical results that show the inherent tradeoff between robustness/utility and privacy, as well as guarantees on attribute privacy against the worst-case attribute inference adversary. ARPRL is also shown to outperform the state-of-the-arts in our empirical evaluations. Future work includes 1) generalizing the results to other well-known security attacks such as data poisoning and backdoor attacks, and other well-known privacy attacks such as membership inference and data reconstruction attacks; 2) evaluating ARPRL on other data modalities such as audio, speech, and natural language; and 3) generalizing the theoretical results to multi-value attributes and providing provable robustness guarantees.

REFERENCES

Chatgpt. https://chat.openai.com/, developed by OpenAI.

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *CCS*, 2016.

Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. In *ICLR*, 2017.

Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. Privacy-preserving deep learning: Revisited and enhanced. In *ATIS*, 2017.

AWS Marketplace. https://aws.amazon.com/marketplace/solutions/machine-learning/pre-trained-models/

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In *ICML*, 2018.

Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Galen Reeves, and Guillermo Sapiro.
Adversarially learned representations for information obfuscation and inference. In *ICML*, 2019. Gerda Bortsova, Cristina González-Gonzalo, Suzanne C Wetstein, Florian Dubost, Ioannis Katramados, Laurens Hogeweg, Bart Liefers, Bram van Ginneken, Josien PW Pluim, Mitko Veta, et al. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors. *Medical Image Analysis*, 2021. Chris Calabro. *The exponential complexity of satisfiability problems*. University of California, San Diego, 2009. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In *IEEE S & P*, 2017. Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, and Lawrence Carin. Club: A contrastive log-ratio upper bound of mutual information. In *ICML*, 2020. Clarifai. [https://www.clarifai.com/demo](https://www.clarifai.com/demo) July 2019. Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. In *ICML*, 2019. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *ICML*, 2020. Yinpeng Dong, Zhijie Deng, Tianyu Pang, Jun Zhu, and Hang Su. Adversarial distributional training for robust deep learning. In *NeurIPS*, 2020. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL [http://archive.ics.uci.edu/ml](http://archive.ics.uci.edu/ml). Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning visual classification. In *CVPR*, 2018. Alison L Gibbs and Francis Edward Su. On choosing and bounding probability metrics. *International statistical review*, 70(3):419–435, 2002. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *NIPS*, 2014. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In *ICLR*, 2015. Jihun Hamm. Minimax filter: Learning to preserve privacy from inference attacks. *JMLR*, 2017. Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, NIPS’16, pp. 3323–3331, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819. R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In *ICLR*, 2019.
Uj2Wjv0pMY
Since this paper is about a new dataset that is claimed to focus on error recognition, there are not many new insights about the significance of bringing more error videos to procedural video datasets: neither the way the data is captured nor the baselines and experiments demonstrate why it matters. Could the authors elaborate?
Put on your detective hat: What's wrong in this video? A Dataset for Error Recognition in Procedure Videos

Anonymous authors
Paper under double-blind review

Abstract

Following step-by-step procedures is an essential component of various activities carried out by individuals in their everyday lives. These procedures serve as a guiding framework that helps achieve goals efficiently, whether assembling furniture or preparing a recipe. However, the complexity and duration of procedural activities inherently increase the likelihood of making errors. Understanding such procedural activities from a sequence of frames is a challenging task that demands an accurate interpretation of visual information and an ability to reason about the structure of the activity. To this end, we collected a new egocentric 4D dataset comprising 384 recordings (94.5 hrs) of people performing recipes in kitchen environments. The dataset consists of two distinct activity types: one in which participants adhere to the provided recipe instructions and another where they deviate and induce errors. We provide 5.3K step annotations and 10K fine-grained action annotations\(^1\) for 20% of the collected data and benchmark it on the following tasks: error recognition, multi-step localization, and procedure learning.

1 Introduction

Remember when you prepared your favourite meal after a long day, missed adding that crucial ingredient, and then lost your appetite after a few bites? Such scenarios are quite common because performing long-horizon, step-by-step procedural activities increases the likelihood of making errors. These errors can be harmless, provided they can be rectified with little consequence. Nonetheless, when the procedures in question pertain to the medical field or complex chemical experiments, the cost of errors can be substantial. Therefore, there is a pressing need for building AI systems that can guide users in performing procedural activities (Draper, 2021).

A key problem we need to solve in order to build such AI systems is procedural activity understanding, a challenging and multi-faceted task that demands interpreting what is happening (specifically, determining whether the person is following the procedure correctly or making an error), anticipating what will happen, and planning the course of action to accomplish the goal. For a system to interpret what is happening, it needs to recognize and segment actions while assessing the current state of the environment (Elhamifar & Huynh, 2020b; Yang et al., 2021; Wang et al., 2021; Lin et al., 2022; Dvornik et al., 2022). To anticipate future events, the system should be able to predict actions at the beginning of an interaction or even beforehand (Damen et al., 2021; Girdhar & Grauman, 2021). On the other hand, planning a sequence of actions requires the system to understand the possible outcomes of these interactions (Chang et al., 2019b; Zhao et al., 2022; Bi et al., 2021).

A number of datasets have been introduced to facilitate procedural activity understanding. Most of these datasets contain only normal videos of humans performing correct procedures. For an AI system to recognize errors in human procedures, datasets with error annotations are necessary. Recently, a number of procedural datasets containing errors have been introduced, most of them primarily focused on identifying and addressing the errors that occur during assembly (Table 1).
In this work, we present a novel dataset to aid AI systems that solve the procedural activity understanding task, focusing specifically on improving their ability to recognize and anticipate errors.\(^1\) We selected cooking as a domain that is sufficiently complex and encompasses different kinds of errors that are compounding in nature and completely alter the current state of the environment with no point of return. We decided to capture data from an egocentric view, despite ego motion, because it minimizes occlusions more effectively than third-person videos.

\(^1\)website: https://error-anonymous-dataset.github.io/ErrorAnonymous/

Table 1: Ours vs. current procedural datasets (with and without errors). Our dataset not only enhances the study of tasks outlined in procedural activity datasets in the existing literature but also enables a systematic investigation of errors occurring during the performance of procedural activities.

| Errors | Dataset Name | Domain | Ego | Depth | Recorded | Error Labels | Error Type | Videos | Hours | Tasks |
|--------|--------------|--------|-----|-------|----------|--------------|------------|--------|-------|-------|
| | YouCook2 (Zhou et al., 2017) | Cooking | ✗ | ✗ | ✗ | - | - | 2000 | 176 | 89 |
| | 50Salads (Stein & McKenna, 2013) | Cooking | ✓ | ✓ | ✓ | - | - | 50 | 4.5 | 2 |
| | EGTEA (Li et al., 2018) | Cooking | ✓ | ✓ | ✓ | - | - | 86 | 29 | 7 |
| | MPII Cooking (Rohrbach et al., 2015) | Cooking | ✓ | ✓ | ✓ | - | - | 273 | 27 | 67 |
| | EgoProceL (Bansal et al., 2022) | Assembly | ✓ | ✓ | ✓ | - | - | 329 | 62 | 16 |
| | Breakfast (Kuehne et al., 2014) | Cooking | ✓ | ✓ | ✓ | - | - | 1712 | 77 | 10 |
| ✓ | EgoTV (Hazra et al., 2023) | Simulated | ✓ | ✓ | - | ✓ | Intentional | 7673 | 168 | 540 |
| ✓ | Assembly-101 (Sener et al., 2022) | Toy Assembly | ✓ | ✓ | ✓ | Partial* | Unintentional | 447 | 53 | 101 |
| ✓ | CSV (Qian et al., 2022) | Chemistry Lab | ✓ | ✓ | ✓ | - | Intentional | 1940 | 11.1 | 14 |
| ✓ | HoloAssist (Wang et al., 2023) | Assembly* | ✓ | ✓ | ✓ | - | Unintentional | 2221 | 166 | 350 |
| ✓ | Industrial (Schoonbeek et al., 2024) | Toy Assembly | ✓ | ✓ | ✓ | ✓ | Int. and Unint. | 84 | 5.8 | 36 |
| ✓ | ATA (Ghoddoosian et al., 2023) | Toy Assembly | ✓ | ✓ | ✓ | ✓ | Intentional | 1152 | 24.8 | 3 |
| ✓ | Ours | Cooking | ✓ | ✓ | ✓ | ✓ | Int. and Unint. | 384 | 94.5 | 24 |

This paper makes the following contributions: 1) We collected an egocentric 4D dataset that features individuals following recipes in kitchen settings. Our dataset includes two distinct types of activities: one where the participants precisely follow the given recipe guidelines and another where they deviate, making errors. 2) We provide annotations for (a) the start/end time of each step of the recipe, (b) the start/end time of each action/interaction for 20% of the collected data, and (c) the category and a detailed description of each error performed by a participant, which enabled us to gather a comprehensive overview of different error types and their concise explanations. 3) We provide baselines for the following procedure understanding tasks: supervised error recognition, multi-step localization, and procedure learning.

2 RELATED WORK

Our dataset is distinguished by four key features: (1) the inclusion of multi-step activities, (2) an egocentric viewpoint, (3) multimodal capabilities, and (4) a diverse set of errors.
In Table 1, we offer a comparative analysis with existing datasets, and in the rest of this section, we elaborate on how our dataset is particularly relevant to the various tasks of interest.

Error Recognition. Given a video clip, error recognition involves identifying errors present in the clip. This task was initially introduced as mistake detection by Assembly-101 (Sener et al., 2022), which proposed a 3-class classification of the performed procedure as correct, mistake, or correction. Anomaly detection, while closely related to error recognition, differentiates itself by utilizing static cameras and backgrounds to identify unusual or abnormal behavior. Our dataset, encompassing a variety of error types, including timing, preparation, temperature, technique, and measurement errors, provides researchers with a comprehensive view of error patterns in diverse situations. Cooking involves continuous changes in the shape and color of ingredients, unlike assembly tasks, which usually lack such variation. This unique characteristic of cooking activity makes our dataset particularly valuable for developing error recognition methods applicable to procedural tasks in the medical sector or those involving chemical experiments.

Temporal Action Localization (TAL) aims to identify temporal boundaries in extended videos and classify each action instance. Broadly, TAL methodologies fall into two categories: two-stage and single-stage approaches. The two-stage method first generates action proposals and then classifies these actions. In contrast, the single-stage approach conducts simultaneous action localization and classification. Several datasets, such as ActivityNet (Caba Heilbron et al., 2015), THUMOS14 (Jiang et al., 2014), Charades (Sigurdsson et al., 2016), MultiTHUMOS (Yeung et al., 2017), AVA (Gu et al., 2017), EPIC-KITCHENS (Damen et al., 2021), and Ego4D (Grauman et al., 2021), have significantly advanced the field of TAL. While our dataset may be smaller in comparison, it offers a unique feature: it includes both normal actions and erroneous actions. This makes it especially valuable for evaluating the robustness of TAL methods in handling actions with deviations.

**Procedure Learning** is a two-part process where all video frames are first segregated into K significant steps, and then a logical sequence of the steps necessary to complete the task is identified (Elhamifar & Huynh, 2020a; Huang et al., 2016; Chang et al., 2019a; Bojanowski et al., 2014; Sener & Yao, 2019; Zhou et al., 2018). Existing procedural activity datasets like CrossTask (Zhukov et al., 2019) and COIN (Tang et al., 2019) are predominantly third-person videos; in this light, the EgoProceL dataset (Bansal et al., 2022) was compiled from videos of CMU-MMAC (De la Torre et al., 2008), EGTEA (Fathi et al., 2011b), EPIC-Tents (Jang et al., 2019), and MECCANO (Ragusa et al., 2020). We observe that our dataset features a greater average step length, posing a substantially more challenging problem for algorithms developed on existing egocentric procedure learning datasets.

### 3 DATA COLLECTION

Figure 1: (a-b) display the sensor configuration for recording, which includes a GoPro mounted over a HoloLens and a participant making the recipe *Cucumber Raita*; (c-f) display the synchronized data captured by the HoloLens2, including 3D hand joints, depth, RGB, and the camera trajectory.
**Sensors.** To gather activity data, we employed a combination of a GoPro Hero 11 camera, mounted on the user's head, and a HoloLens2 device. To facilitate data collection from the HoloLens2, including its depth sensor, IMU (Inertial Measurement Unit), front RGB camera, and microphone, we utilized a custom tool developed by Dibene & Dunn (2022). Furthermore, we captured the processed head and hand tracking information provided by the HoloLens2 device. We offer the data recorded from the HoloLens2 and the GoPro separately for each recording; note that the data from the GoPro and the HoloLens2 are not synchronized. Figure 1 illustrates the data captured from the HoloLens2.

**Recipes.** We curated a selection of 24 cooking recipes sourced from WikiHow (Table 8), specifically focusing on recipes with a preparation time of 30 minutes or less. These recipes encompassed a wide range of culinary traditions, showcasing the diversity of cooking styles across various cuisines. Our main goal was to identify potential errors that could occur when using different cooking tools to prepare recipes sampled from various cuisines.

**Task Graphs.** A task graph visually represents the sequential steps required to accomplish a given recipe. Each node in the task graph (for a recipe) corresponds to a step in the recipe, and a directed edge from a node $x$ to a node $y$ indicates that $x$ must be performed before $y$. Thus, a task graph is a directed acyclic graph, and a topological sort over it represents a valid completion of the recipe. To construct task graphs for our collection of 24 WikiHow recipes, we meticulously identified all the essential steps involved and established their inter-dependencies, thereby establishing a topological order of tasks (see Appendix F for details about the constructed task graphs).

### 3.1 PROTOCOL

Our dataset was compiled by 8 participants across 10 distinct kitchens. Each participant selected ten recipes and recorded, on average, 48 videos across 5 different kitchens. During filming, all participants were required to ensure that they were alone in the kitchen and to remove any items that could potentially identify them, such as personal portraits, mirrors, and smartwatches with portraits. The participants used a GoPro and a HoloLens2 to record and monitor their footage. Each participant was provided with a tablet-based recording interface accessible through a web browser. To ensure optimal video quality, we asked the participants to configure the GoPro camera to capture videos in 4K resolution at 30 frames per second. The HoloLens2 device was programmed to stream RGB frames at 360p resolution and a rate of 30 frames per second. It also streamed depth frames in Articulated Hand Tracking mode, referred to as "depth_ahat" mode. The device additionally streamed three separate IMU sensor data streams and spatial data, including both head and hand poses.

### 3.1.1 Normal Recordings

A recording is categorized as a **normal recording** when it is captured as the participant accurately follows the procedure outlined in the recipe. Each participant is tasked with selecting a recipe from the available options, which is scheduled within a kitchen setup using the recording interface. Subsequently, they are presented with one of the pre-established topological orders of the recipe, as determined by the previously constructed task graphs (see Appendix F).
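To make the task-graph representation and its topological orders concrete, here is a minimal sketch using Python's standard-library `graphlib`; the recipe steps shown are hypothetical and not taken from the dataset.

```python
from graphlib import TopologicalSorter

# A hypothetical task graph: each key is a step, and its value is the set of
# steps that must be completed before it (its incoming edges in the DAG).
task_graph = {
    "wash tomato": set(),
    "chop tomato": {"wash tomato"},
    "whisk yogurt": set(),
    "mix ingredients": {"chop tomato", "whisk yogurt"},
    "garnish and serve": {"mix ingredients"},
}

# Any topological order over this DAG is a valid way to complete the recipe;
# participants are shown one such pre-established order.
order = list(TopologicalSorter(task_graph).static_order())
print(order)  # e.g., ['wash tomato', 'whisk yogurt', 'chop tomato', ...]
```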
Participants then proceed to follow the provided task graph, commencing from the beginning and progressing through each step in accordance with its dependencies and designated timing (see Figure ??).

### 3.1.2 Error Recordings

A recording is termed an **error recording** when it is captured while the individual deviates from the recipe's procedure, thereby inducing errors. Following the terminology used in scientific disciplines such as neuroscience (Chevignard et al., 2010) and chemistry, we refer to deviations from procedures as **errors**. Note that the term "errors" used here is equivalent to what is commonly called "mistakes" in the AI community (cf. Sener et al., 2022). Following Chevignard et al. (2010); Finnanger et al. (2021); Fogel et al. (2020), we classified common errors performed during a cooking activity into the following categories: (1) preparation errors, (2) measurement errors, (3) technique errors, (4) timing errors, (5) temperature errors, (6) missing steps, and (7) ordering errors (see Figure 18 in the Appendix). We also provide visual illustrations in Figure 2 showcasing the categorization of videos into normal and error recordings.

We devised and implemented three strategies for inducing errors. Each participant was asked to pick a strategy for performing the recipe in a particular environment and was guided accordingly in preparing for their performance:

1. **Pre-prepared error scripts**: Participants were given pre-prepared error scripts containing missing steps and ordering errors.
2. **Prepare error scripts**: Participants were given a web-based interface to create an error script for each error recipe recording; the modified error script was displayed on a tablet, enabling participants to perform according to it.
3. **Impromptu**: During the later stages of the recording process, participants were asked to induce errors spontaneously as they performed the recipe.

Following the completion of each recording, participants were given access to a web-based interface to document the errors they made during each step. Although we developed a process to capture intentional errors by preparing error scripts, many errors were unintentional (Figure 3 presents such an example).

### 3.2 Data Annotation

Our annotations comprise: (1) annotations for coarse-grained actions or steps, providing the start and end times of each step within the recorded videos; (2) fine-grained action annotations for 20% of the recorded data, including the start and end times of each fine-grained action, to support learning semi-/weakly supervised approaches for action recognition and action anticipation; and (3) categories and descriptions of the induced errors, associated with the corresponding step in the provided annotations, allowing for a comprehensive understanding of the errors. Figure 3 describes the granularity of the different categories of annotations provided. To ensure high-quality annotations, each recording was annotated by the person who recorded the video and then reviewed by another. The reviewer was asked to double-check that all errors made by the participant in the recording were included in the corresponding step annotations.
**Coarse-Grained Action/Step Annotations.** We designed an interface for performing step annotations in Label Studio.[^2] Each annotator is presented with this interface to mark the start and end times of each step. Our steps are significantly longer than a single fine-grained action and encompass multiple fine-grained actions necessary for performing the described step. For example, for the step *Chop a tomato*, we include (1) the pre-conditional actions *[opening the refrigerator, grabbing a polythene bag of tomatoes, taking a tomato, placing the tomato on the cutting board, closing the fridge]* and (2) the post-conditional actions *[placing down the knife, grabbing the polythene bag of tomatoes, opening the fridge, and placing the bag in the fridge]*. Table 2 summarizes and compares coarse-grained action/step annotations across relevant datasets.

[^2]: https://labelstud.io/

Table 2: Coarse-grained action/step annotations across relevant datasets: average video length $T_{vid}$ (min), number of annotated segments $N_{seg}$, average number of segments per video $N_{avg}$, and average segment length $T_{seg}$ (sec).

| Dataset | $T_{vid}$ (min) | $N_{seg}$ | $N_{avg}$ | $T_{seg}$ (sec) |
|-------------|------|--------|------|-------|
| 50Salads | 6.4 | 899 | 18 | 36.8 |
| Breakfast | 2.3 | 11,300 | 6.6 | 15.1 |
| Assembly-101 | 7.1 | 9523 | 24 | 16.5 |
| CSV | 0.2 | 18488 | 9.53 | 2.1 |
| HoloAssist | 4.48 | 15927 | 7.17 | 39.3 |
| Ours (Total) | 14.8 | 5300 | 13.8 | 52.78 |

**Fine-Grained Action Annotations.** Inspired by the pause-and-talk narrator (Damen et al., 2020), we have designed and developed a web-based tool for fine-grained action annotation that utilizes
We used the error categorization of each step to generate binary labels for the compiled video segments. We utilized pre-trained video recognition models to extract features. However, to maintain a fixed-size input to these models, we divided each segment into 1-second sub-segments. Each sub-segment was given the same class label as its parent segment. We used the extracted features to train a neural network with a single hidden layer with ReLU activation and a sigmoid output node. We assigned the majority class among the sub-segments to the entire segment during the inference phase. We trained all classifiers on an NVIDIA A40 GPU using Adam optimizer and set the learning rate to 0.001. Table 3 presents the results obtained for error recognition on our dataset. We observe that our Omnivore (Girdhar et al., 2022)-based model achieves the best recall, F1 and AUC scores. However, the scores are pretty low, which underscores the challenging nature of the task. We also present qualitative results of trained classifiers in figure 5. --- Table 3: Supervised Error Recognition | Baseline | Precision | Recall | F1 Score | AUC Score | |--------------|-----------|--------|----------|-----------| | 3D ResNet | 76.74 | 14.54 | 24.44 | 0.78 | | SlowFast | 64.42 | 29.52 | 40.48 | 0.78 | | X3D | 52.78 | 16.74 | 25.42 | 0.72 | | VideoMAE | 75.34 | 25.7 | 38.33 | 0.82 | | Omnivore | 68.24 | 44.49 | 53.87 | 0.84 | --- We also developed methods for solving the zero-shot error recognition task (namely, training data contains only normal recordings and test data has both error and normal recordings) by adapting anomaly detection methods in the literature. However, we found that these methods perform poorly and are only slightly better than random (results are presented in the supplement). These results suggest that zero-shot error recognition is quite challenging and will require methods that seek to understand the context and meaning of errors. Figure 5: Displays error probabilities predicted by trained classifiers on 4 segments of the video (3 error segments and 1 normal segment) sampled from the compiled test dataset. Although our omnivore-based model outperforms the rest in classifying error segments, we note that all models are adept at distinguishing normal video segments. **Early Error Recognition.** In this task, we aim to identify errors within segments of a procedural activity when only the first half of the segment is provided as input to the model. Thus, we re-use the datasets compiled for supervised error recognition and train task prediction heads. The results in Table 4 are consistent with supervised error recognition, where our omnivore-based model outperforms other models. We note that the scores for early error recognition are generally lower compared to the error recognition setting, indicating that recognizing errors with less information is a significantly harder setting. We conjecture that to improve these scores significantly, one must employ methods that seek to (semantically) understand the context, meaning, and cause of various errors. **Error Category Recognition.** In this approach, we frame Error Category Recognition as a binary classification task, discerning between errors and non-errors across all error types. We iterate through each error type and construct a dataset where it is designated as the error class, while all other error categories and correct instances are categorized as correct. 
Table 5 presents the performance metrics for the five models, each trained using a distinct pre-trained feature extractor, for the different error categories. Despite the high accuracy scores, a closer examination of the recall values reveals limitations in the models' ability to identify the different types of errors accurately. Additionally, this analysis helps determine the relative hardness of detecting different types of errors.

Table 5: Error category recognition. For each error category, we report accuracy (Acc.), precision (Prec.), recall (Rec.), and F1.

| Method Name | Technique Error (Acc. / Prec. / Rec. / F1) | Preparation Error (Acc. / Prec. / Rec. / F1) | Measurement Error (Acc. / Prec. / Rec. / F1) | Temperature Error (Acc. / Prec. / Rec. / F1) | Timing Error (Acc. / Prec. / Rec. / F1) |
|-------------|------|------|------|------|------|
| 3D ResNet | 88.56 / 27.91 / 18.18 / 22.02 | 89.77 / 9.30 / 9.76 / 9.52 | 88.43 / 6.98 / 6.12 / 6.52 | 93.14 / 0.00 / 0.00 / 0.00 | 91.52 / 6.98 / 11.54 / 8.70 |
| Slowfast | 82.50 / 19.23 / 30.30 / 23.53 | 83.45 / 10.58 / 26.83 / 15.17 | 82.10 / 9.62 / 20.41 / 13.07 | 85.20 / 0.96 / 12.50 / 1.79 | 84.12 / 5.77 / 23.08 / 9.23 |
| X3D | 83.31 / 9.72 / 10.61 / 10.14 | 87.21 / 12.50 / 21.95 / 15.93 | 85.33 / 8.33 / 12.24 / 9.92 | 89.23 / 0.00 / 0.00 / 0.00 | 87.62 / 4.17 / 11.54 / 6.12 |
| VideoMAE | 84.39 / 19.18 / 22.22 / 20.59 | 86.99 / 13.70 / 27.03 / 18.18 | 84.39 / 8.22 / 12.77 / 10.88 | 85.58 / 1.37 / 12.50 / 2.47 | 87.14 / 6.85 / 19.23 / 10.10 |
| Omnivore | 78.20 / 17.57 / 39.39 / 24.30 | 80.22 / 14.19 / 51.22 / 22.22 | 78.33 / 12.16 / 36.73 / 18.27 | 79.81 / 2.03 / 37.50 / 3.85 | 79.00 / 6.08 / 34.62 / 10.34 |

### 4.2 Multi-Step Localization

**Description.** Given an untrimmed, long video that captures a procedural activity, multi-step localization aims to determine each step's start and end frames and to classify each step. We framed supervised multi-step localization as an instance of the supervised temporal action localization (TAL) problem. This setup is particularly challenging because our dataset encompasses both normal actions and actions with deviations, termed "technique errors" (see Figure 18), and because the duration of the steps in our dataset exceeds that of the actions in benchmark datasets used for TAL (Table 2). We employ the standard metrics used by TAL methods to evaluate the trained models: temporal Intersection over Union (tIoU), mean Average Precision (mAP), and Recall at x (R@x).

Figure 6: Multi-step localization results for 4 recipes from the Omnivore-based model trained on the data split constructed using recording environments as the criterion. Each normal/error recording is sampled from the test set containing only normal/error recordings. The top part shows the ground-truth segments, and the bottom part shows the predicted segments.

**Implementation Details.** We used three different splits of the dataset constructed based on the recording environments ($\mathcal{E}$), the recording persons ($\mathcal{P}$), and the recordings ($\mathcal{R}$) (as described in Appendix G.1). For each of these splits, we extracted features from the chosen pre-trained video recognition models. For each of the resulting 12 datasets, we trained an ActionFormer head (Zhang et al., 2022). We modified the default configuration file and set the following hyper-parameters: num_classes to 353, input_dim to 1024, max_seq_len to 4096, and the learning rate to 0.0001, and trained all 12 models for 16 epochs.
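For reference, the tIoU metric used in the evaluation below is computed per pair of temporal segments; a minimal sketch:

```python
def temporal_iou(pred, gt):
    """tIoU between two temporal segments given as (start, end) in seconds.

    A prediction counts as correct at threshold t if its tIoU with a matched
    ground-truth segment is at least t (e.g., t = 0.1, 0.3, 0.5 below).
    """
    (ps, pe), (gs, ge) = pred, gt
    intersection = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - intersection
    return intersection / union if union > 0 else 0.0

# Example: a prediction shifted by 2 s against a 10 s ground-truth step.
print(temporal_iou((2.0, 12.0), (0.0, 10.0)))  # 8 / 12 = 0.667
```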
During inference, we further split each test set into two distinct sets: one containing only normal recordings ($\mathcal{T}_n$) and the other containing only error recordings ($\mathcal{T}_e$). Table 6 presents detailed results obtained by evaluating the trained models on these test sets. We observe that, among all the feature extractors used as backbones for training the ActionFormer head, Omnivore performs much better. In Appendix G.3, we present further benchmarking results, where we evaluate the models on a combined test set and perform an ablation study on the performance of models trained on features extracted using varying lengths. All models perform significantly worse on the test sets constructed using only error recordings than on those constructed using only normal ones. We also present qualitative results in Figure 6.

Table 6: Multi-step localization results. $B$ denotes the backbone, $D$ the data split, and $\mathcal{T}$ the test set; each tIoU-threshold column reports mAP / R@1 / R@5.

| $B$ | $D$ | $\mathcal{T}$ | tIoU = 0.1 | tIoU = 0.3 | tIoU = 0.5 |
|-----|-----|-----|------|------|------|
| 3D ResNet | $\mathcal{E}$ | $\mathcal{T}_n$ | 21.4 / 39.51 / 54.39 | 20.07 / 35.69 / 50.74 | 17.1 / 29.36 / 45.3 |
| | | $\mathcal{T}_e$ | 9.74 / 15.31 / 23.2 | 8.31 / 12.69 / 21.45 | 6.22 / 9.08 / 16.57 |
| | $\mathcal{P}$ | $\mathcal{T}_n$ | 19.57 / 35.93 / 49.39 | 18.68 / 33.2 / 47.44 | 15.99 / 27.58 / 43.22 |
| | | $\mathcal{T}_e$ | 13.82 / 27.14 / 39.57 | 12.94 / 23.4 / 37.32 | 10.88 / 19.21 / 33.64 |
| | $\mathcal{R}$ | $\mathcal{T}_n$ | 20.03 / 35.18 / 47.57 | 19.15 / 32.34 / 46.09 | 16.69 / 27.04 / 41.52 |
| | | $\mathcal{T}_e$ | 13.22 / 25.96 / 37.84 | 12.48 / 23.47 / 36.07 | 10.8 / 19.5 / 31.76 |
| Slowfast | $\mathcal{E}$ | $\mathcal{T}_n$ | 22.48 / 39.57 / 54.14 | 20.86 / 35.97 / 50.51 | 17.2 / 28.28 / 44.75 |
| | | $\mathcal{T}_e$ | 10.11 / 16.16 / 23.32 | 9 / 13.02 / 20.39 | 7.53 / 9.54 / 15.83 |
| | $\mathcal{P}$ | $\mathcal{T}_n$ | 23.12 / 36.55 / 50.45 | 22.09 / 34.09 / 49.11 | 19.24 / 28.93 / 45.12 |
| | | $\mathcal{T}_e$ | 14.78 / 26.68 / 39.97 | 14.14 / 24.73 / 37.71 | 12.56 / 21.76 / 34.37 |
| | $\mathcal{R}$ | $\mathcal{T}_n$ | 22.78 / 36.46 / 50.1 | 22.03 / 34.35 / 48.13 | 19.62 / 30.08 / 44.88 |
| | | $\mathcal{T}_e$ | 14.11 / 27.52 / 39.19 | 13.34 / 24.91 / 37.19 | 11.9 / 21.53 / 32.39 |
| VideoMAE | $\mathcal{E}$ | $\mathcal{T}_n$ | 24.44 / 38.22 / 52.48 | 22.97 / 34.77 / 49.51 | 18.67 / 28.57 / 42.68 |
| | | $\mathcal{T}_e$ | 7.53 / 13.54 / 20.52 | 6.93 / 11.4 / 18.36 | 5.63 / 8.55 / 15.13 |
| | $\mathcal{P}$ | $\mathcal{T}_n$ | 26.78 / 37.43 / 46.28 | 25.68 / 34.79 / 44.6 | 22.02 / 29.43 / 39.81 |
| | | $\mathcal{T}_e$ | 16.98 / 27.43 / 37.76 | 16.46 / 25.53 / 36.03 | 14.64 / 22.03 / 32.07 |
| | $\mathcal{R}$ | $\mathcal{T}_n$ | 26.27 / 37.15 / 46.93 | 24.71 / 34.06 / 45.03 | 21.51 / 29.36 / 40.44 |
| | | $\mathcal{T}_e$ | 15.43 / 25.94 / 33.97 | 14.44 / 23.23 / 32.35 | 12.96 / 19.83 / 28.99 |
| Omnivore | $\mathcal{E}$ | $\mathcal{T}_n$ | 34.65 / 47.91 / 60.63 | 33.06 / 44.77 / 58.36 | 28.59 / 38.38 / 51.9 |
| | | $\mathcal{T}_e$ | 12.51 / 19.6 / 27.06 | 11.66 / 17.54 / 24.45 | 9.94 / 14.63 / 20.96 |
| | $\mathcal{P}$ | $\mathcal{T}_n$ | 32.5 / 44.45 / 52.47 | 31.13 / 41.53 / 50.91 | 28.39 / 37.03 / 47.97 |
| | | $\mathcal{T}_e$ | 21.28 / 31.51 / 40.93 | 20.12 / 28.81 / 39.6 | 18.08 / 24.96 / 36.77 |
| | $\mathcal{R}$ | $\mathcal{T}_n$ | 30.22 / 42.43 / 52.11 | 28.94 / 39.47 / 50.49 | 25.15 / 32.65 / 46.51 |
| | | $\mathcal{T}_e$ | 19.34 / 31.28 / 41.24 | 18.4 / 28.66 / 39.33 | 16.27 / 24.28 / 35.35 |

4.3 Procedure Learning

**Description.** Procedure learning entails identifying relevant frames across videos of an activity and estimating the sequential steps required to complete the given task.
To benchmark procedure learning, we employed normal recordings from our dataset and assessed the performance of recently proposed methods (Bansal et al., 2022; Dwibedi et al., 2019).

**Implementation Details.** We followed the setup described in the work of Bansal et al. (2022) and trained the embedder networks for each recipe. Specifically, we train two networks: one using the Cycleback Regression loss \( C \) proposed by Dwibedi et al. (2019), and the other using a blend of two loss functions, the Cycleback Regression loss \( C \) and the Contrastive Inverse Difference Moment loss \( \mathcal{C} \). The combined loss function is expressed as \( C + \lambda \times \mathcal{C} \), where \( \lambda \) is a hyperparameter (we set \( \lambda \) to 0.5). We note that we only train the embedder networks using loss functions from these methods and retain the Pro-Cut Module for assigning frames to key steps. We adhered to the hyperparameter settings specified in the original paper to train the embedder network. Utilizing an A40 GPU, the entire training process was completed in approximately three hours. The results are presented in Table 7; we observed a significant decline in performance compared to the results reported on all other datasets in Bansal et al. (2022). Given that our dataset features videos with notably longer key-step lengths (as indicated in Table 2), we attribute this drop in performance primarily to this distinguishing characteristic.

Table 7: **Procedure Learning.** Here, \( P \) denotes precision, \( R \) recall, and \( I \) IoU.

| Recipe | Random | \( M_1 \) (Dwibedi et al., 2019) | \( M_2 \) (Bansal et al., 2022) |
|-------------------------|-----------------|----------------------------------|------------------------------------------|
| | \( P \) \( R \) \( I \) | \( P \) \( R \) \( I \) | \( P \) \( R \) \( I \) |
| BlenderBananaPancakes | 7.40 3.83 2.26 | 12.65 9.50 5.16 | 15.54 9.96 5.72 |
| Coffee | 6.54 3.87 2.17 | 13.68 9.91 5.49 | 15.76 10.25 5.63 |
| MugCake | 5.45 4.00 2.12 | 16.12 12.95 6.87 | 10.32 8.85 4.40 |
| PanFriedTofu | 5.35 3.97 1.54 | 8.86 10.39 3.75 | 9.34 12.44 3.87 |
| Pinwheels | 6.54 4.28 2.13 | 13.58 11.96 5.92 | 16.08 13.06 7.05 |
| Average of 24 recipes | 7.61 3.92 2.22 | 15.62 10.85 5.78 | 15.78 10.68 5.82 |

5 Discussion, Summary and Future Work

In this paper, we have introduced a large egocentric dataset for procedural activities. Our dataset consists of synchronized egocentric views, audio, and depth information, specifically designed for tasks such as Temporal Action Segmentation, 3D activity analysis, Procedure Learning, Error Recognition, Error Anticipation, and more. Additionally, we have provided benchmarks for error recognition and procedure learning. While current methods yield promising outcomes, our experimental assessment shows that they still fall short of adequately tackling these challenges, indicating the need for further exploration in this domain.

**Limitations.** We intend to capture deviations observed while performing a procedural activity from an egocentric view. First, we note that this type of data cannot be compiled from crowd-sourced platforms, which left us to record participants ourselves while they performed procedural activities. Second, by the nature of the problem, errors that occur when performing procedural activities are combinatorial and can have a compounding effect.
Thus, our work has the following limitations: (1) for each activity, the errors captured and presented in the dataset form a subset of the whole combinatorial space; (2) capturing 4D data in real kitchen environments posed logistical and equipment-training challenges, so we were compelled to limit the data collection to a specific geographic area; and (3) compared to datasets curated from crowd-sourced platforms for tasks like action/activity recognition and temporal action segmentation, the presented work comprises fewer recipes.

Our work opens up several avenues for future work. First, an exciting direction is the extension of the dataset to include activities from other domains. By incorporating tasks such as performing chemical experiments or executing hardware-related activities (e.g., working with cars or computer parts), the dataset can encompass a wider range of activities and provide insights into error patterns in diverse real-world scenarios. Second, the dataset can be used to compare and develop methods for various tasks such as transfer learning, semantic role labelling, video question answering, long video understanding, procedure planning, improving task performance, and reducing errors.

REFERENCES

Yin Li, Miao Liu, and James M. Rehg. In the eye of beholder: Joint learning of gaze and actions in first person video. In European Conference on Computer Vision (ECCV), August 2018. URL https://openaccess.thecvf.com/content_ECCV_2018/papers/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf.

Siddhant Bansal, Chetan Arora, and C. V. Jawahar. My view is the best view: Procedure learning from egocentric videos. In European Conference on Computer Vision (ECCV), July 2022. doi: 10.48550/arxiv.2207.10883.

Piotr Bojanowski, Rémi Lajugie, Francis Bach, Ivan Laptev, Jean Ponce, Cordelia Schmid, and Josef Sivic. Weakly supervised action labeling in videos under ordering constraints. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars (eds.), Computer Vision – ECCV 2014, pp. 628–643, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10602-1.

Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and Juan Carlos Niebles. D3TW: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 3546–3555. Computer Vision Foundation / IEEE, 2019a. doi: 10.1109/CVPR.2019.00366. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Chang_D3TW_Discriminative_Differentiable_Dynamic_Time_Warping_for_Weakly_Supervised_Action_CVPR_2019_paper.html.

Chien-Yi Chang, De-An Huang, Danfei Xu, Ehsan Adeli, Li Fei-Fei, and Juan Carlos Niebles. Procedure planning in instructional videos. In European Conference on Computer Vision, 2019b. doi: 10.1007/978-3-030-58621-8_20.

Mathilde P. Chevignard, Cathy Catroppa, Jane Galvin, and Vicki Anderson. Development and evaluation of an ecological task to assess executive functioning post childhood TBI: The children’s cooking task. Brain Impairment, 11(2):125–143, 2010. doi: 10.1375/brim.11.2.125.

Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. The EPIC-KITCHENS dataset: Collection, challenges and baselines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. doi: 10.1109/TPAMI.2020.2991965.
Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Rescaling egocentric vision: Collection, pipeline and challenges for EPIC-KITCHENS-100. International Journal of Computer Vision, October 2021. doi: 10.1007/s11263-021-01531-2.

Fernando De la Torre, Jessica K. Hodgins, Adam W. Bargteil, Xavier Martin, J. Robert Macey, Alex Tusell Collado, and Pep Beltran. Guide to the Carnegie Mellon University Multimodal Activity (CMU-MMAC) database. Technical Report CMU-RI-TR-08-22, Robotics Institute, Carnegie Mellon University, April 2008.

Juan C. Dibene and Enrique Dunn. HoloLens 2 sensor streaming. arXiv preprint arXiv:2211.02648, November 2022. doi: 10.48550/arxiv.2211.02648.

Bruce Draper. DARPA’s Perceptually-enabled Task Guidance (PTG) program, 2021. URL https://www.darpa.mil/program/perceptually-enabled-task-guidance.

Nikita Dvornik, Isma Hadji, Hai Pham, Dhaivat Bhatt, Brais Martinez, Afsaneh Fazly, and Allan D. Jepson. Graph2Vid: Flow graph to video grounding for weakly-supervised multi-step localization. arXiv preprint arXiv:2210.04996, October 2022.

Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisserman. Temporal cycle-consistency learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
TLADT8Wrhn
The fundamental issue of continual learning is catastrophic forgetting. If we fine-tune a small number of parameters (e.g., prompt tuning) in the CLIP model, is catastrophic forgetting a major concern? On the other hand, if we fine-tune a large number of parameters, resource limitations may become a factor. Therefore, from this perspective, is it necessary to construct such benchmarks?
TiC-CLIP: Continual Training of CLIP Models Saurabh Garg‡∗ Mehrdad Farajtabar† Hadi Pouransari† Raviteja Vemulapalli† Sachin Mehta† Oncel Tuzel† Vaishaal Shankar† Fartash Faghri† †Apple ‡Carnegie Mellon University sgarg2@andrew.cmu.edu, fartash@apple.com

Abstract

Keeping large foundation models up to date on the latest data is inherently expensive. To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models. This problem is exacerbated by the lack of any large-scale continual learning benchmarks or baselines. We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models: TiC-DataComp, TiC-YFCC, and TiC-RedCaps. TiC-DataComp, our largest dataset, contains over 12.7B timestamped image-text pairs spanning 9 years (2014–2022). We first use our benchmarks to curate various dynamic evaluations to measure the temporal robustness of existing models. We show OpenAI’s CLIP (trained on data up to 2020) loses ≈ 8% zero-shot accuracy on our curated retrieval task from 2021–2022 compared with more recently trained models in the OpenCLIP repository. We then study how to efficiently train models on time-continuous data. We demonstrate that a simple rehearsal-based approach that continues training from the last checkpoint and replays old data reduces compute by $2.5 \times$ when compared to the standard practice of retraining from scratch.

1 Introduction

Large multimodal foundation models (Bommasani et al., 2021) have offered unprecedented advancements in image generation and zero-shot generalization, and have led to a paradigm shift in multimodal learning, e.g., CLIP (Radford et al., 2021), Flamingo (Alayrac et al., 2022), and Stable Diffusion (Rombach et al., 2022). These foundation models are typically trained on large web-scale datasets which are fixed and static in nature. For example, CLIP’s training data contains 400 million image-text pairs, and Stable Diffusion was trained on the LAION-2B dataset (Schuhmann et al., 2022). In reality, however, these models must operate in a dynamic environment, where the world is in a state of constant change. For instance, the internet continually evolves, with petabytes of new data being added daily (Wenzek et al., 2019; Wiener & Bronson, 2014). It remains unclear how legacy models, e.g., OpenAI’s CLIP models trained on internet-scale data up until 2020, perform on future data, and whether they even require re-training to adapt to time-evolving data.

We begin by comparing the robustness of OpenAI’s CLIP models to others in the OpenCLIP repository that are trained on more recently curated web datasets (e.g., LAION-5B, DataComp) containing data up until 2022 (Ilharco et al., 2021). Since there is no existing benchmark for understanding robustness to time-evolving vision-language data, we curate dynamic classification and retrieval tasks for the years 2014–2022 and evaluate different CLIP models (see Sec. 2.2 for our evaluation tasks). We make an intriguing observation that OpenAI models exhibit a significant gap in retrieval performance on data from 2021–2022 compared with 2014–2016, whereas OpenCLIP models retain their performance. In contrast, standard evaluations such as accuracy on ImageNet distribution shifts paint an incomplete picture, namely that OpenAI’s CLIP models are slightly more robust than OpenCLIP models (Fig. 1).
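As a point of reference, the dynamic retrieval comparisons above reduce to a simple zero-shot metric; here is a minimal sketch assuming precomputed, L2-normalized CLIP image/text embeddings for matched pairs (array names are illustrative, not from the paper):

```python
import numpy as np

def recall_at_1(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Image-to-text Recall@1 for a batch of paired, L2-normalized embeddings.

    image_emb, text_emb: (N, d) arrays where row i of each is a matched pair.
    """
    sims = image_emb @ text_emb.T           # (N, N) cosine similarities
    best = sims.argmax(axis=1)              # top-1 retrieved caption per image
    return float((best == np.arange(len(sims))).mean())
```

Evaluating the same model with such a metric on pools drawn from different years is what exposes the 2021–2022 gap that static ImageNet-style benchmarks miss.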
Our findings not only demonstrate the critical need for models to adapt and evolve alongside dynamic data distributions, but also underscore the limitations of relying solely on static benchmarks (e.g., ImageNet).

∗Work done during an internship at Apple. ¹Code is available at https://github.com/apple/ml-tic-clip

Figure 1: (Left, Middle) OpenAI models show less zero-shot robustness on the retrieval task from 2021–2022. OpenCLIP models and OpenAI models have similar robustness on standard benchmarks. However, OpenAI models show less robustness on our retrieval task when compared with recent models in the OpenCLIP repository, highlighting susceptibility to a time-evolving data distribution. (Right) A simple continual training baseline is computationally efficient and competitive with retraining from scratch. Different points denote models trained sequentially on our TiC-DataComp (L) as data arrives over time. Warm starting training with the previous checkpoint and replaying all old data performs similarly to the Oracle, which trains from scratch every time new data arrives, while using $2.7 \times$ less compute.

One naive but common practice for adapting to time-evolving data is to train a new CLIP model from scratch every time we obtain a new pool of image-text data. This practice has its rationale: initiating training from a pre-existing model can make it difficult to change the model’s behavior in light of new data (Ash & Adams, 2020; Achille et al., 2018; Liu et al., 2023). However, training foundation models from scratch demands significant computational resources and is often infeasible to repeat frequently. For example, ViT-g-14 in Schuhmann et al. (2022); Cherti et al. (2022) was trained for 240K A100 GPU hours, which is approximately one month on 400 GPUs. The prevailing training guidelines centered around scaling laws for CLIP training have only looked at training from scratch (Cherti et al., 2023). This leads to a pivotal question: How can we continuously update models as the data distribution evolves over time, given computational constraints?

There exists a vast literature on continual learning, with a focus on adapting models to dynamic environments (Parisi et al., 2019; Hadsell et al., 2020; De Lange et al., 2021). Traditionally, this field has concentrated on synthetic incremental benchmarks that lack natural evolution between tasks, and hence continual learning methods are seldom used in real-world scenarios (Cossu et al., 2022; Lin et al., 2021). In contrast, recent works focusing on continual learning methods for CLIP models primarily target improving performance on a single or a sequence of disjoint downstream tasks (Ding et al., 2022; Zhou et al., 2023b; Zheng et al., 2023; Ilharco et al., 2022). While some recent works have started to address these problems, existing benchmarks are comparatively much smaller in scale or lack paired image-text data (Ni et al., 2023; Lin et al., 2021). Simply put, there is a scarcity of work focusing on continual training of CLIP models on naturally evolving data with time at web-scale.

We take the first step towards Time-Continual (TiC) training of CLIP models, where the data distribution evolves naturally over time (overview in Fig. 2). We introduce TiC-DataComp, a new benchmark for Time-Continual training of CLIP models, which we create by appending “crawl time” information to the existing CommonPool dataset (Gadre et al., 2023). We also repurpose other web-scale datasets gathered from diverse sources, such as Reddit and Flickr.
Specifically, we curate TiC-YFCC and TiC-RedCaps by leveraging time information available in YFCC (Thomee et al., 2016) and RedCaps (Desai et al., 2021), respectively. The primary objective of our study on this benchmark is to develop continual learning methods that operate within a constrained computational budget (say $C$) each time a fresh batch of data becomes available. These methods compete with an Oracle, which starts training from scratch every time new data arrives, utilizing a cumulative computational budget. To assess models trained in our TiC-CLIP framework, we evaluate them on our proposed dynamic evaluation tasks that evolve with time, along with 28 standard classification and retrieval tasks including ImageNet (Krizhevsky et al., 2012), ImageNet distribution shifts, and Flickr (Plummer et al., 2015), in a zero-shot manner following the work of Gadre et al. (2023); Radford et al. (2021).

Finally, we develop continual learning methods on our benchmarks and perform over two hundred experiments with different baselines that utilize previous checkpoints (e.g., warm start, patching, and distillation), replay buffers, and learning rate schedules. Our findings highlight a key takeaway: the Cumulative method, which warm starts training with the latest checkpoint and replays all old data, achieves performance competitive with the Oracle while being $2.7 \times$ computationally more efficient. Additionally, our experiments demonstrate interesting trade-offs between buffer sizes for static and dynamic performance and provide valuable insights into learning rate schedules for sequential training. Our results span various dataset scales (from 11M samples to 3B) and highlight trends with different methods that are largely consistent across scales. To make our benchmarks accessible, we publicly release the code and the time information we collect on top of existing datasets here. Our work is just an initial step towards continual training of foundation models, and we believe our research will spur more attention to this understudied area.

2 TiC-CLIP: Benchmarks and Experimental Protocol

In this section, we introduce our benchmark (Fig. 2) focusing on the training of a vision-language foundation model with the Contrastive Language Image Pretraining (CLIP) (Radford et al., 2021) objective. Notably, we train on image-text data that arrives sequentially, unlike the conventional image-text datasets which are static (e.g., WiT in CLIP, DataComp in Gadre et al., 2023). We curate TiC-DataComp, TiC-YFCC, and TiC-RedCaps, which are image-text pairs sourced from the internet that we augment with auxiliary time information. We also introduce dynamic evaluation tasks to assess the performance of our continually trained models on data evolving with time. The goal of a learner is to train a deployable model at each step as new data becomes available with a fixed compute budget.

2.1 Benchmark Design: How we Create Time-Continual Datasets?

To instantiate continual training of CLIP, we extend existing image-text datasets with time information collected from the original source of the datasets. Our largest dataset is TiC-DataComp, which contains 12.7 billion image-text pairs with “crawl-time” metadata. We create this dataset on top of the existing DataComp benchmark (Gadre et al., 2023).
We also create TiC-YFCC and TiC-RedCaps on top of the existing YFCC15M (Thomee et al., 2016; Radford et al., 2021) and RedCaps (Desai et al., 2021) datasets to highlight that our findings are broadly applicable to carefully curated datasets from diverse sources such as Reddit and Flickr. While time-related metadata is absent in the DataComp benchmark, it is available in the original releases of YFCC and RedCaps. Nevertheless, to the best of our knowledge, no prior work utilizes such time information for continual training of CLIP models. We show dataset statistics for all datasets, e.g., the number of examples in each year, in App. C.3.

**TiC-DataComp** We collect timestamps for the CommonPool dataset introduced in DataComp, which contains 12.7B image-text pairs (not including 0.1B inaccessible ones). This dataset stands as the largest public image-text dataset to date. The source of DataComp is Common Crawl, which periodically releases web-crawled data snapshots, typically on a monthly basis since 2014, with new and updated webpages. To construct TiC-DataComp, we augment each image-text pair in DataComp with its first timestamp. We followed the same construction process as DataComp but retained only the image-text pairs found in the earliest snapshot during the deduplication stage. This process provides timestamps at the granularity of months, spanning the years 2014–2022. See App. C.7 for details on the construction process. We note that while this augmented time information may contain some noise, on average we find it to be a reasonably accurate proxy for the upload time of web pages (see App. C.7). Although our benchmark contains time information at the granularity of months, we limit our experiments to the granularity of years by consolidating the data for all months in a year. Similar to DataComp, our benchmark has an inclusive design, accommodating participants with varying levels of computational resources. In particular, we experiment with the medium, large, and xlarge sizes from CommonPool. Gadre et al. (2023) leverage different filtering strategies to select the training subset. We are concerned that filtering techniques bias the selected training data. In App. C.1, we provide preliminary evidence that “Bestpool” filtering, which uses off-the-shelf CLIP models, indeed biases the selected data towards old time steps. Nevertheless, to highlight the significance of our findings even for state-of-the-art filtering techniques, we experiment with both Bestpool and Basic filtering (no CLIP filtering) at the xlarge scale. For the large and medium scales, we only experiment with Basic filtering.

**TiC-YFCC** We experiment with the 15M subset of YFCC100M (Thomee et al., 2016), namely YFCC15M, selected by OpenAI (Radford et al., 2021). This filtering retains only images with natural text in captions. YFCC100M contains data from the years 2008–2014 and was originally released with upload timestamps. We use this information to create continual splits at the granularity of years.

**TiC-RedCaps** RedCaps contains 12M image-caption pairs from a manually curated set of subreddits across 2011–2020 (Desai et al., 2021). We use the creation timestamps of the posts to create splits for continual learning. Similar to the other two datasets, we experiment at the granularity of years.

### 2.2 Evaluation Testbed

**Dynamic tasks** We leverage the temporal information in our benchmarks to create dynamic evaluation tasks. Here, the test data comprises samples varying over years as the world evolved.
For our largest dataset, TiC-DataComp, we create dynamic tasks for both retrieval and classification as described below (examples in Fig. 3 and additional examples in App. C.5):

I. **Dynamic retrieval task**: To create a retrieval task, we sample a batch of IID image-text pairs from different timestamps and evaluate text retrieval performance given the corresponding image (and similarly, image retrieval given the corresponding text). We refer to this dataset as TiC-DataComp-Retrieval.

II. **Dynamic classification task**: We also create a classification dataset, TiC-DataComp-Net, with ImageNet classes from CommonPool augmented with timestamps. Inspired by LAIONNet (Shirali & Hardt, 2023), we first filter examples where the corresponding caption contains one and only one of the synsets of ImageNet. Then we only retain examples where the similarity between the ImageNet synset definition and the caption exceeds a threshold of 0.5. We evaluate the similarity using an off-the-shelf sentence embedding model (Reimers & Gurevych, 2019). Crucially, unlike LAIONNet, we do not filter the image-text pairs with CLIP similarity scores, to avoid biasing the selection process. We describe the construction process in more detail in App. C.5. On TiC-DataComp-Net, we report average accuracy over all classes and over selected nodes (e.g., motor vehicles) at each time step.

Similarly, we create retrieval tasks for TiC-YFCC and TiC-RedCaps. Note that we remove the image-text pairs extracted for the dynamic retrieval and classification tasks from the training sets. Evaluations on dynamic tasks are done in a zero-shot manner.

**Static tasks** We also evaluate models on numerous classification and retrieval tasks in a zero-shot manner as in Radford et al. (2021). In particular, we consider 28 standard tasks: 27 image classification tasks, e.g., ImageNet and its 6 distribution shifts (e.g., ImageNetv2, ImageNet-R, ImageNet-Sketch, and ObjectNet), datasets from VTAB, and the Flickr30k retrieval task. We refer to these as static evaluation tasks. We list all the datasets in App. C.2.

**Evaluation metrics** We define metrics for classification tasks and retrieval tasks based on accuracy and Recall@1, respectively. Let $T$ represent the number of time steps for which we have data. For each training method, we generate a total of $T$ models, each corresponding to the end of training at a particular time step. For static datasets (e.g., ImageNet), we report the average performance of the $T$ models. However, when dealing with dynamic evaluation datasets, we assess the performance of each of the $T$ models on evaluation datasets collected at all time steps. Consequently, for each model and each dynamic evaluation task, we obtain $T$ performance values. We represent these values using the performance matrix $\mathcal{E}$, where each entry $\mathcal{E}_{i,j}$ signifies the performance of the model obtained after observing training data at time step $i$ when evaluated on a dataset from time step $j$.
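For concreteness, here is a minimal NumPy sketch of the three standard summaries of $\mathcal{E}$ that are defined formally in the next paragraph; the function name and storage format are illustrative assumptions:

```python
import numpy as np

def summarize(E: np.ndarray) -> dict:
    """Summarize a T x T performance matrix E (assumes T >= 2), where
    E[i, j] is the performance of the model trained through step i
    when evaluated on the test set from step j."""
    T = E.shape[0]
    diag = np.eye(T, dtype=bool)
    lower = np.tril(np.ones((T, T), dtype=bool), k=-1)  # j < i: earlier steps
    upper = np.triu(np.ones((T, T), dtype=bool), k=1)   # j > i: later steps
    return {
        "in_domain": E[diag].mean(),           # each model on its own step
        "backward_transfer": E[lower].mean(),  # each model on past steps
        "forward_transfer": E[upper].mean(),   # each model on future steps
    }
```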
The performance matrix $\mathcal{E}$ can also be succinctly summarized using three standard metrics commonly employed in continual learning evaluations (Lin et al., 2021; Díaz-Rodríguez et al., 2018):

- **In-domain performance**: average performance at each training time step (i.e., the diagonal of $\mathcal{E}$)
- **Backward transfer**: average performance on time steps before each training step (i.e., the lower triangular of $\mathcal{E}$)
- **Forward transfer**: average performance on time steps following each training step (i.e., the upper triangular of $\mathcal{E}$)

The backward transfer metric as defined above can be disproportionately influenced by later evaluation time steps (refer to App. F for details). Therefore, in App. F, we also present results using revised metrics that mitigate this issue.

While the static tasks capture performance on standard benchmarks, the dynamic tasks capture problems due to distribution shift (for forward transfer) and forgetting (for backward transfer). The goal in our benchmark is to develop continual learning methods that maximize performance on static tasks while simultaneously optimizing for performance on dynamic tasks.

2.3 Experimental Protocol For Training

**Streaming protocol** We follow a streaming protocol, where data is progressively revealed to the learner in large batches, with the objective of achieving a deployable model as early as possible after each batch arrives. We conduct experiments with data streaming at the granularity of years, and our benchmark supports future research at the granularity of months. Additionally, as the amount of data from earlier time steps is limited (see App. C.3), we aggregate data from the earlier time steps into a single larger batch and timestamp it by the latest year in the range. After this aggregation, we have 7 time steps for TiC-DataComp (2016–2022) and 4 for both TiC-YFCC (2011–2014) and TiC-RedCaps (2017–2020). While the numbers of image-text pairs revealed at each time step are of similar orders of magnitude, the exact number does vary across steps, and we do not artificially alter the sizes.

**Memory budget** We allow methods to use the last model checkpoint at each step, as the cost of keeping one checkpoint per month is often negligible. In contrast, the cost of retaining old data can be high and might not be permitted due to data expiration policies. Thus, along with studying methods that retain all old data, we also explore strategies that restrict data persistence (see Sec. 3 for details).

**Compute budget** To ensure a fair comparison between methods, we establish a consistent total compute budget, quantified in terms of Multiply-Accumulate operations (MACs), and allocate it evenly for training at every time step. Unless specified otherwise, for all methods except Oracle and LwF, we use the same compute budget. For experiments on TiC-DataComp, we refer to the compute configurations in DataComp for the overall compute. For TiC-RedCaps and TiC-YFCC, we use a compute budget on the order of the medium scale in TiC-DataComp. Compute budget details are in App. C.4.

2.4 Analyzing Distribution Shifts in the Constructed Benchmarks

**TiC-DataComp analysis through the lens of constructed evaluation tasks** First, we qualitatively analyze the examples in our retrieval and classification datasets (Fig. 3). We observe that, over time, new concepts like COVID-19 emerge in the retrieval task. Likewise, certain ImageNet classes evolve, such as the shift from “masquerade” masks to “surgical/protective” masks in their definitions.
Moreover, as time evolves, we observe that image quality improves and more images tend to appear in the wild, in contrast to centered images with white backgrounds. Next, we compare the performance of OpenAI and OpenCLIP models on our datasets. Here, we only present the main findings and delegate a detailed discussion to App. C.6. We observe a significant performance gap between OpenAI and OpenCLIP models on our dynamic retrieval task (Fig. 1). This gap widens notably on retrieval queries where captions mention COVID-19. On the other hand, OpenAI and OpenCLIP models exhibit similar robustness for retrieval on data coming from Flickr, highlighting that data from some domains do not exhibit shifts that cause performance drops. For our classification task, we observe a very small drop (∼ 1%) when averaged across all categories. However, we observe a substantial gap on specific subtrees in ImageNet. For example, classes in the “motor vehicle” subtree show an approximately 4% performance drop when comparing OpenAI and OpenCLIP models. These findings highlight that while ImageNet classes overall may remain timeless, certain categories tend to evolve faster than others. Our qualitative and quantitative analysis on TiC-DataComp clearly highlights the evolution of distributions and captures different properties than standard benchmarks.

**Quantitative analysis on TiC-YFCC** We analyze TiC-YFCC using off-the-shelf sentence and image encoders. We first embed images from different time steps with an OpenAI CLIP encoder and then compute the Fréchet Inception Distance (FID; Seitzer (2020)). As time progresses, we observe that the FID distance with respect to data from the first time step increases (Fig. 18 in App. C.6). Similarly, we use a pretrained sentence transformer to extract the top-5 WordNet Noun categories for each caption. We observe that the total variation (TV) distance between the distribution of WordNet Nouns at each step and that of the first time step grows over time. More details are in App. C.6.

3 TiC-CLIP: How to Continually Train CLIP Models?

In this section, we lay out different methods, specifically focusing on the following questions (Tab. 1): (i) how to utilize/replay data from previous time steps; (ii) how to leverage previously trained model checkpoints; and (iii) what the training/optimization procedure should be.

Data replay methods initialized from the last checkpoint demonstrate strong performance on standard continual learning benchmarks (Sec. 5). We consider replay methods with/without initialization from the last checkpoint(s):

I. Oracle: Train a CLIP model from scratch (i.e., random initialization) on all image-text data received till time $t$ using a large compute budget of $t \times C$. Oracle represents a prohibitively expensive method that is the most common practice in training large-scale foundation models. The goal of other methods is to perform as close as possible to the Oracle within their limited budget.

II. Cumulative: Train each model initialized from the last checkpoint on the union of all data up to $t$ with compute budget $C$. This method is analogous to experience replay (Robins, 1995; Hayes et al., 2019) but with substantially larger buffers than is common in the continual learning literature. Given a fixed buffer size for each past step, we observe minimal to no difference between random subsampling and other strategies. After sampling the replay data, we randomly shuffle it together with the new data for training. We consider the following strategies for sampling buffer sizes per step:

• -All: Replay all previous data.
• -Exp: Replay a buffer of size $D$ and reduce the amount of old data by half at each step. For example, at the third time step, we retain $D/2$, $D/2$ of old data, and at the fourth, we retain $D/4$, $D/4$, $D/2$ of old data. Along with the $D$ data from the current step, this method trains on at most $2D$ data in each step.

• -Equal: Replay a buffer of size $D$ but split the buffer equally among all previous years. For example, at the fourth step, we retain $D/3$, $D/3$, $D/3$ of old data. Along with the $D$ data from the current time step, this method trains on at most $2D$ data in each step.

III. Sequential: Train only on the new data, starting from the best checkpoint of the previous time step. Sequential is similar to Cumulative but without any replay buffer.

IV. Restart: Train each model from scratch (i.e., random initialization) on all the data till time $t$ for compute budget $C$. Restart is similar to the Oracle but with compute budget $C$ at each time step, and similar to Sequential but with random initialization. As such, Restart helps us understand forward transfer and loss of plasticity in our benchmark (Ash & Adams, 2020; Dohare et al., 2023).

Table 1: Summary of our methods. $D$: data size in each step; $T$: total time steps; $t$: current time step; $C$: compute budget (iterations).

| Method | Data | Init. | Compute (each step) | Compute (total) |
|------------------|------|------------|---------------------|-----------------------|
| Cumulative-All | $tD$ | Last | $C$ | $TC$ |
| Cumulative-Exp | $2D$ | Last | $C$ | $TC$ |
| Cumulative-Equal | $2D$ | Last | $C$ | $TC$ |
| Sequential | $D$ | Last | $C$ | $TC$ |
| Restart | $tD$ | Rand | $C$ | $TC$ |
| Patching | $D$ | Last Patch | $C$ | $TC$ |
| LwF | $D$ | Last | $1.2 \times C$ | $1.2 \times TC$ |
| Oracle** | $tD$ | Rand | $tC$ | $\frac{T(T+1)}{2} C$ |

Table 2: Zero-shot performance on our time-continual benchmarks. * and ** denote methods that violate the compute budget. For static tasks, we tabulate the accuracy of the models obtained at the final timestamp. For dynamic tasks, we tabulate forward/backward transfer and ID performance on retrieval tasks (Sec. 2.3). For TiC-DataComp (XL), we include results with Bestpool filtering (Basic filtering in Table 5). For all metrics, higher is better. Static-task columns: ImageNet, ImageNet dist. shift, Flickr30k, and the average over 28 datasets; dynamic-retrieval columns: backward transfer, ID performance, and forward transfer.

| Benchmark | Method | Compute (MACs) | ImageNet | ImageNet dist. shift | Flickr30k | Avg. 28 datasets | Backward transfer | ID performance | Forward transfer |
|---|---|---|---|---|---|---|---|---|---|
| TiC-YFCC | Restart | $3.4 \times 10^8$ | 5.2 | 3.6 | 3.0 | 12.9 | 18.2 | 41.7 | 18.6 |
| | Sequential | $3.4 \times 10^8$ | 17.3 | 10.7 | 15.9 | 21.9 | 42.2 | 48.4 | 23.7 |
| | Patching | $3.4 \times 10^8$ | 18.9 | 11.3 | 18.5 | 23.3 | 44.7 | 53.4 | 24.5 |
| | Cumulative-Exp | $3.4 \times 10^8$ | 24.1 | 14.3 | 20.4 | 25.9 | 60.4 | 60.1 | 27.1 |
| | Cumulative-Equal | $3.4 \times 10^8$ | 23.9 | 13.8 | 20.5 | 26.3 | 60.4 | 60.1 | 27.1 |
| | Cumulative-All* | $3.4 \times 10^8$ | 29.3 | 17.6 | 26.8 | 29.6 | 66.4 | 60.2 | 27.6 |
| | LwF* | $4.1 \times 10^8$ | 19.3 | 9.1 | 14.7 | 21.6 | 36.7 | 56.0 | 22.2 |
| | Cumulative-All** | $3.6 \times 10^8$ | 29.2 | 17.5 | 27.4 | 29.3 | 66.8 | 60.3 | 27.6 |
| | Oracle** | $8.5 \times 10^8$ | 29.2 | 17.0 | 25.9 | 29.0 | 66.1 | 61.8 | 26.9 |
| TiC-RedCaps | Restart | $3.4 \times 10^8$ | 11.7 | 8.5 | 3.7 | 18.4 | 21.3 | 25.4 | 22.4 |
| | Sequential | $3.4 \times 10^8$ | 19.3 | 13.7 | 6.2 | 25.8 | 33.0 | 33.6 | 27.5 |
| | Patching | $3.4 \times 10^8$ | 21.3 | 15.2 | 7.7 | 26.8 | 34.8 | 34.8 | 27.8 |
| | Cumulative-Exp | $3.4 \times 10^8$ | 27.5 | 19.1 | 10.5 | 30.0 | 44.5 | 42.0 | 32.6 |
| | Cumulative-Equal | $3.4 \times 10^8$ | 27.8 | 19.4 | 10.0 | 30.5 | 44.4 | 42.0 | 32.6 |
| | Cumulative-All* | $3.4 \times 10^8$ | 31.2 | 16.7 | 14.5 | 31.7 | 48.9 | 49.2 | 38.4 |
| | LwF* | $4.1 \times 10^8$ | 21.6 | 14.8 | 8.2 | 27.3 | 35.4 | 36.0 | 28.4 |
| | Cumulative-All** | $3.6 \times 10^8$ | 32.9 | 23.7 | 14.1 | 32.9 | 49.0 | 43.4 | 33.4 |
| | Oracle** | $8.5 \times 10^8$ | 32.7 | 22.7 | 14.3 | 32.3 | 48.5 | 43.1 | 33.4 |
| TiC-DataComp (M) | Sequential | $3.0 \times 10^8$ | 19.2 | 16.4 | 16.4 | 26.0 | 25.7 | 26.4 | 14.9 |
| | Patching | $3.0 \times 10^8$ | 19.3 | 16.8 | 18.5 | 26.1 | 26.9 | 25.1 | 14.5 |
| | Cumulative-Exp | $3.0 \times 10^8$ | 22.1 | 18.4 | 20.4 | 28.8 | 31.7 | 27.1 | 15.2 |
| | Cumulative-Equal | $3.0 \times 10^8$ | 22.1 | 18.4 | 19.2 | 28.0 | 31.8 | 26.8 | 15.1 |
| | Cumulative-All* | $3.0 \times 10^8$ | 24.0 | 20.2 | 20.0 | 30.0 | 33.8 | 28.1 | 15.1 |
| | LwF* | $3.8 \times 10^8$ | 19.2 | 16.5 | 17.7 | 27.0 | 25.6 | 26.6 | 14.9 |
| | Cumulative-All** | $3.9 \times 10^8$ | 30.0 | 25.0 | 28.6 | 35.1 | 36.7 | 28.3 | 15.5 |
| | Oracle** | $1.2 \times 10^9$ | 25.5 | 21.2 | 23.3 | 30.8 | 34.9 | 27.8 | 15.6 |
| TiC-DataComp (L) | Sequential | $2.7 \times 10^8$ | 44.7 | 37.4 | 48.4 | 45.7 | 52.6 | 58.4 | 41.1 |
| | Patching | $2.7 \times 10^8$ | 45.8 | 38.9 | 49.7 | 46.9 | 55.2 | 57.5 | 40.9 |
| | Cumulative-Exp | $2.7 \times 10^8$ | 47.3 | 39.6 | 50.8 | 47.6 | 60.4 | 58.4 | 41.4 |
| | Cumulative-Equal | $2.7 \times 10^8$ | 47.7 | 40.3 | 51.8 | 47.7 | 60.9 | 58.2 | 41.4 |
| | Cumulative-All* | $2.7 \times 10^8$ | 50.9 | 41.3 | 50.9 | 48.6 | 62.1 | 57.3 | 41.2 |
| | Cumulative-All** | $4.1 \times 10^8$ | 53.0 | 44.3 | 54.4 | 51.3 | 63.0 | 57.8 | 41.2 |
| | Oracle** | $1.1 \times 10^9$ | 53.6 | 44.0 | 53.9 | 50.4 | 64.3 | 58.6 | 41.8 |
| TiC-DataComp (XL) | Sequential | $2.7 \times 10^9$ | 66.5 | 51.2 | 61.2 | 61.0 | 63.1 | 68.9 | 56.8 |
| | Cumulative-All | $2.7 \times 10^9$ | 71.6 | 58.8 | 65.1 | 64.8 | 70.7 | 68.5 | 57.1 |
| | Cumulative-All* | $3.5 \times 10^9$ | 72.8 | 60.4 | 66.5 | 66.7 | 71.0 | 68.6 | 57.1 |
| | Oracle** | $1.1 \times 10^{10}$ | 73.3 | 61.3 | 68.0 | 65.8 | - | - | - |

V. Patching: We use sequential patching from Ilharco et al. (2022): initialize from the patched model of the last step and train only on the new data.
To obtain a patched model at each time step, we apply weight interpolation between the patched model (if any) trained at time step $t - 1$ and the model trained at time step $t$. We tune the mixing coefficients by optimizing average retrieval performance on previous tasks.

VI. LwF: Train only on the new data, with a KL-divergence penalty between the image-text similarity matrices of the last checkpoint and the current model on each batch (Li & Hoiem, 2017; Ding et al., 2022). See App. E for results with other continual learning methods, e.g., EWC (Kirkpatrick et al., 2017).

**Learning rate schedule** The de facto learning rate (LR) schedule for training CLIP models is an initial linear increase to a maximum value, i.e., warm-up, followed by a cosine decay (Radford et al., 2021; Gadre et al., 2023). We default to using a cosine LR schedule for each sequential run, resulting in a cyclic schedule, and observe a significant increase in training loss early in subsequent runs when the LR is high. However, as training progresses, the increased loss decreases at a faster rate (compared to training from scratch), allowing us to train with cyclic schedules. We discuss this more and explore an alternate learning rate schedule in App. B.3.

**Other training details and hyperparameters** Unless specified otherwise, we closely follow the original CLIP training recipe (Radford et al., 2021). We train the CLIP variant with ViT-B/16 as the image encoder (Dosovitskiy et al., 2020). All training details and hyperparameters can be found in App. D.2.

4 EXPERIMENTS AND MAIN RESULTS

Our main results are in Table 2, and more detailed plots for each dataset are in App. B.1. Recall that our goal is to compete with an Oracle that re-trains from scratch every time new data is observed, on both dynamic and static tasks, while being computationally efficient. Here, we summarize our key findings:

Figure 4: (Left) Dynamic and static evaluations rank models differently. Models with similar performance on static datasets have > 6% difference on the retrieval task from 2021–2022 on TiC-DataComp (L). Different points denote models trained sequentially over time. (Right) Performance of the Oracle on future time steps drops, highlighting distribution shift in the dataset. Each row evaluates the Oracle trained on TiC-DataComp (L) at a particular time step across all dynamic retrieval tasks.

**Cumulative-All saves up to $4 \times$ the cost.** On dynamic evaluation tasks, we observe that Cumulative-All, where we replay all the past data, achieves performance close to the Oracle (within 1%) using significantly less compute ($4 \times$ less on TiC-DataComp and $2.5 \times$ less on TiC-YFCC and TiC-RedCaps). On static tasks, the gap remains small at small scales but grows to 4.7% on large, 1.8% on xlarge Bestpool, and 4% on xlarge Basic (see Table 2 and Table 5). In these cases, training Cumulative models with slightly extra compute bridges the gap while remaining at least $2.7 \times$ more computationally efficient (see rows with * in Table 2). This highlights that with unconstrained access to past data, we can simply train sequentially and save significant computational resources.

**At scale, Sequential has strong forward transfer but lags on static tasks.** On TiC-YFCC and TiC-RedCaps, which are at the smallest scale, we observe a significant gap (> 10%) between Sequential (with no data replay) and the Oracle on all tasks. On the other hand, on all scales in TiC-DataComp, Sequential shows strong performance on forward transfer and ID dynamic evaluations.
However, on static tasks and backward transfer evaluations, Sequential significantly underperforms the Oracle.

**Patching and LwF improve over Sequential but lag behind Cumulative-All.** On static tasks, LwF improves over Sequential by 2%, while on dynamic tasks, LwF improves backward transfer by 7% on TiC-DataComp (M). However, its computation cost is higher than even that of Cumulative-All*, which outperforms LwF on all tasks. Patching improves over Sequential on backward transfer on all datasets (e.g., a 5% boost on TiC-DataComp (L)), highlighting that Patching combines the benefits of the previously patched model and the new Sequential model without additional computation cost. However, such benefits do not show up on static tasks. These results hint that to continuously improve on static tasks with time, replaying old data as in Cumulative-All plays a crucial role.

**-Exp and -Equal significantly reduce the replay buffer size while maintaining static task performance and backward transfer.** Recall that -Exp and -Equal cap the replayed old data at $D$, so each step trains on at most $2D$ data. In particular, at the last time step, -Exp and -Equal reduce the buffer size by $3.5 \times$ for the TiC-DataComp datasets. While reducing the buffer sizes, these methods still achieve performance close to Cumulative-All (within 2%) on both static and dynamic tasks, with -Equal consistently better than -Exp. As we go to larger scales, e.g., from medium to large, the gap between these methods and Cumulative-All shrinks. These findings demonstrate that even a small amount of replay data from old time steps stays competitive with replaying all data and significantly improves over no replay at all.

**Warm-up helps training on data from the first time step, but hurts on subsequent time steps.** Cosine LR is commonly coupled with an initial warm-up that linearly increases the LR from zero to the maximum LR. We investigate the effectiveness of warm-up in the first versus subsequent time steps. Surprisingly, we observe that not using warm-up for subsequent training runs is strictly more beneficial than using it, on both static and dynamic tasks. In particular, on TiC-DataComp (L), we observe about a 1.5% improvement in ImageNet accuracy and a 4.3% improvement on ID dynamic retrieval when not using warm-up with Cumulative (see App. B.3). Moreover, we also ablate not using warm-up for the first training run and observe a drop of approximately 4.8% accuracy at the first time step on TiC-DataComp (L). Hence, we default to using warm-up when training on the first time step and not using it on subsequent time steps with all methods, except for training on TiC-DataComp (XL), where we add a smaller warm-up (10% of the warm-up iterations used in the first step) to stabilize training.

**The same maximum LR works best across all runs when using a cosine schedule.** We ablate on TiC-DataComp (M) to investigate how to change the LR after training on data from the first time step. Unlike conventional pretraining and finetuning settings, where the LR is typically decreased for subsequent training, we observe that decaying the maximum LR for subsequent steps in our setup hurts on static and dynamic tasks; consequently, we use the same maximum LR across our runs (see App. B.3).

**Filtering strategy changes the ordering of performance on static and dynamic retrieval tasks.** We observe that while Bestpool-filtered models outperform Basic-filtered models on TiC-DataComp (XL) by 6% on static tasks, they underperform by over 5% on the dynamic retrieval task (see Fig. 7).
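For concreteness, the two learning-rate findings above (warm-up only on the first step; the same maximum LR reused at every step) suggest a per-step schedule along the following lines; this is a minimal sketch with illustrative names, not the paper's exact implementation:

```python
import math

def lr_at(it: int, total_iters: int, max_lr: float,
          warmup_iters: int, first_step: bool) -> float:
    """Cosine LR for one training step of the continual run.

    Linear warm-up is applied only when training on the first time step;
    subsequent steps restart the cosine decay from the same max_lr,
    yielding the cyclic schedule described in Sec. 3.
    """
    warmup = warmup_iters if first_step else 0
    if it < warmup:
        return max_lr * (it + 1) / warmup
    progress = (it - warmup) / max(1, total_iters - warmup)
    return 0.5 * max_lr * (1.0 + math.cos(math.pi * progress))
```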
**Dynamic tasks provide complementary information for model selection compared to static tasks.** Choosing models solely based on static task performance may inadvertently select models that underperform on dynamic tasks. For example, Cumulative models that show relatively modest improvements on static tasks continue to improve by > 6% for retrieval on 2021–2022 (Fig. 4).

**Cumulative-All remains competitive with the Oracle even on ImageNet with up to 8 splits.** CLIP models are often trained for relatively few epochs and are typically not trained until they reach an “overfitting” regime. Here, we investigate how Cumulative-All performs compared to the Oracle when training is done for longer. Specifically, we assess Cumulative-All on 2, 4, and 8 IID splits, including the full dataset (see App. D.1 for details). Table 3 summarizes our key findings. Notably, even with up to 8 splits, the difference in accuracy between the Oracle and Cumulative-All remains below 0.9%. These results underscore the feasibility of continual training with Cumulative-All even on ImageNet.

### Table 3: ImageNet continual training. Accuracy of Cumulative-All for different numbers of IID splits; Cumulative-All remains close to the Oracle.

| Method | 1 (Oracle) | 2 | 4 | 8 |
|----------------|------------|------|------|------|
| Cumulative-All | 80.9 | 80.8 | 80.6 | 80.0 |

5 RELATED WORK

**Benchmarks for continual learning** Traditionally, the continual learning community has focused on domain-, class-, and task-incremental benchmarks (Hsu et al., 2018; Van de Ven & Tolias, 2019; Zhou et al., 2023a), with artificial task boundaries (e.g., Split-CIFAR, Perm-MNIST). These benchmarks are often task-specific and present minimal or no meaningful evolution between adjacent tasks. Consequently, continual learning methods are often confined to these benchmarks and seldom scale to practical real-world scenarios (Cossu et al., 2022; Lin et al., 2021). On the other hand, continual learning methods for CLIP models are primarily aimed at fine-tuning to improve performance on a single or a sequence of disjoint downstream tasks (Thengane et al., 2022; Zheng et al., 2023; Ilharco et al., 2022). Existing large-scale benchmarks for training CLIP models, e.g., DataComp (Gadre et al., 2023) and LAION-5B (Schuhmann et al., 2022), are curated to investigate methods and scaling laws for training state-of-the-art CLIP models in a single training run. In our work, we augment these existing datasets with temporal information to create benchmarks for continual pretraining of CLIP models.

**Continual learning methods** Common methods can be categorized into three categories: (i) regularization-, (ii) replay-, and (iii) architecture-based methods. Regularization methods add a penalty to keep the fine-tuned model close to its initialization and often incur additional memory/compute costs (Kirkpatrick et al., 2017; Mirzadeh et al., 2020a;b; Farajtabar et al., 2020). Data replay methods retain all or a subset of the prior data for subsequent training (Lopez-Paz & Ranzato, 2017; Rebuffi et al., 2017; Chaudhry et al., 2018). Simple replay-based baselines surpass various methods on standard benchmarks (Lomonaco et al., 2022; Balaji et al., 2020; Prabhu et al., 2020). Lastly, architecture-based methods expand the model as new tasks arrive, limiting their applicability in evolving environments without clear task boundaries (Schwarz et al., 2018; Rusu et al., 2016). In this work, we compare popular continual learning methods with simple alternatives for continual pretraining of CLIP.
6 CONCLUSION AND FUTURE WORK

We view TiC-DataComp as the initial stride toward the continual training of large-scale vision-language foundation models. We believe that our benchmark, alongside the preliminary results obtained using simple baselines, will foster future research on large-scale continual learning. There are several pivotal directions for future work: (i) compare our baselines on continually streaming data at finer granularity, e.g., streaming data at the monthly level; (ii) investigate alternate learning rate schedules (e.g., Const-Cosine as in App. B.5) that are forward-looking and better suited to continual learning; (iii) develop data filtering techniques that are more inclusive of future data; and (iv) expand our problem setup to encompass the training of other large-scale foundation models.

REFERENCES

Alessandro Achille, Matteo Rovere, and Stefano Soatto. Critical learning periods in deep networks. In *International Conference on Learning Representations*, 2018.

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022.

Jordan Ash and Ryan P Adams. On warm-starting neural network training. *Advances in Neural Information Processing Systems*, 33:3884–3894, 2020.

Yogesh Balaji, Mehrdad Farajtabar, Dong Yin, Alex Mott, and Ang Li. The effectiveness of memory replay in large scale continual learning. *arXiv preprint arXiv:2010.02418*, 2020.

Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the Camelyon17 challenge. *IEEE Transactions on Medical Imaging*, 2018. URL https://pubmed.ncbi.nlm.nih.gov/30716025/.

Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/97af07a14cacba681feacf3012730892-Paper.pdf.

Sara Beery, Elijah Cole, and Arvi Gjoka. The iWildCam 2020 competition dataset, 2020. URL https://arxiv.org/abs/2004.10340.

Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021.

Jorg Bornschein, Alexandre Galashov, Ross Hemsley, Amal Rannen-Triki, Yutian Chen, Arslan Chaudhry, Xu Owen He, Arthur Douillard, Massimo Caccia, Qixuang Feng, et al. Nevis’22: A stream of 100 tasks sampled from 30 years of computer vision research. *arXiv preprint arXiv:2211.11747*, 2022.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In *European Conference on Computer Vision (ECCV)*, 2014.
URL https://link.springer.com/chapter/10.1007/978-3-319-10599-4_29.

Zhipeng Cai, Ozan Sener, and Vladlen Koltun. Online continual learning with natural distribution shifts: An empirical study with visual data. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8281–8290, 2021.

Fabio Cermelli, Massimiliano Mancini, Samuel Rota Bulo, Elisa Ricci, and Barbara Caputo. Modeling the background for incremental learning in semantic segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9233–9242, 2020.

Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. *arXiv preprint arXiv:1812.00420*, 2018.

Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc’Aurelio Ranzato. On tiny episodic memories in continual learning. *arXiv preprint arXiv:1902.10486*, 2019.

Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. *Proceedings of the IEEE*, 2017. URL https://ieeexplore.ieee.org/abstract/document/7891544.
KPmajBxEaF
Does the proposed method work well with different numbers of frames at inference time? As the proposed 2D-3D mapping needs to aggregate information from all frames, does it consume huge memory with dense views (e.g., more than 100 frames)?
LEAP: Liberate Sparse-View 3D Modeling from Camera Poses Hanwen Jiang Zhenyu Jiang Yue Zhao Qixing Huang Department of Computer Sciences, University of Texas at Austin Project page: https://hwjiang1510.github.io/LEAP/

Figure 1: LEAP performs 3D modeling from sparse views without camera pose information. We show the capability of LEAP on real-world cases with three unposed input images; one of the inputs is shown.

ABSTRACT

Are camera poses necessary for multi-view 3D modeling? Existing approaches predominantly assume access to accurate camera poses. While this assumption might hold for dense views, accurately estimating camera poses for sparse views is often elusive. Our analysis reveals that noisy estimated poses lead to degraded performance for existing sparse-view 3D modeling methods. To address this issue, we present LEAP, a novel pose-free approach, thereby challenging the prevailing notion that camera poses are indispensable. LEAP discards pose-based operations and learns geometric knowledge from data. LEAP is equipped with a neural volume, which is shared across scenes and is parameterized to encode geometry and texture priors. For each incoming scene, we update the neural volume by aggregating 2D image features in a feature-similarity-driven manner. The updated neural volume is decoded into the radiance field, enabling novel view synthesis from any viewpoint. On both object-centric and bounded scene-level datasets, we show that LEAP significantly outperforms prior methods when they employ predicted poses from state-of-the-art pose estimators. Notably, LEAP performs on par with prior approaches that use ground-truth poses while running $400\times$ faster than PixelNeRF. We show that LEAP generalizes to novel object categories and scenes, and that it learns knowledge that closely resembles epipolar geometry.

1 INTRODUCTION

In 3D vision, camera poses offer powerful explicit geometric priors to connect 3D points and 2D pixels (Zisserman, 2001). Their effectiveness has been verified across a spectrum of 3D vision tasks (Goesele et al., 2006; Geiger et al., 2011), enabling high-quality 3D modeling (Mildenhall et al., 2020; Wang et al., 2021a). However, accurate camera poses are not always available in the real world, and inaccurate poses lead to degraded performance (Lin et al., 2021). To obtain accurate camera poses, one solution is capturing dense views and applying structure-from-motion techniques (Schönberger & Frahm, 2016). Nevertheless, in real-world scenarios, like product images in online stores, we usually observe sparse images captured by wide-baseline cameras. For sparse views, estimating accurate camera poses is still challenging (Zhang et al., 2022a). A question then arises: is using noisy estimated camera poses still the best choice for 3D modeling from sparse and unposed views?

In this paper, we present LEAP, which champions a pose-free paradigm. Instead of pursuing a more accurate camera pose estimator, LEAP challenges the prevailing notion that camera poses are indispensable for 3D modeling. LEAP abandons any operations that explicitly use camera poses, e.g., projection, and learns the pose-related geometric knowledge/representations from data. Thus, LEAP is entirely liberated from camera pose errors during inference, leading to better performance. LEAP specifically represents each scene as a neural radiance field, which is predicted in a single feed-forward step. To initialize the radiance field, we introduce a neural volume, which is shared across all scenes.
Each voxel grid of the volume is parameterized to learn geometry and texture priors from data. For any incoming scene, the neural volume queries the input 2D image features and gets updated through aggregation. Instead of using camera poses to identify source 2D pixels to aggregate (Yu et al., 2021), LEAP leverages attention to aggregate all 2D image features with adaptive weights. Subsequently, LEAP performs spatially-aware attention on the updated neural volume to capture long-range geometry dependencies. We iterate this process of aggregation and 3D reasoning, resulting in a refined neural volume, which is then decoded into the radiance field.

An important issue is which reference 3D coordinate frame we should use to define the neural volume. A good choice of this 3D coordinate frame can significantly stabilize and enhance learning (Qi et al., 2017; Deng et al., 2021). As the world coordinate frame for an unposed image set is not well-defined, we instead use a local coordinate frame. Specifically, we choose an arbitrary input image as the canonical view, and the neural volume is defined in its corresponding local camera coordinate frame. The camera pose of the canonical view is fixed, e.g., as an identity pose, in the local camera coordinate frame. To make the model aware of the choice of canonical view, we find that the key is to make the 2D image features of non-canonical views consistent with the canonical view. Thus, we design a multi-view encoder to improve this consistency by capturing cross-view 2D correlations. During training, the canonical view is randomized among all input views.

We train LEAP with a 2D rendering loss on the input views, using ground-truth camera poses to render them. Note that these ground-truth camera poses are only used during training to learn the mapping from input images to the neural volume. During inference, LEAP predicts the radiance field without reliance on any poses.

We perform a thorough evaluation of LEAP on a diverse array of object-centric (Wu et al., 2023; Jiang et al., 2022; Deitke et al., 2022) and scene-level (Jensen et al., 2014) datasets. This assessment spans multiple data scales and incorporates both synthetic and real images. Experimental results highlight four interesting properties of LEAP:

- **Superior performance.** LEAP consistently synthesizes novel views from $2 \sim 5$ unposed images. It surpasses prior generalizable NeRFs when they use camera poses predicted by state-of-the-art pose estimators, and it performs on par with methods using ground-truth camera poses.
- **Fast inference speed.** LEAP constructs the radiance field in a feed-forward manner without optimization, running within one second on a single consumer-grade GPU.
- **Strong generalization capability.** LEAP models novel-category objects accurately. The model learned on large object-centric datasets transfers well to the scene-level DTU dataset.
- **Interpretable learned priors.** While LEAP does not explicitly use camera poses by design, it acquires priors consistent with epipolar geometry.

We are committed to releasing code for reproducibility and future research.

## 2 RELATED WORK

### NeRF from sparse views with ground-truth camera poses.

NeRF variants that work on sparse view inputs can be categorized into two genres. The first is **scene-specific NeRFs**. Following the original NeRF setting, these methods optimize the radiance field for each scene from scratch.
They use additional information to regularize NeRF optimization, e.g., normalization flow (Niemeyer et al., 2022) and frequency regularization (Yang et al., 2023). The second is **generalizable NeRF variants** (Yu et al., 2021; Wang et al., 2021b; Chen et al., 2021a), which predict the radiance field in a feed-forward manner. The key is making the radiance fields conditioned on the 2D image features. Typically, these approaches project 3D points onto the input images using camera poses and aggregate information from the image features at the projected 2D locations. Thus, they are generalizable and transferable to novel scenes when trained on curated datasets. However, these methods lack reasoning about correlations between 3D points and assume access to ground-truth camera poses. In contrast, LEAP has 3D reasoning ability and works on images without poses.

### Sparse-view camera pose estimation.

Estimating camera poses from sparse views is significantly more challenging than from dense views. The complexity arises from the minimal or absent overlap between images, which hampers the formation of cross-view correspondence cues vital for accurate camera pose estimation (Zisserman, 2001). RelPose (Zhang et al., 2022a) highlights the limitations of conventional dense-view camera pose estimation techniques, e.g., COLMAP (Schönberger & Frahm, 2016), in sparse-view contexts. In response, it introduces an energy-based model to handle the multi-modal distribution of plausible camera poses. A subsequent method (Lin et al., 2023) further develops this approach by harnessing multi-view information to enhance pose estimation accuracy. Concurrently, SparsePose employs a pre-trained foundation model, namely DINO (Caron et al., 2021), to iteratively refine noisy camera pose predictions. Besides, researchers have also explored directional representations (Chen et al., 2021b), stronger image matching techniques (Sun et al., 2021), image matching priors (Rockwell et al., 2022), and co-visibility (Hutchcroft et al., 2022) to improve performance. In contrast, our LEAP operates without any dedicated camera pose estimation module.

### NeRF with imperfect or no camera poses.

Building NeRF from images without precise camera poses is challenging, given that many NeRF variants rely on pose-based geometric operations. To tackle this problem, scene-specific NeRFs (Lin et al., 2021; Wang et al., 2021c; Xia et al., 2022; Bian et al., 2022; Zhang et al., 2022b; Meng et al., 2021) treat camera poses as modifiable parameters, jointly optimizing them alongside the radiance field. Yet, these methods require dense views, and they either need reasonably accurate initial poses or assume small-baseline cameras. SPARF (Truong et al., 2022) leverages dense 2D image correspondences derived from existing models to augment radiance field optimization; nevertheless, its efficacy heavily hinges on the precision of both the dense correspondences and the initial poses. For generalizable NeRF variants, SRT (Sajjadi et al., 2022b) proposes a pose-free paradigm that builds a 2D latent scene representation, but SRT is not 3D-aware, and its novel view synthesis quality is limited. RUST further removes the need for a target camera pose by prompting the model with a partial target image (Sajjadi et al., 2022a). FORGE (Jiang et al., 2022) jointly estimates camera poses and predicts the radiance field, leveraging their synergy to improve the performance of both.
However, its performance is sensitive to pose estimation precision, and training FORGE in multiple stages is non-trivial. In contrast, our proposed LEAP benefits from 3D-aware designs and leans on a feature-similarity-driven 2D-3D information mapping. This approach eliminates reliance on camera poses during inference, yielding results more closely aligned with those obtained using ground-truth poses.

## 3 OVERVIEW

We focus on novel view synthesis from sparse views of a scene without camera poses. Prior approaches have adapted NeRF to sparse views under the assumption of accurate camera poses (Yu et al., 2021; Chen et al., 2021a; Niemeyer et al., 2022; Yang et al., 2023). Concurrently, improved camera pose estimation methods for sparse images have emerged (Zhang et al., 2022a). However, preliminary results for combining these efforts indicate a potential incompatibility: minor pose estimation inaccuracies can significantly degrade the quality of synthesized views in NeRF (Truong et al., 2022; Jiang et al., 2022; Sinha et al., 2022).

We first diagnose the limitations of existing approaches. As illustrated in Fig. 2, existing generalizable NeRF approaches (Yu et al., 2021; Wang et al., 2021b) rely on camera poses to perform the 2D-3D information mapping. Specifically, these methods project a 3D point to its single corresponding 2D location in each of the input images based on camera poses, and aggregate features at these projected locations. Consequently, any pose inaccuracy distorts this 3D-2D association, leading to compromised 3D point features, which are used to predict the radiance.

Figure 3: LEAP overview. LEAP extracts image features of all inputs using a ViT backbone. The first image in the image set is selected as the canonical view. The neural volume is defined in the local camera coordinate frame of the canonical view and has learnable parameters that encode geometry and texture priors. To make LEAP aware of the choice of canonical view, we use a Multi-View Encoder to propagate information from the canonical view to the non-canonical views, making the 2D representations more coherent across views. The neural volume is then updated by querying the 2D image features of all images using a 2D-3D Information Mapping module. We decode the neural volume into the radiance field and render novel views at inference time.

In contrast, LEAP proposes a novel pose-free paradigm, eliminating the influence of any pose inaccuracies. At its core, LEAP establishes the 3D-2D association based on feature similarities, enabling a 3D point to aggregate information from all pixels rather than from its 2D projection only. For each 3D voxel grid, the pose-free aggregation learns to adaptively assign larger weights to its corresponding 2D pixels. We introduce the details of the LEAP architecture in the following section.

## 4 METHOD

We now present the task formulation and an overview of LEAP. Given a set of $k$ 2D image observations of a scene, represented as $\{I_i | i = 1, \ldots, k\}$, LEAP predicts a neural radiance field (Mildenhall et al., 2020), which enables synthesizing a 2D image from an arbitrary target viewpoint. Note that in our setup of sparse source views captured by wide-baseline cameras, the number $k$ is typically less than 5. Moreover, these views are presented without any associated camera pose information at inference.

### 4.1 MODEL ARCHITECTURE

As illustrated in Fig. 3, LEAP starts by extracting 2D image features from all views.
We use a DINOv2-initialized ViT (Oquab et al., 2023; Dosovitskiy et al., 2020) as the feature extractor since it demonstrates strong capability in modeling cross-view correlations (Zhang et al., 2023). We denote the image features of $I_i$ as $f_i \in \mathbb{R}^{h \times w \times c}$, and the resulting feature set for all input views as $\{f_i | i = 1, \ldots, k\}$.

Since the world coordinate frame is not defined for unposed images, we perform 3D modeling in a local camera coordinate frame. Specifically, we designate one image as the canonical view, in whose local coordinate frame the neural volume and radiance field are defined. During training, we randomly select the canonical view and denote it as $I_1$ for notational clarity. To make LEAP aware of the choice of the canonical view, we find that the key is to make the features of the non-canonical views consistent with the canonical view, and we propose a multi-view image encoder to improve this feature consistency. LEAP then introduces a learnable neural volume, shared across scenes, to encode geometric and textural priors; it serves as the initial 3D representation for all scenes. For each incoming scene, LEAP maps the 2D information to the 3D domain by querying the multi-view features, yielding an updated neural volume. Finally, LEAP predicts the radiance field from the updated neural volume. We describe each step as follows.

**Multi-view Image Encoder** makes LEAP aware of the choice of the canonical view by performing multi-view information reasoning. It takes in the image features of all views and refines them by capturing cross-view correlations. It consists of $n_e$ blocks, each with two layers: a Non-canonical View Update (NVU) layer and a Global Consensus Reasoning (GCR) layer. The NVU layer updates each non-canonical view's features by aggregating the canonical view features; it is denoted as $\tilde{f}_j = \text{NVU}(f_j, f_1)$, where $j \neq 1$ and $\tilde{f}$ denotes the updated features. The GCR layer performs joint reasoning over all views for a global consensus, leveraging the correlations between all views. We implement the two layers with Transformer layers (Vaswani et al., 2017). Specifically, the NVU layer is modeled as a Transformer layer with cross-attention, where the non-canonical view features are queries and the canonical view features are keys/values. It is formulated as

$$[\tilde{f}_2, \ldots, \tilde{f}_k] = \text{FFN}(\text{CrossAttention}([f_2, \ldots, f_k], f_1)),$$

where FFN is a feed-forward network and $[\cdot]$ denotes concatenation over tokenized image features. For clarity, $[f_2, \ldots, f_k]$ is in $\mathbb{R}^{(k-1)hw \times c}$ and $f_1$ is flattened into $\mathbb{R}^{hw \times c}$. Similarly, the GCR layer is instantiated by a Transformer layer with self-attention, where the query, key, and value are the 2D image features of all views (with $\tilde{f}_1 = f_1$, since the NVU layer does not update the canonical view). It is formulated as

$$[\tilde{f}_1, \ldots, \tilde{f}_k] \leftarrow \text{FFN}(\text{SelfAttention}([\tilde{f}_1, \tilde{f}_2, \ldots, \tilde{f}_k])),$$

where the arrow denotes an in-place update and $[\tilde{f}_1, \ldots, \tilde{f}_k]$ is in $\mathbb{R}^{khw \times c}$. For simplicity, we denote the final output $[\tilde{f}_1, \ldots, \tilde{f}_k]$ as $F$.
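To make the block structure concrete, here is a minimal PyTorch sketch of one multi-view encoder block. The layer width, head count, residual connections, and the use of `nn.MultiheadAttention` are our assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class MultiViewEncoderBlock(nn.Module):
    """One NVU + GCR block: non-canonical views cross-attend to the
    canonical view, then all views jointly self-attend for consensus."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.nvu_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.nvu_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim))
        self.gcr_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gcr_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim))

    def forward(self, feats):
        # feats: (B, k, hw, c) tokenized view features; view 0 is canonical.
        B, k, hw, c = feats.shape
        f1 = feats[:, 0]                               # (B, hw, c)
        fj = feats[:, 1:].reshape(B, (k - 1) * hw, c)  # non-canonical tokens
        # NVU: non-canonical tokens query the canonical view features.
        fj = fj + self.nvu_attn(fj, f1, f1)[0]
        fj = fj + self.nvu_ffn(fj)
        tokens = torch.cat([f1, fj], dim=1)            # (B, k*hw, c)
        # GCR: joint self-attention over the tokens of all views.
        tokens = tokens + self.gcr_attn(tokens, tokens, tokens)[0]
        tokens = tokens + self.gcr_ffn(tokens)
        return tokens.view(B, k, hw, c)
```

In LEAP, $n_e = 2$ such blocks are stacked, and the same query/key/value pattern is reused by the 2D-3D information mapping module described next, with the neural volume tokens acting as queries.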
**2D-3D Information Mapping.** LEAP introduces a 3D latent neural volume $V \in \mathbb{R}^{H \times W \times D \times c}$ to encode the geometry and texture priors, where $H, W, D$ are the spatial resolution of the volume. It is defined in the local camera coordinate frame of the canonical view. The neural volume is shared across different scenes and gets updated by mapping the 2D image information to the 3D domain. To perform the 2D-3D information mapping, we use $n_m$ Transformer decoder blocks (Vaswani et al., 2017), each consisting of a cross-attention layer and a self-attention layer. In the cross-attention layer, we use the 3D latent volume $V$ as the query and $F$ as the key/value. The updated 3D neural volume is defined as $\tilde{V} = \text{FFN}(\text{CrossAttention}(V, F))$. Intuitively, for each 3D point in the neural volume, we compute its feature similarity with all 2D image features and use the similarities to form a weighted average of the 2D image features. Subsequently, the self-attention layer refines the 3D volume features, capturing long-range geometry correlations. With multiple 2D-3D information mapping blocks, LEAP learns to update the latent volume, at a fixed resolution, in a coarse-to-fine manner. We denote the updated neural volume as $\tilde{V} \in \mathbb{R}^{H \times W \times D \times c}$.

**Neural Rendering.** With the updated neural volume $\tilde{V}$, LEAP predicts a volume-based neural radiance field (Yu et al., 2022; Jiang et al., 2022). The radiance field is denoted as $R := (R_\sigma, R_f)$, where $R_\sigma$ and $R_f$ are the density and features of the radiance field, with $R_\sigma \in \mathbb{R}^{H' \times W' \times D'}$ and $R_f \in \mathbb{R}^{H' \times W' \times D' \times C}$ for spatial resolutions $H', W', D'$. Both $R_\sigma$ and $R_f$ are predicted from $\tilde{V}$ using 3D convolution layers. We read out the rendered image $\hat{I}$ and object mask $\hat{\sigma}$ using volumetric rendering techniques (Mildenhall et al., 2020). In detail, we first render a feature map and then predict the rendered image using 2D convolutions. Formally, $(\hat{I}, \hat{\sigma}) = \Pi(R, \Phi)$, where $\Pi$ denotes the volumetric rendering process and $\Phi$ is the target camera pose.

### 4.2 Training and Inference of LEAP

**Training.** LEAP is trained with a photometric loss between the rendering results and the inputs, without any 3D supervision. We first define the loss $L_I$ applied to the RGB images:

$$L_I = \sum_i L_{mse}(I_i, \hat{I}_i) + \lambda_p L_p(I_i, \hat{I}_i),$$

where $L_{mse}$ is the MSE loss, $I_i$ and $\hat{I}_i$ are the $i$-th original and rendered input images, $\lambda_p$ is a hyper-parameter for balancing the losses, and $L_p$ is the perceptual loss (Johnson et al., 2016). We then define the loss $L_M$ applied to the density masks as

$$L_M = \sum_i L_{mse}(\sigma_i, \hat{\sigma}_i),$$

where $\sigma_i$ and $\hat{\sigma}_i$ are the original and rendered density masks. The final loss is defined as

$$L = L_I + \lambda_m L_M,$$

where $\lambda_m$ is a weight-balancing hyperparameter. We use only $L_I$ if masks are not available. We use the ground-truth camera poses of training scenes to render the predicted inputs.
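As a minimal sketch of this objective, assuming `render_fn` wraps the volumetric renderer $\Pi$ and `perceptual` is any callable approximating $L_p$ (both names are ours, not the paper's API):

```python
import torch.nn.functional as F

def leap_training_loss(render_fn, perceptual, radiance_field, views,
                       lambda_p=0.1, lambda_m=5.0):
    """Computes L = L_I + lambda_m * L_M over the k input views.

    views: dicts with ground-truth 'image', 'pose', and optional 'mask'.
    Ground-truth training poses are used only to render the inputs here.
    """
    loss_img, loss_mask = 0.0, 0.0
    for v in views:
        pred_img, pred_mask = render_fn(radiance_field, v["pose"])
        loss_img = loss_img + F.mse_loss(pred_img, v["image"]) \
                   + lambda_p * perceptual(pred_img, v["image"])
        if v.get("mask") is not None:  # L_M is skipped when masks are absent
            loss_mask = loss_mask + F.mse_loss(pred_mask, v["mask"])
    return loss_img + lambda_m * loss_mask
```

The default weights $\lambda_p = 0.1$ and $\lambda_m = 5.0$ follow the implementation details reported in Section 5.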
**Inference and Evaluation.** During inference, LEAP predicts the radiance field without reliance on any poses. To evaluate novel view synthesis quality, we use the testing camera poses to render the radiance field under specific viewpoints; these target poses are expressed relative to the canonical view.

## 5 EXPERIMENT

We present our evaluation results on diverse, large-scale datasets, including both object-centric and scene-level data, and compare with prior arts.

**Implementation Details.** We consider the number of views to be $k = 5$, with an image resolution of 224. We set $\lambda_p = 0.1$ and $\lambda_m = 5.0$. We set the peak learning rate to $2\times10^{-5}$ (for the backbone) and $2\times10^{-4}$ (for the other components) with a warmup of 500 iterations, using the AdamW optimizer (Loshchilov & Hutter, 2017). We train the model for 150k iterations with a linear learning rate schedule and a batch size of 32. LEAP has $n_e = 2$ multi-view encoder blocks and $n_m = 4$ 2D-3D mapping blocks. The resolutions of the 3D neural volume and the volume-based radiance field are $16^3$ and $64^3$, respectively. We sample 64 points on each ray for rendering.

**Datasets.** We train LEAP on each of the following datasets and test its capability to model 3D objects/scenes on each of them, as they have different properties. We note that these datasets are captured by wide-baseline cameras, with randomly sampled or fixed camera poses that are far from each other.

- **OmniObject3D** (Wu et al., 2023) contains daily objects from 217 categories. We use a subset with 4800 instances for training and 498 instances for testing. OmniObject3D contains objects with complicated and realistic textures.
- **Kubric-ShapeNet** (Jiang et al., 2022) is a synthetic dataset generated using Kubric (Greff et al., 2022). Its training set has 1000 instances for each of 13 ShapeNet (Chang et al., 2015) categories, resulting in 13000 training samples. Its test set is composed of two parts: i) 1300 object instances from training categories, and ii) 1000 object instances from 10 novel object categories. The two subsets test the reconstruction quality and generalization ability of models, respectively. This dataset contains objects with complicated geometry but simple textures.
- **Objaverse** (Deitke et al., 2022) is one of the largest object-centric datasets. We use subsets of $200k$ and $2k$ instances for training and testing, to validate LEAP on large-scale data.
- **DTU** (Jensen et al., 2014) is a real scene-level dataset. DTU is small-scale, containing only 88 training scenes, which tests the ability of LEAP to fit small-scale data.

**Metrics.** Following previous works, we use standard novel view synthesis metrics, including PSNR (in dB), SSIM (Wang et al., 2004), and LPIPS (Zhang et al., 2018).

**Baselines.** We compare LEAP with the following baselines. We train each baseline model (except Zero123) on each dataset using the same setting as LEAP for a fair comparison, and we use official or officially verified implementations of all baselines.

- **PixelNeRF** (Yu et al., 2021) is a generalizable NeRF variant that uses camera poses to correlate 3D points and 2D pixels. We experiment with both ground-truth poses and predicted poses (with ground-truth translations) from RelPose (Zhang et al., 2022a), a state-of-the-art pose estimator.
- **FORGE** (Jiang et al., 2022) is a generalizable NeRF variant with test-time optimization, which jointly predicts camera poses and the neural radiance field, leveraging their synergy to improve both.
We experiment with FORGE using both ground-truth and its predicted poses.
- **SPARF** (Truong et al., 2022) is a scene-specific NeRF variant that jointly optimizes the camera poses and the radiance field. It requires reasonable pose initialization and depends on dense visual correspondences predicted by off-the-shelf methods.
- **SRT** (Sajjadi et al., 2022b) uses only a 2D representation to perform novel view synthesis. It is trained and tested on unposed image sets.
- **Zero123** (Liu et al., 2023) is a novel view synthesis method using diffusion models. We note that Zero123 takes a single image as input, which differs from LEAP and the other baselines. We test Zero123 to compare LEAP with large-scale 2D models.

### 5.1 Comparisons with State-of-the-Art

**Object-centric Results.** The results are shown in Table 1. On all four test sets, LEAP outperforms all prior pose-free works and all pose-based works with estimated poses. The results demonstrate the success of LEAP in modeling objects with different geometry and texture properties. In detail, LEAP improves over the next-best baseline (FORGE) by about 3 dB PSNR and, relatively, about 50% LPIPS on all datasets. Furthermore, without any test-time refinement, LEAP runs significantly faster than FORGE (0.3 sec vs. 15 min). Besides, LEAP demonstrates strong generalization capability: the model trained on the Kubric-ShapeNet dataset of only 13 categories works on novel ShapeNet categories with nearly no gap. Interestingly, when compared with prior pose-based methods using ground-truth poses, LEAP exhibits comparable or even better performance.

Figure 5: Comparison with prior arts. The performance of PixelNeRF degenerates dramatically with the state-of-the-art pose estimator. FORGE benefits from its joint optimization of shape and pose, but high-frequency details are lost. SRT can only recover noisy results. Zero123 can synthesize high-quality images, but the content is not consistent with the inputs. In contrast, LEAP reliably recovers the details, and the novel views match the ground-truth target view well. We also include zoomed-in results on the right for a clearer comparison.

Table 1: Evaluation on four object-centric test sets. We include the inference time of each method. ✗ means the method is pose-free. For experiments without perfect poses, we highlight the best and second-best results. For experiments with perfect poses, we also highlight the best GT-pose result if it is better than ours. Each dataset cell reports PSNR / SSIM / LPIPS.

| Model | Pose | Inf. Time | OmniObject3D | Kubric-ShapeNet-seen | Kubric-ShapeNet-novel | Objaverse |
|---|---|---|---|---|---|---|
| PixelNeRF | GT | 2 min | 26.97 / 0.888 / 0.123 | 29.25 / 0.893 / 0.127 | 29.37 / 0.906 / 0.122 | 26.21 / 0.871 / 0.133 |
| PixelNeRF | Pred. | 2 min | 18.87 / 0.810 / 0.199 | 21.36 / 0.836 / 0.188 | 21.22 / 0.851 / 0.174 | 20.97 / 0.819 / 0.191 |
| FORGE | GT | 0.05 sec | 28.93 / 0.913 / 0.087 | 31.32 / 0.938 / 0.053 | 31.17 / 0.946 / 0.058 | 27.76 / 0.896 / 0.100 |
| FORGE | Pred. | 15 min | 26.56 / 0.889 / 0.108 | 26.61 / 0.896 / 0.106 | 25.57 / 0.898 / 0.107 | 23.67 / 0.856 / 0.226 |
| SRT | ✗ | 0.4 sec | 20.22 / 0.786 / 0.303 | 22.62 / 0.802 / 0.267 | 22.46 / 0.793 / 0.284 | 20.41 / 0.798 / 0.312 |
| Zero123 | ✗ | 27 sec | 16.77 / 0.812 / 0.147 | 14.42 / 0.803 / 0.174 | 15.51 / 0.837 / 0.152 | 19.59 / 0.862 / 0.110 |
| LEAP | ✗ | 0.3 sec | 29.10 / 0.919 / 0.057 | 29.86 / 0.929 / 0.067 | 28.22 / 0.924 / 0.070 | 26.77 / 0.887 / 0.103 |
This result verifies our proposition that camera poses may not be necessary for 3D modeling, at least in the sparse-view setting. We present a visualization of our results in Fig. 4 and a comparison with prior works in Fig. 5.

**Scene-level Results.** The results are shown in Table 2. Since the DTU dataset is too small to train a usable pose estimator, we follow SPARF and use different levels of noisy poses. Our method outperforms PixelNeRF with noisy poses and achieves results comparable to SPARF. We note that SPARF is a scene-specific NeRF variant that takes a much longer time to optimize the radiance field and requires additional inputs, i.e., accurate dense visual correspondences between input views. We include a qualitative comparison in Fig. 6. Besides, we observe a compelling phenomenon: a LEAP model pre-trained on large-scale object-centric datasets largely improves its performance in the scene-level evaluation. The reason is that, as a pose-free method, LEAP must learn its geometric inductive bias from data and thus requires more training data than pose-based works. Training from scratch on the small-scale DTU dataset, which contains only 88 training scenes, leads to unsatisfying performance. On the other hand, the effectiveness of pre-training demonstrates the capability of LEAP to learn general-purpose 3D knowledge, which is generalizable and can be transferred to novel domains.

Figure 6: **Comparison with prior arts on the DTU dataset.** PixelNeRF collapses under noisy poses. SPARF recovers high-frequency details well, but it degenerates when the correspondences are inaccurate and exhibits strong artifacts (shown in red boxes). LEAP reliably recovers the geometry but lacks texture details. The result implies that LEAP, as a pose-free method, requires larger training datasets.

Table 2: **Evaluation on the DTU dataset.** LEAP performs on par with SPARF, which requires slow per-scene optimization and additional dense image correspondence inputs. Numbers with * are after SPARF optimization.

| Method | Generalizable | Image-only | Pose Noise | Rot Err. | Trans Err. | PSNR↑ | SSIM↑ | LPIPS↓ | Inference Time |
|---|---|---|---|---|---|---|---|---|---|
| PixelNeRF | ✓ | ✓ | GT | – | – | 19.60 | 0.720 | 0.295 | – |
| | | | σ=0.05 | 5.03 | 0.17 | 14.42 | 0.486 | 0.463 | – |
| | | | σ=0.15 | 14.31 | 0.42 | 10.78 | 0.432 | 0.538 | – |
| SPARF | ✗ | ✗ | GT | – | – | 19.79 | 0.749 | 0.275 | – |
| | | | σ=0.05 | 1.31* | 0.04* | 18.57 | 0.682 | 0.336 | – |
| | | | σ=0.15 | 1.93* | 0.06* | 18.03 | 0.668 | 0.361 | – |
| LEAP | ✓ | ✓ | – | – | – | 15.37 | 0.535 | 0.478 | – |
| LEAP-pretrain | ✓ | ✓ | – | – | – | 18.07 | 0.671 | – | 0.3 sec |

### 5.2 Ablation Study and Insights

We present an ablation study on each block of LEAP to study its impact.

**Coordinate Frame.** We study the importance of the local camera coordinate frame by using the world coordinate frame instead. We use the category-level coordinate frame as the world coordinate frame, where objects have aligned rotations and are zero-centered. As shown in Table 3 (a), the model demonstrates better performance on seen object categories (31.23 vs. 29.10 PSNR) but generalizes worse to novel categories. We conjecture the reason is that the rotation/translation-aligned world coordinate frame makes the 2D-3D information mapping easier for training categories.
However, it also limits the performance on novel categories, as their category-level coordinate frames are not learned by LEAP. This result matches our intuition for defining the neural volume in the local camera coordinate frame, which enables LEAP to generalize to any objects/scenes.

**Multi-view Encoder.** We explore the impact of using the multi-view encoder to make LEAP aware of the choice of the canonical view. We test the following alternatives: i) LEAP without the multi-view encoder; ii) LEAP with only the global consensus reasoning (GCR) layers; iii) LEAP with only the non-canonical view update (NVU) layers. As shown in Table 3 (b)-(d), without the multi-view encoder, we observe a significant performance drop. The reason is that inconsistent features across views hamper the 2D-3D information mapping. Similarly, with only the GCR layers, LEAP struggles to determine which view is the canonical view. With only the NVU layers, it achieves slightly worse performance than the full model. These experiments show the effectiveness of the multi-view encoder in making the model aware of the choice of canonical view.

**The 2D-3D Information Mapping Layers.** As shown in Table 3 (e), using two mapping layers (the default is four) slightly degrades performance, which confirms the benefit of the deeper default.

**Interpreting LEAP.** We perform visualizations to understand what knowledge LEAP learns to handle the absence of camera poses. As shown in Fig. 7, LEAP adaptively assigns weights to reasonable 2D regions to perform 2D-2D reasoning and 2D-3D information mapping. The neural volume is updated in a coarse-to-fine manner during the process. Moreover, we test how the learned knowledge relates to explicit pose-based operations. As shown in Fig. 8, we input images of a small dot and find that LEAP lifts the 2D pixel of the dot into 3D space as a line segment. The location of the line segment projected into another view corresponds to its epipolar line. This phenomenon reveals that LEAP lifts a 2D point as its reprojection ray and leverages multi-view information to resolve the depth ambiguity of the ray.

Figure 7: Visualization of the LEAP working mechanism. (Left) We show the 2D-2D attention weights of the multi-view encoder. Each query pixel (in red) in the canonical view assigns larger weights to the corresponding regions in the non-canonical views. The attention of query points in the background diffuses. (Middle) We visualize the learned neural volume by applying PCA and slicing along the three axes. As our 3D modeling happens in the local coordinate frame, which is not axis-aligned, the learned embeddings show isotropic properties. The neural volume is refined in a coarse-to-fine manner, where the object boundary becomes more compact after more mapping layers. (Right) We show the 3D-2D attention maps. As shown in the top two rows, neighboring on-surface voxels have similar attention patterns on a specific 2D object region. The attention of an out-of-object voxel diffuses over the background 2D region. See more details in the supplementary.

Table 3: Ablation study on the Kubric dataset. We ablate a) using the category-level world canonical space rather than the camera space; b)-d) the multi-view encoder designs for making the model aware of the canonical view choice; and e) using fewer (2 of 4) mapping layers. See visualizations in the supplementary.
| Variant (Kubric-ShapeNet-novel) | PSNR | SSIM | LPIPS |
|---|---|---|---|
| LEAP (full) | 28.22 | 0.924 | 0.072 |
| a) use world frame | 24.62 | 0.865 | 0.116 |
| b) no multi-view enc. | 21.68 | 0.710 | 0.431 |
| c) only GCR layer | 22.98 | 0.770 | 0.359 |
| d) only NVU layer | 27.62 | 0.907 | 0.085 |
| e) two mapping layers | 27.99 | 0.910 | 0.080 |

Figure 8: Interpreting LEAP. Top: We input images of a small dot (in orange boxes), and the visualization of the reconstructed neural volume shows consistency with the epipolar lines of the small dot in target views. This implies that LEAP maps a 2D point to its 3D reprojection ray segment even though there are no reprojection operations, and it leverages multi-view information to resolve the depth ambiguity along the ray. Bottom: Performance with different numbers of inputs on OmniObject3D. Note that we only train the model with 5 images and test it directly.

Besides, we show LEAP's performance with different numbers of input images. The results show that LEAP reliably reconstructs the object with two to five images, and its performance drops only slightly with fewer inputs. However, we observe a big drop when we decrease the number of inputs from two images to one. These results validate the effectiveness of LEAP in using multi-view information to perform 3D modeling.

## 6 CONCLUSION

We propose LEAP, a pose-free approach for 3D modeling from a set of unposed sparse-view images. By appropriately setting the 3D coordinate frame and aggregating 2D image features, LEAP demonstrates satisfying novel view synthesis quality. In our experiments, spanning from object-centric to scene-level data, from synthetic to real images, and from small-scale to large-scale data, LEAP consistently demonstrates better performance than prior pose-based works that use estimated or noisy poses. LEAP also achieves results comparable to the versions of prior works that use ground-truth poses. Besides, LEAP showcases strong generalization capability, fast inference speed, and interpretable learned knowledge.

**Limitations.** LEAP adopts a neural volume representation in which the 3D voxel grids span the 3D space uniformly and the physical size of the volume is bounded. Designing a better 3D representation, e.g., by incorporating techniques from prior works that handle unbounded scenes, will further broaden the applicability of LEAP.

REFERENCES

Wenjing Bian, Zirui Wang, Kejie Li, Jiawang Bian, and Victor Adrian Prisacariu. Nope-nerf: Optimising neural radiance field with no pose prior. *2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 4160–4169, 2022.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 9630–9640, 2021.

Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, L. Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository. *ArXiv*, abs/1512.03012, 2015.

Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. *ICCV*, pp. 14104–14113, 2021a.

Kefan Chen, Noah Snavely, and Ameesh Makadia.
Wide-baseline relative camera pose estimation with directional learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3258–3268, 2021b.

Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. *ArXiv*, abs/2212.08051, 2022.

Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas J. Guibas. Vector neurons: A general framework for so(3)-equivariant networks. *ICCV*, pp. 12180–12189, 2021.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. *ArXiv*, abs/2010.11929, 2020.

Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Michael Hickman, Krista Reymann, Thomas Barlow McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. *2022 International Conference on Robotics and Automation (ICRA)*, pp. 2553–2560, 2022.

Andreas Geiger, Julius Ziegler, and Christoph Stiller. Stereoscan: Dense 3d reconstruction in real-time. *2011 IEEE Intelligent Vehicles Symposium (IV)*, pp. 963–968, 2011.

Michael Goesele, Brian Curless, and Steven M Seitz. Multi-view stereo revisited. In *CVPR*, volume 2, pp. 2402–2409. IEEE, 2006.

Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J. Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, Thomas Kipf, Abhijit Kundu, Dmitry Lagun, Issam Hadj Laradji, Hsueh-Ti Liu, Henning Meyer, Yishu Miao, Derek Nowrouzezahrai, Cengiz Oztireli, Etienne Pot, Noha Radwan, Daniel Rebain, Sara Sabour, Mehdi S. M. Sajjadi, Matan Sela, Vincent Sitzmann, Austin Stone, Deqing Sun, Suhani Vora, Ziyu Wang, Tianhao Wu, Kwang Moo Yi, Fangcheng Zhong, and Andrea Tagliasacchi. Kubric: A scalable dataset generator. *2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 3739–3751, 2022.

Will Hutchcroft, Yuguang Li, Ivaylo Boyadzhiev, Zhiqiang Wan, Haiyan Wang, and Sing Bing Kang. Covispose: Co-visibility pose transformer for wide-baseline relative pose estimation in 360° indoor panoramas. In *European Conference on Computer Vision*, pp. 615–633. Springer, 2022.

Rasmus Ramsbøl Jensen, A. Dahl, George Vogiatzis, Engin Tola, and Henrik Aanæs. Large scale multi-view stereopsis evaluation. *CVPR*, pp. 406–413, 2014.

Hanwen Jiang, Zhenyu Jiang, Kristen Grauman, and Yuke Zhu. Few-view object reconstruction with unknown categories and camera poses. *ArXiv*, abs/2212.04492, 2022.
PFdjJiZjPj
In the “All-generated” setting, each problem is associated with multiple solutions. Do you give all of these simultaneously to the model and ask it to generate tests? Or one at a time, with one set of tests per solution?
THE PROGRAM TESTING ABILITY OF LARGE LANGUAGE MODELS FOR CODE

Anonymous authors
Paper under double-blind review

ABSTRACT

Recent development of large language models (LLMs) for code, like CodeX and CodeT5+, demonstrates tremendous promise in achieving code intelligence. Their ability to synthesize code that completes a program for performing a pre-defined task has been intensively tested and verified on benchmark datasets including HumanEval and MBPP. Yet, evaluating these LLMs from more perspectives than just program synthesis is also anticipated, considering their broad scope of applications in software engineering. In this paper, we explore the ability of LLMs for testing programs/code. By performing thorough analyses of recent LLMs for code in program testing, we show a series of intriguing properties of these models and demonstrate how the program testing ability of LLMs can be improved. Following recent work which utilizes generated test cases to enhance program synthesis, we further leverage our findings to improve the quality of synthesized programs and show +11.77% and +4.22% higher code pass rates on HumanEval+ compared with the GPT-3.5-turbo baseline and the recent state-of-the-art, respectively.

## 1 INTRODUCTION

The community has witnessed a surge in the development of large language models (LLMs), which have achieved incredible ability in understanding and generating not only text but also code. LLMs for code (CodeX (Chen et al., 2021), StarCoder (Li et al., 2023b), CodeT5+ (Wang et al., 2023b), etc.) have been widely adopted in a variety of applications to achieve code intelligence. However, current evaluation of these LLMs mostly focuses on program completion/synthesis, even though the models can also be utilized in other applications. As the field continues to advance, evaluating these models from more perspectives is anticipated, which could facilitate a deeper understanding of LLMs.

The ability to automatically generate proper test suites is of great desire to software engineering, yet challenging. Whether learning-based or not, current test generation efforts (e.g., fuzzing) primarily focus on creating diverse test inputs to identify as many faults in the code as possible by maximizing coverage, e.g., line coverage and branch coverage (Fioraldi et al., 2020; Tufano et al., 2022; Dinella et al., 2022; Lemieux et al., 2023; Xia et al., 2023). Although such test inputs try to verify the (non-)existence of crashes and hangs in the tested code, they lack the ability to determine whether the code adheres to the aim of the function, which is represented by input-output relationships. Automatic test case generation for this aim not only requires a high coverage of the tested code but also necessitates a correct understanding of the "true" desired input-output relationships of the tested code, leaving it a challenging open problem.

Being capable of synthesizing correct code implementations given docstrings, LLMs for code seem capable of understanding the desired input-output relationship of a function described in natural language. This capability inspires applying these LLMs to generating test cases automatically (Chen et al., 2021). However, the ability of these models for program testing has not been systematically evaluated.
In this paper, we systematically compare the program testing ability of recent LLMs for code from two perspectives, focusing on both the correctness and the diversity of the generated test cases, considering that 1) program testing is of great interest in software engineering and software security, as mentioned, and 2) automatically generated test cases can further be adopted to improve program synthesis performance (Chen et al., 2023). Our analyses focus on algorithmic coding, based on the 164 problems from the popular HumanEval+ benchmark (Liu et al., 2023a) and 427 sanitized problems from MBPP (Austin et al., 2021). It is worth noting that a model may encounter various scenarios when generating test cases. It may generate test cases when provided with only a natural language description of the desired behavior of the program, or when given an "optimal" oracle implementation. In more complex situations, it may even need to test its own imperfect generated code or code generated by other models. We consider four test-case generation settings (i.e., "self-generated", which uses each LLM to test code synthesized by the LLM itself; "all-generated", which lets all LLMs test the same code synthesized by a group of four LLMs; "oracle", which tests an oracle implementation; and "placeholder", shown in Figure 1) and test a collection of 11 competitive LLMs for code. We conduct a variety of experiments, which deliver a series of intriguing takeaway messages.

As previously mentioned, several very recent papers (Shi et al., 2022; Li et al., 2023a; Chen et al., 2023) have shown that appropriate usage of generated test cases can improve the quality of program synthesis. Yet, the quality of the generated test cases largely impacts the performance of such methods. Due to the lack of systematic evaluation of the testing ability of LLMs for code, it is unclear how to craft test cases that could be more helpful to program synthesis. The studies in this paper also shed light on this. We will show that substantially improved program synthesis performance can be gained by utilizing the takeaway messages from our studies. Specifically, we achieve a +11.77% higher code pass rate on HumanEval+ in comparison with the GPT-3.5-turbo baseline. Compared with a very recent state-of-the-art called CodeT, our solution gains a +4.22% higher code pass rate.

## 2 EVALUATION METRICS

To make the evaluation more reliable and comprehensive, it is crucial to first design suitable metrics, like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and the pass rate (Chen et al., 2021) for evaluating machine translation, text summarization, and program synthesis, respectively. In this section, we specify two main evaluation metrics for the program testing ability of LLMs, from the perspectives of correctness and diversity.

**Pass rate.** In software engineering, we expect test cases to represent some desired "ground-truth" functionality of the tested program/code. In practice, such "ground-truth" functionality can be described in the header comments of a function (i.e., the docstrings of the function) and tested using the oracle implementation, as in HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021). The oracle program/code should be able to pass a generated test case if that test case is correct. Therefore, we leverage the pass rate as a measure of the correctness of the generated test cases.
For a fair comparison, we instruct each model in the prompt to generate three test cases, and, when a model generates more than three test cases, we select the first three for evaluation. Assume that there are $M$ programming problems in total in an experimental dataset and that, for each problem, there are $N$ program/code implementations for which test cases are to be generated. Each model has only one chance to generate these test cases for each program/code. Then, we calculate the pass rate as:

$$P = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \frac{p_{ij}}{n_{ij}}, \quad (1)$$

where $n_{ij}$ is the number of test cases in $Q_{ij}$, which includes no more than three test cases generated at once by the evaluated LLM for the $j$-th program/code implementation of the $i$-th problem, i.e., $Q_{ij} = \{(x_{ijk}, y_{ijk})\}_k$, and $p_{ij}$ is the number of test cases (in $Q_{ij}$) that do not fail the oracle.

The pass rate defined in Eq. (1) measures the correctness of the generated test cases. However, as can be seen in Figure 1, a model can generate duplicate test cases that are less helpful, even though they are correct. To avoid such an evaluation bias, we further advocate deduplicating the set of test cases considered correct, which leads to a deduplicated pass rate defined as $P' = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} p'_{ij}/n'_{ij}$, where the prime symbol ($'$) denotes the numbers of unique test cases.

**Coverage rate.** In addition to the above pass rates, we further consider the coverage rate as a more fine-grained metric for evaluating the diversity of the generated test cases. By definition, the coverage rate computes the degree to which the code is executed, given a test case. Since, for each program/code, we keep no more than three test cases at once, we calculate what percentage of the control structures is covered given these test cases. Similar to Eq. (1), we evaluate the performance of testing all programs/code over all $M \times N$ generations, i.e., we calculate

$$C = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} c_{ij}, \quad (2)$$

where $c_{ij}$ is the branch coverage rate achieved by the (up to three) test cases in $Q_{ij}$. We apply the `pytest` library (https://pytest.org) to evaluate the branch coverage of the three test cases for each piece of code and average the results over all programs/code and all problems. Apparently, $C \leq 1$, and a higher $C$ shows better testing ability of an LLM, since we expect all parts of the programs/code to be executed in order to find out all potential bugs.
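For illustration, a minimal sketch of how $P$ and the deduplicated $P'$ could be computed from per-generation records; the record layout is our assumption, not the paper's released evaluation harness:

```python
def pass_rates(results):
    """Computes P (Eq. 1) and the deduplicated P'.

    results[i][j] is a list of (test_case_str, passed_oracle) pairs for
    the j-th program of the i-th problem, i.e., the set Q_ij.
    """
    terms, dedup_terms = [], []
    for problem in results:
        for tests in problem:
            tests = tests[:3]                    # keep at most three tests
            if not tests:
                continue
            terms.append(sum(ok for _, ok in tests) / len(tests))
            unique = dict(tests)                 # drop duplicate test strings
            dedup_terms.append(sum(unique.values()) / len(unique))
    return sum(terms) / len(terms), sum(dedup_terms) / len(dedup_terms)
```

The coverage rate $C$ can be computed analogously by executing each (up to three-case) test suite under `pytest` with branch coverage enabled, e.g., via coverage.py's branch option, and averaging the reported percentages.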
## 3 LARGE LANGUAGE MODELS FOR CODE

In this section, we outline the evaluated models. We adopt some "small" models whose numbers of parameters are around 1B (more specifically, from 770M to 1.3B) and some larger models that achieve state-of-the-art performance in program synthesis. For the small models, we use InCoder (1.3B) (Fried et al., 2023), CodeGen2 (1B) (Nijkamp et al., 2023a), CodeT5+ (770M) (Wang et al., 2023b), and SantaCoder (1.1B) (Allal et al., 2023). InCoder is a unified generative model that can perform program/code synthesis as well as code editing, combining the strengths of causal language modeling and masked language modeling. The CodeGen2 model was trained on a deduplicated subset of the Stack v1.1 dataset (Kocetkov et al., 2023), and its training mixes objectives for causal language modeling and span corruption. CodeT5+ is an encoder-decoder model trained on several pre-training tasks, including span denoising and two variants of causal language modeling. SantaCoder was trained on the Python, Java, and JavaScript code in the Stack dataset. The (program) pass rates (Chen et al., 2021) of programs generated by these models are compared in Table 1. When evaluating the program pass rate, we let each model generate 200 code implementations per problem, and we set the temperature to 0.2, 0.6, and 0.8 for calculating pass@1, pass@10, and pass@100, respectively.

As for the larger models that achieve state-of-the-art program synthesis performance, we use CodeGen2 (16B) (Nijkamp et al., 2023a), CodeGen-Multi (16B) (Nijkamp et al., 2023b), CodeGen-Mono (16B) (Nijkamp et al., 2023b), StarCoder (15B) (Li et al., 2023b), WizardCoder (15B) (Luo et al., 2023), CodeGeeX2 (6B) (Zheng et al., 2023), and GPT-3.5-turbo. CodeGen-Multi and CodeGen-Mono are two large models from the first version of CodeGen. CodeGen-Multi was first trained on the Pile dataset (Gao et al., 2020) and then on a subset of the publicly available BigQuery dataset, which contains code written in C, C++, Go, Java, JavaScript, and Python. Based on the 16B CodeGen-Multi model, CodeGen-Mono (16B) was obtained by further tuning on a set of Python code collected from GitHub. Given a base model pre-trained on 1 trillion tokens from the Stack dataset, the 15B StarCoder model was obtained by training it on 35B tokens of Python code. WizardCoder further empowers StarCoder with instruction tuning, following a similar instruction-evolution strategy as in WizardLM (Xu et al., 2023). CodeGeeX2, the second generation of a multilingual generative model for code, is implemented on the ChatGLM2 architecture and trained on more code data. GPT-3.5-turbo is a very capable commercial LLM developed by OpenAI, which we accessed in August 2023. For these large LLMs, we tested pass@1 of all models except GPT-3.5-turbo (whose result is taken directly from Liu et al. (2023a)). Sorting their pass@1 from high to low, they are ranked as: GPT-3.5-turbo (61.7%), WizardCoder (46.23%, 15B), CodeGeeX2 (29.97%, 6B), StarCoder (27.9%, 15B), CodeGen-Mono (26.15%, 16B), CodeGen2 (19.33%, 16B), and CodeGen-Multi (15.35%, 16B). The ranks on the MBPP dataset are similar.
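The pass@k numbers reported here follow the unbiased estimator of Chen et al. (2021); for reference, a minimal implementation (the formula is theirs, with $n = 200$ generations per problem as described above):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples drawn from n generations (c of them
    correct) passes. Averaging over problems gives the reported numbers."""
    if n - c < k:
        return 1.0
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))
```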
## 4 CODE TO BE TESTED

For evaluating the testing ability of LLMs, we need an oracle that expresses the ground-truth functionality of the tested code. Fortunately, current datasets for evaluating program synthesis performance often provide such oracles (see HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021)). In our experiments, we utilize an amended version of HumanEval called HumanEval+ (Liu et al., 2023a), together with MBPP (the sanitized version). These datasets were established to evaluate the basic Python programming performance of LLMs, and they contain 164 and 427 problems, respectively.

### 4.1 IMPERFECT CODE IMPLEMENTATIONS

To simulate real-world scenarios where the tested code is often buggy, we first adopt synthesized programs/code as the code to be tested, considering that the synthesis of even state-of-the-art LLMs is still imperfect. We evaluate the performance of each LLM in testing code that was generated by itself (denoted as "Self-generated") and code in a set consisting of program completion results from several different LLMs (denoted as "All-generated"). That said, in the self-generated setting, the compared LLMs are given different code implementations when generating test cases for each programming problem, whereas, in the all-generated setting, the same program/code implementations are given to the different LLMs for comparison. In practice, we apply InCoder (1.3B), CodeGen2 (1B), CodeT5+ (770M), and SantaCoder (1.1B) to construct the all-generated program/code set, while, in the self-generated setting, each LLM first synthesizes code to complete a program fulfilling the requirement of each programming problem and then generates test cases with its synthesized programs/code in the prompt. The temperature for all LLMs is uniformly set to 0.2 for synthesizing the programs/code in both settings. We obtain 100 program/code completions for each problem and prompt each LLM to generate 3 test cases for every program/code implementation in the self-generated setting, and we sample 100 implementations from the synthesis results of InCoder (1.3B), CodeGen2 (1B), CodeT5+ (770M), and SantaCoder (1.1B) to form the all-generated code set, i.e., we have $N = 100$ in these settings. We follow the same way of generating code as introduced in the papers of these LLMs. For models without instruction tuning, like InCoder and CodeT5+, we synthesize programs/code using the default prompt given by each programming problem in the test dataset, while, for models with instruction tuning, e.g., WizardCoder, we use the prompt recommended in their papers.

### 4.2 Optimal Code Implementations (Oracle)

As a reference, we also report the performance of generating accurate and diverse test cases when the tested code is perfectly correct, achieved by adopting the oracle as the code to be tested (denoted as "Oracle"). Since Liu et al. (2023a) have reported that some oracle code in the HumanEval dataset can be incorrect, we adopt the amended oracle set from HumanEval+ in this setting. We further use the revised oracle code implementations, instead of the original ones, when evaluating the pass rate (i.e., $P'$) of the generated test cases. Considering that the public datasets often provide only one oracle implementation per problem, and to keep the uncertainty of evaluation results consistent, we copy the oracle implementation $100\times$ and prompt each LLM to generate 3 test cases for each of these copies. This can be regarded as letting $N = 100$, just like the settings in Section 4.1.

| Model | Size | Pass@1 | Pass@10 | Pass@100 |
|-------------|--------|--------|---------|----------|
| InCoder | 1.3B | 6.95% | 14.06% | 23.76% |
| CodeGen2 | 1B | 9.19% | 17.50% | 25.90% |
| CodeT5+ | 770M | 12.95% | 28.02% | 37.56% |
| SantaCoder | 1.1B | 15.21% | 29.42% | 43.80% |

Table 1: Program synthesis performance of the small LLMs (whose number of parameters is around 1 billion) evaluated on HumanEval+/MBPP (sanitized).

### 4.3 No Implementation (Placeholder)

In certain scenarios, we require test cases before the function/program has been fully implemented; hence we also evaluate a setting where the main body of the tested function/program is merely a placeholder, as depicted in Figure 1(b). This scenario occurs when the main code of a function/program has not yet been implemented, or when the test engineer does not want to introduce implementation bias into the LLM's test case generation. We denote this setting as "Placeholder" in this paper. We also let $N = 100$, as in the oracle setting.
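For concreteness, a placeholder input in this setting looks roughly like the following stub, using the rotation-check example from Figure 1 (the docstring wording is abridged here):

```python
def cycpattern_check(a: str, b: str) -> bool:
    """Return True if the second word or any of its rotations is a
    substring of the first word."""
    pass  # placeholder: no implementation is given to the model
```

The test-generation prompt described in the next section is appended after such a stub, so the model must rely on the docstring alone.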
## 5 Test Case Generation

In this section, we introduce how test cases are generated when the implementation of a function/program is given as described in Section 4. In this paper, a desired test case is a pair of an input and its expected output for the function/program defined in the context. As an example, Figure 1 demonstrates some test cases for the programming problem of checking whether two words satisfy a specific rotation pattern. To generate test cases, we use the LLMs introduced in Section 3. We write extra prompts to instruct the LLMs to generate three test cases for each given piece of code, which includes a docstring describing the purpose of the function, as depicted in Figure 1. Our instruction commands the LLMs (1) to "check the correctness of this function with three test cases" and (2) to start writing test code with an "assert" statement and the tested function, which specifies the format of the test cases as input-output pairs that can be parsed. For instance, given the example in Figure 1, the extra prompt should be "# Check the correctness of this function with three test cases \n assert cycpattern_check". We then concatenate the extra prompt with the code and feed the concatenation into each LLM, extracting test cases from the model output. The LLM will try to complete the given input by generating one or more "assert" statement(s), and we split the generation result into sub-strings, with "assert" as the separator. Each sub-string is then considered a test statement, and we take only the first three statements if more than three exist, as introduced in Section 2. Such a split can be considered an effective post-processing operation that largely improves the quality of the generated test code, considering that nonsense code pieces may appear in the output of the LLMs. When using HumanEval+ and MBPP, we remove any test cases that appear in the docstrings of a function, to avoid giving the model broad hints (Chen et al., 2023). The temperature for generating test cases is kept at 0.2. Once obtained, the generated test cases are compiled and evaluated for their correctness and diversity to report the pass rate $P'$ and the coverage rate $C$. When computing these metrics, we create a temporary folder for each problem and each set of generated completions.
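A minimal sketch of this prompt construction and post-processing; the helper names are ours, and the sampling call of each LLM is abstracted away:

```python
PROMPT_SUFFIX = ("# Check the correctness of this function with three test cases\n"
                 "assert ")

def build_prompt(code: str, func_name: str) -> str:
    # Append the instruction and the start of the first assert statement.
    return code + "\n" + PROMPT_SUFFIX + func_name

def extract_tests(completion: str, func_name: str, max_tests: int = 3):
    # The prompt already opens with "assert <func_name>", so prepend it,
    # then split on "assert" and keep the first three statements.
    full = "assert " + func_name + completion
    tests = ["assert " + s.strip() for s in full.split("assert") if s.strip()]
    return tests[:max_tests]
```

Each extracted statement is then executed against the oracle in its sandboxed temporary folder to determine whether it counts toward $p_{ij}$.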
| Model | Size | Oracle | Self-generated | All-generated | Placeholder |
|----------------|------|------------|----------------|---------------|-------------|
| CodeGen-Multi | 16B | 43.88% (67.91%) | 41.85% (69.30%) | 40.38% (66.97%) | 39.74% (68.28%) |
| CodeGen2 | 16B | 46.34% (73.07%) | 45.44% (73.17%) | 42.00% (72.45%) | 42.69% (72.86%) |
| CodeGen-Mono | 16B | 49.03% (74.82%) | 45.73% (73.74%) | 43.91% (73.66%) | 44.92% (73.63%) |
| StarCoder | 15B | 55.07% (76.02%) | 52.52% (72.45%) | 48.20% (72.30%) | 50.58% (74.52%) |
| CodeGeeX2 | 6B | 57.03% (74.42%) | 53.16% (73.55%) | 49.28% (70.32%) | 51.78% (73.08%) |
| WizardCoder | 15B | 53.89% (77.87%) | 55.47% (76.07%) | 48.02% (75.27%) | 49.89% (75.12%) |
| GPT-3.5-turbo | - | 71.03% (77.85%) | 72.45% (77.24%) | 59.24% (74.99%) | 66.28% (74.03%) |

Table 3: The pass rates (and coverage rates) of the test cases generated on HumanEval+ in different settings for LLMs with substantially more than 1 billion parameters.

Figure 2: The correlation between code pass rate and test pass rate in the “Oracle” setting.

Figure 3: How the correctness of the test cases changes with their order when being generated.

Such a result is consistent with intuitions from previous work, which rejects code that cannot pass the generated tests in order to improve the quality of program synthesis.

- **Second**, the correctness of the generated test cases is positively correlated with the LLM’s ability to generate code (see Figure 2, where each red cross represents the performance of a model), which means an LLM showing state-of-the-art program synthesis performance is possibly also the state-of-the-art LLM for program testing. As shown in Tables 2 and 3, GPT-3.5-turbo, which synthesizes programs/code with the highest correctness, provides test cases with the highest pass rate (71.03%) on HumanEval+. The more accurately an LLM synthesizes programs/code on a dataset, the more powerful its testing ability will probably be on the same dataset. There also exist a few exceptions; e.g., SantaCoder (1.1B) outperforms CodeT5+ (770M) and CodeGen2 (1B) in generating code, but it shows inferior performance in program testing on HumanEval+. By carefully examining the test cases yielded by SantaCoder on HumanEval+, we found that it tends to generate more complex and longer test cases than CodeT5+ for several problems, which are often more desirable in program testing. This is also why the SantaCoder test cases show higher coverage rates in Table 2. To be concrete, in Problem 131 in HumanEval+, where the program is required to return the product of all digits at odd positions in a positive integer \( n \) (which is the input), the test inputs provided by CodeT5+ tend to be small for this problem, e.g., \( n = 2 \), while the SantaCoder test cases tend to have more digits (e.g., \( n = 12358 \)), which is helpful in digging out hidden bugs. Yet, generating longer and more complex test cases is more challenging, and their correctness can be lower.

- **Third**, as can be seen in Tables 3 and 4, generating test cases using large LLMs with their self-generated code (in the prompts) often leads to a higher level of correctness, compared with the placeholder results. This observation is in fact unsurprising, considering that generating code first and test cases afterwards resembles chain-of-thought prompting (Wei et al., 2022) (if the placeholder setting is regarded as plain prompting), which is beneficial to reasoning.
Moreover, the self-generated performance of an LLM sometimes even outperforms its testing performance with an oracle, and we ascribe this to: 1) randomness in the style of the oracles, which are few in number, and/or 2) a smaller distribution shift between the self-generated code in the prompt and the training code, for some powerful LLMs.

| Model | Size | Oracle | Self-generated | All-generated | Placeholder |
|---------------|--------|--------------|----------------|---------------|-------------|
| InCoder | 1.3B | 21.56% (46.81%) | 17.98% (46.11%) | 19.53% (46.45%) | 22.58% (46.72%) |
| CodeGen2 | 1B | 25.61% (54.26%) | 21.85% (53.09%) | 23.15% (50.43%) | 22.81% (52.11%) |
| CodeT5+ | 770M | 29.02% (56.86%) | 24.44% (52.31%) | 24.84% (53.20%) | 25.59% (55.81%) |
| SantaCoder | 1.1B | 32.37% (55.68%) | 26.40% (52.38%) | 26.20% (52.83%) | 26.53% (53.86%) |
| CodeGen-Multi | 16B | 41.32% (60.63%) | 35.96% (59.03%) | 34.17% (58.09%) | 34.84% (58.92%) |
| CodeGen2 | 16B | 45.30% (62.15%) | 38.67% (60.16%) | 36.77% (58.59%) | 37.27% (59.16%) |
| CodeGen-Mono | 16B | 50.24% (64.39%) | 43.94% (62.94%) | 39.55% (61.99%) | 42.41% (62.31%) |
| StarCoder | 15B | 54.84% (65.10%) | 46.77% (63.60%) | 42.80% (61.95%) | 45.35% (62.66%) |
| CodeGeeX2 | 6B | 52.45% (64.64%) | 44.52% (63.72%) | 41.72% (60.48%) | 43.86% (63.51%) |
| WizardCoder | 15B | 57.85% (66.68%) | 46.56% (64.86%) | 41.62% (60.72%) | 47.45% (64.54%) |
| GPT-3.5-turbo | - | 74.30% (66.19%) | 66.14% (65.30%) | 49.56% (62.95%) | 63.34% (64.72%) |

Table 4: The pass rates (and coverage rates) of the test cases generated on MBPP.

- **Fourth**, with only a few exceptions, test cases obtained using the oracle code exhibit slightly higher code coverage, while the coverage rate achieved in the other settings (i.e., the self-generated, all-generated, and placeholder settings) is often slightly lower.

The above four takeaway messages can all be inferred from Tables 2, 3, and 4. In addition to these results, we conduct more experiments to reach the following takeaway messages.

- **Fifth**, by analyzing the relationship between the quality of the code in prompts and the correctness of the generated tests, we found that a correct code implementation in the prompt often leads to higher-quality test code than an incorrect one. We conducted an experiment where we first select programming problems in HumanEval+ for which the code pass rate of an LLM is neither 0% nor 100%. Then we separate the self-generated programs/code of the model into two groups, one containing only programs/code considered correct and the other containing only incorrect programs/code. In Table 5, we compare the performance of using these two sorts of code in the prompt for generating test cases with the same LLM. The quality of test cases obtained with correct programs/code is clearly higher. We further evaluate the overall testing performance of LLMs with only correct self-generated programs/code, if any exist, in their prompts. Unlike in Table 5, where we exclude problems that are 100% or 0% solved, here we take all given problems, except that, for every problem, we eliminate all incorrect self-generated programs/code whenever there exists at least one correct implementation synthesized by the evaluated LLM.
By doing so, we observe substantially improved program testing ability on HumanEval+ (i.e., 74.95% for GPT-3.5-turbo, 56.87% for WizardCoder, 54.33% for CodeGeeX2, and 53.24% for StarCoder), compared with the original self-generated results in Table 3. The same holds on MBPP.

- **Sixth**, through an additional experiment, we further compare the quality of test cases collected from different positions in the generation results. For every set of three generated test cases, we analyze the relationship between their correctness and the order in which they are generated. The results are illustrated in Figure 3. As can be seen in the figure, the first generated test case often shows the best correctness, and those generated later are more often incorrect. This may be due to the fact that the model tends to first generate content with a high level of confidence (which is also more likely to be correct).

## 7 Improving Program Synthesis Using the Generated Test Cases

High-quality test cases are not only desired in program analyses but are also helpful to program synthesis. Previous methods have successfully used generated test cases to improve the performance of LLMs in synthesizing programs/code. For instance, Li et al. (2023a) designed a special prompt which involves the test cases as a preliminary, if they are available, for generating programs/code. One step further, Chen et al. (2023) proposed CodeT, which leverages the LLM to obtain test cases first and tests all synthesized programs/code with these test cases by performing a dual execution agreement; it picks the code in the largest consensus set (i.e., the consensus set with the most code implementations and test cases) as output, obtaining state-of-the-art program synthesis performance. We encourage interested readers to read the original paper.

| Model | Size | w/ correct code | w/ incorrect code | #Problems |
|---------------|------|-----------------|-------------------|-----------|
| InCoder | 1.3B | 28.55% | 27.39% | 27 |
| CodeGen2 | 1B | 27.25% | 25.74% | 11 |
| CodeT5+ | 770M | 40.19% | 36.78% | 27 |
| SantaCoder | 1.1B | 37.45% | 34.08% | 24 |
| CodeGen-Multi | 16B | 55.49% | 50.06% | 32 |
| CodeGen2 | 16B | 43.56% | 39.31% | 29 |
| CodeGen-Mono | 16B | 45.18% | 42.86% | 56 |
| StarCoder | 15B | 58.16% | 57.08% | 68 |
| CodeGeeX2 | 6B | 52.84% | 48.63% | 51 |
| WizardCoder | 15B | 48.02% | 45.12% | 54 |
| GPT-3.5-turbo | - | 75.39% | 68.52% | 126 |

Table 5: With the correct (self-generated) code, the LLMs show a stronger ability to generate correct test cases on HumanEval+ (evaluated only on problems that can neither be 0% solved nor 100% solved) than in the case where incorrect self-generated code is given in the prompts. Since most LLMs cannot generate any correct code for many hard problems, while they often generate incorrect code even for easy problems, the number of tested problems in this experiment increases with the power of the tested LLM, as shown in the rightmost column.

In the previous section, we obtained results about many intriguing properties of the program testing performance of LLMs for code. In this section, we invite the readers to consider whether it is possible to utilize these results to improve program synthesis performance, considering that test cases (hand-crafted and given, or automatically generated in particular) are widely and successfully used in program synthesis.
We shall demonstrate that, by utilizing the takeaway messages in Section 6, the program synthesis performance of previous methods can be improved significantly. Taking CodeT as an example of the previous state-of-the-art: the method uses a placeholder to generate test cases and treats all the test cases as equally correct a priori. However, as discussed in our third takeaway message, using self-generated code helps to achieve a more powerful ability to generate correct test cases. Moreover, if multiple test cases are produced in a single run of generation by an LLM, the correctness of the test cases decreases with their generation order, as shown in our sixth takeaway message. Hence, to obtain superior program synthesis performance, we introduce two simple modifications to CodeT: 1) we employ the “self-generated” setting instead of the “placeholder” setting for generating test cases, i.e., we place synthesized programs in the prompts when generating test cases for each program; 2) we assign different weights to the generated test cases based on their order in each generation result, i.e., we use the rank of each generated test case to re-weight its contribution to the consensus set it belongs to.

We test the effectiveness of using 1) the prompt which involves self-generated (SG) code, as the test cases generated in this setting show higher correctness than in the baseline placeholder setting, and 2) the rank-based re-weighted (RW) test cases, in improving program synthesis performance on HumanEval+. Following Chen et al. (2023), we use a temperature of 0.8 to generate code and self-generated test cases. After obtaining the consensus sets, we re-weight each test case by $p^{i-1}$, with $i$ being its order in the model output, and we let $p = 0.8$. That is, instead of directly using the raw counts of test cases, we use the sum of $p^{i-1}$; the final score of a consensus set is then the sum of a) $\sum p^{i-1}$ over its test cases and b) the number of code implementations in the consensus set, and the code implementations in the consensus set with the highest score are considered the best solutions (a sketch of this scoring rule is given below). Table 6 shows the results. We compare CodeT with CodeT+SG, CodeT+RW, and CodeT+SG+RW. For CodeT, we follow the official implementation and generate $100 \times 5$ test cases for each problem. For a fair comparison, we ensure that our solutions with SG and/or RW generate the same numbers of program implementations and test cases as CodeT does. Hence, for each problem in HumanEval+, we synthesize a program together with its 5 test cases 100 times when SG and/or RW are incorporated, i.e., we have $i \in \{1, 2, 3, 4, 5\}$. It can be seen from the table that both SG and RW improve the program synthesis performance considerably on most LLMs, except for InCoder, CodeGen2-1B, CodeT5+, and SantaCoder, for which the test cases generated in the placeholder setting show similar or even higher correctness than in the self-generated setting, and SG fails with them. For some LLMs, SG is more powerful, while, on other models including SantaCoder and StarCoder, RW is more powerful. By combining SG and RW, the program synthesis performance of the most powerful LLMs in Table 6 improves compared with using only one of the two. On GPT-3.5-turbo and WizardCoder, which are the best two models at synthesizing programs on HumanEval+, we achieve +4.22% and +3.04% performance gains over CodeT, respectively, with SG & RW.
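The rank-based re-weighting can be sketched as follows. This is our minimal reading of the scoring rule stated above (implementation count plus $\sum p^{i-1}$ over the test cases in a consensus set); function and variable names are ours:

```python
def consensus_score(num_implementations: int, test_case_orders: list[int], p: float = 0.8) -> float:
    """Score of a consensus set under rank-based re-weighting (RW):
    each test case contributes p**(i-1), where i is its generation order
    (i in {1, ..., 5} here), instead of counting 1 as in vanilla CodeT.
    The implementation count is kept un-weighted, as in the text."""
    return num_implementations + sum(p ** (i - 1) for i in test_case_orders)

# Two consensus sets with identical raw counts (3 programs, 4 test cases each):
early = consensus_score(3, [1, 1, 2, 2])  # supported by early-generated tests
late  = consensus_score(3, [4, 5, 5, 5])  # supported by late-generated tests -> discounted
print(early > late)  # True: the set backed by earlier (more reliable) tests wins
```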
| Model | Size | Baseline | CodeT | + SG | + RW | + SG & RW |
|----------------|-------|----------|-------|------|------|-----------|
| InCoder | 1.3B | 6.99% | 9.85% | 9.45% | 10.26% | 9.98% |
| CodeGen2 | 1B | 9.19% | 15.15% | 14.89% | 15.67% | 15.35% |
| CodeT5+ | 770M | 12.95% | 16.57% | 16.28% | 17.19% | 16.98% |
| SantaCoder | 1.1B | 15.21% | 18.43% | 18.17% | 18.75% | 18.63% |
| CodeGen-Multi | 16B | 15.35% | 24.50% | 25.71% | 25.72% | 26.95% |
| CodeGen2 | 16B | 19.33% | 27.56% | 28.51% | 28.43% | 29.63% |
| CodeGen-Mono | 16B | 26.15% | 35.63% | 36.69% | 36.63% | 37.95% |
| StarCoder | 15B | 27.90% | 40.46% | 41.21% | 42.12% | 43.15% |
| CodeGeeX2 | 6B | 29.97% | 44.16% | 45.23% | 44.92% | 46.32% |
| WizardCoder | 15B | 46.23% | 58.41% | 60.13% | 59.60% | 61.45% |
| GPT-3.5-turbo | - | 61.70% | 69.25% | 72.45% | 70.75% | 73.47% |

Table 6: Program synthesis performance (Pass@1) of LLMs can be significantly improved by using our takeaway messages in Section 6. The experiment is on HumanEval+.

## 8 Related Work

**Test case generation via program analysis.** Generating reasonable test cases for analyzing programs is a long-standing problem in the software engineering community. Various program analysis techniques, e.g., fuzzing, have been developed to achieve this goal. AFL++ (Fioraldi et al., 2020) is the most popular tool, which incorporates many techniques in this category. A major weakness of these techniques is the poor understandability of the generated test cases.

**Test case generation via deep learning.** The invention of the transformer architecture and self-supervised pre-training has brought a breakthrough to programming language processing and program testing (Fioraldi et al., 2020; Tufano et al., 2022; Dinella et al., 2022). After being trained in a self-supervised manner on a large and diverse code corpus, LLMs have demonstrated remarkable abilities in understanding and synthesizing programs. We have also witnessed the adaptation of pre-trained LLMs (e.g., ChatGPT) to fuzzing (Xia et al., 2023) very recently. Similarly, Lemieux et al. (2023) utilized Codex to provide example test cases for under-covered functions, which prevents coverage improvements from stalling. Nevertheless, in-depth analyses and intensive comparisons of different LLMs in program testing are still lacking, considering that powerful LLMs emerge continuously. For instance, the recent WizardCoder (Luo et al., 2023) exhibits an obvious program synthesis superiority over other contemporary open-source LLMs. In our study, we focus on the analysis and comparison of LLMs in writing test code and generating test cases.

**Evaluation of large language models.** Recently, large language models (LLMs) have incited substantial interest in both academia and industry. To evaluate the capabilities of large language models, a variety of efforts have been devoted from the perspectives of natural/programming language processing accuracy, robustness, ethics, biases, trustworthiness, etc. For instance, PromptBench (Zhu et al., 2023) demonstrates that current LLMs are sensitive to adversarial prompts, and careful prompt engineering is necessary to achieve decent performance with them. As another example, DecodingTrust (Wang et al., 2023a) offers a multifaceted exploration of the trustworthiness of GPT models, especially GPT-3.5 and GPT-4; the evaluation expands beyond typical trustworthiness concerns to include several new critical aspects. AgentBench (Liu et al., 2023b) evaluates LLMs as agents on challenging tasks in interactive environments.
Their experimental results show that, while top commercial LLMs present a strong ability to act as agents in complex environments, there is a significant disparity in performance between them and their open-source competitors.

## 9 Conclusion

In this paper, we have performed thorough analyses of recent LLMs (mostly LLMs for code) in testing programs/code. Through comprehensive experiments with 11 LLMs on programming benchmark datasets including HumanEval+ and MBPP (the sanitized version), we have uncovered a range of intriguing characteristics of these LLMs for program/code testing. We have illustrated how the program testing capabilities of these LLMs can be enhanced by comparing intensive empirical results across four different settings. Based on our findings, we also improve the performance of state-of-the-art LLMs in synthesizing programs/code using test cases of higher quality. As a preliminary research work, we believe our paper can provide new research insights and spark new ideas in program/code synthesis, test-case generation, and LLM understanding, and we look forward to further exploration in this direction.

## References

Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. Santacoder: don’t reach for the stars! arXiv preprint arXiv:2301.03988, 2023.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Ziqi Lin, Jian-Guang Lou, and Weizhu Chen. Codet: Code generation with generated tests. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ktrw68Cmu9c.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

Elizabeth Dinella, Gabriel Ryan, Todd Mytkowicz, and Shuvendu K Lahiri. Toga: A neural method for test oracle generation. In Proceedings of the 44th International Conference on Software Engineering, pp. 2130–2141, 2022.

Andrea Fioraldi, Dominik Maier, Heiko Eißfeldt, and Marc Heuse. {AFL++}: Combining incremental steps of fuzzing research. In 14th USENIX Workshop on Offensive Technologies (WOOT 20), 2020.

Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, and Mike Lewis. Incoder: A generative model for code infilling and synthesis. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=hQwb-1BM6EL.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.

Denis Kocetkov, Raymond Li, Loubna Ben allal, Jia LI, Chenghao Mou, Yacine Jernite, Margaret Mitchell, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro Von Werra, and Harm de Vries. The stack: 3 TB of permissively licensed source code. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=pxpbTduEpD.
Caroline Lemieux, Jeevana Priya Inala, Shuvendu K Lahiri, and Siddhartha Sen. Codamosa: Escaping coverage plateaus in test generation with pre-trained large language models. In International conference on software engineering (ICSE), 2023. Jia Li, Yunfei Zhao, Yongmin Li, Ge Li, and Zhi Jin. Towards enhancing in-context learning for code generation. arXiv preprint arXiv:2303.17780, 2023a. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023b. Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74–81, 2004. Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210, 2023a. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents, 2023b.
t5LXyWbs5p
The experimental setting of unimodal vs. multimodal is very confusing. The authors state that ExpEMG is a dataset of single-channel EMG recordings; how is the multimodal approach applied in this case? This same problem also applies to other datasets.
Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals Anonymous authors Paper under double-blind review Abstract Leveraging multimodal information from biosignals is vital for building a comprehensive representation of people’s physical and mental states. However, multimodal biosignals often exhibit substantial distributional shifts between pretraining and inference datasets, stemming from changes in task specification or variations in modality compositions. To achieve effective pretraining in the presence of potential distributional shifts, we propose a frequency-aware masked autoencoder (bioFAME) that learns to parameterize the representation of biosignals in the frequency space. bioFAME incorporates a frequency-aware transformer, which leverages a fixed-size Fourier-based operator for global token mixing, independent of the length and sampling rate of inputs. To maintain the frequency components within each input channel, we further employ a frequency-maintain pretraining strategy that performs masked autoencoding in the latent space. The resulting architecture effectively utilizes multimodal information during pretraining, and can be seamlessly adapted to diverse tasks and modalities at test time, regardless of input size and order. We evaluated our approach on a diverse set of transfer experiments on unimodal time series, achieving an average of \( \uparrow 5.5\% \) improvement in classification accuracy over the previous state-of-the-art. Furthermore, we demonstrated that our architecture is robust in modality mismatch scenarios, including unpredicted modality dropout or substitution, proving its practical utility in real-world applications. Code will be available soon. 1 Introduction Physical and mental states of an individual are manifested by a variety of physiological responses or biosignals. For example, electroencephalography (EEG) can decode human emotions by monitoring their brain activities (Liu et al., 2010), electromyography (EMG) can detect facial expressions such as smiling by recording muscle contractions (Canento et al., 2011), and a combination of these modalities can help decode a person’s affective states. The effective use of multimodal information can not only build better and more resilient representations of the human body and mental states (Bachmann et al., 2022; Smith & Gasser, 2005; De Sa & Ballard, 1998), but also help researchers understand how each biosignal contributes to each physiological state and how their information overlaps (Bird et al., 2020). Recently, in language-vision domains, large-scale multimodal pretraining has demonstrated remarkable generalization and zero-shot capabilities (Huang et al., 2021; Bachmann et al., 2022; Radford et al., 2021), outperforming small-scale models that are trained on specific downstream tasks (Kirkpatrick et al., 2017; Radford et al., 2019). In light of these advancements, we investigate whether similar pretraining can be applied to the biosignal domain. However, performing multimodal pretraining on biosignals is particularly challenging due to the significant distributional shifts between the pretraining and downstream datasets. This challenge can be categorized in two ways: (i) For biosignals, there are substantial distributional shifts within each modality, wherein data varies across tasks, subjects, and even recording sessions within subjects due to slight changes in sensor placement and recording conditions (Cheng et al., 2020). 
Additionally, (ii) multimodal biosignals might encounter strong distributional shifts across modalities, meaning that the connection between different modalities can be altered. These crossmodal domain shifts can arise from unimodal shifts, as a change in a single modality can disrupt its relationship to a different modality. Moreover, multimodal biosignals often face modality mismatch scenarios, where modalities may be unavailable at test time, and thus are removed or replaced with new modalities that provide relevant information about the detected physiological response (McKinzie et al., 2023). Addressing these distributional shifts is crucial to effectively leverage multimodal pretraining on biosignals.

Figure 1: Motivation of our approach. (A) In multimodal biosignal systems, there exist substantial distributional shifts between the pretraining and inference datasets. (B) The distributional shifts often cause shifts of the representation in time-space, which can affect the model’s generalization ability within modality and across modalities. (C) In the meantime, the representation in frequency-space typically contains similar frequency components within a modality, leading to more stable combinations in multimodal scenarios.

In this work, we propose to incorporate frequency information in time series to mitigate distributional shifts and enable multimodal pretraining on biosignals. Frequency-domain analysis is advantageous for biosignals not only due to its invariance to common causes of distributional shifts such as temporal shifts and scaling, but also because the extracted frequency components are characteristic representations of physiological activities (see Figure 1). While previous works have shown the effectiveness of using frequency-domain information to address generalization issues, they have either relied on encoders from both the time and frequency domains (Zhang et al., 2022b), or on complicated sampling and combining modules (Zhou et al., 2022b) to utilize the frequency information. Here, we propose a simple, yet effective, multi-head frequency filter layer with a fixed-size Fourier-based operator that directly parameterizes the representation of biosignals in the frequency space. The proposed layer can be easily incorporated into the transformer, giving a frequency-aware (FA) encoder that is both expressive and computationally efficient.

Furthermore, to extend the frequency awareness to a multimodal pretraining setting, we couple the FA encoder with a frequency-maintain (FM) pretraining strategy. To prevent the statistical consistency within the data from being disrupted by conventional masked autoencoding strategies (Ryali et al., 2023), our method performs masked autoencoding in the latent space to maintain the frequency awareness during reconstruction. Coupled with a channel-independent design (Nie et al., 2022; Liu et al., 2022b), our model presents a pure reconstruction-based multimodal pretraining architecture that can effectively combine and utilize information across modalities, with robustness towards distributional shifts within and across modalities.

To systematically evaluate our proposed approach, we first examine the transferability of our architecture on a publicly available one-to-many transfer learning benchmark (Zhang et al., 2022b).
Our architecture achieves state-of-the-art performance, giving an average of ↑5.5% improvement in classification accuracy over the previous state-of-the-art, with consistency across datasets of different input lengths, sampling rates, and diverse sources of modalities. Next, we demonstrate that our architecture is robust to a variety of modality mismatch scenarios commonly encountered in real-world cases, showing that it can effectively integrate and leverage information across multiple modalities during pretraining. We summarize our main contributions as follows:

• We propose bioFAME, a frequency-aware masked autoencoder for biosignals comprising: (i) a frequency-aware (FA) transformer encoder that can learn biosignals in a robust and computationally efficient way; (ii) a frequency-maintain (FM) pretraining strategy that retains the frequency awareness during reconstruction.

• By constructing a fixed-size Fourier-based operator in the architecture, bioFAME can be pretrained on multimodal biosignals and adapted to new modalities of varying lengths and frequency components, exhibiting resilience to distributional shifts even when the modalities differ between training and testing.

• bioFAME achieves consistently robust performance on a diverse set of transfer experiments, outperforming the previous state-of-the-art by an average improvement of \(+5.5\%\), demonstrating how utilizing multimodal information at the pretraining stage can benefit the generalization ability of the model.

2 BACKGROUND

Multimodal Pretraining Methods  Pretraining large-scale models that can effectively use multimodal information has gathered a lot of research attention due to its strong generalization capability (Huang et al., 2021; Liang et al., 2022; Reed et al., 2022; Chai & Wang, 2022). Multimodal pretraining methods can be roughly categorized into (i) those that train separate encoders for each modality, as seen with contrastive methods that design novel objectives to align or fuse representations from different modalities (Li et al., 2021a; Radford et al., 2021; Jia et al., 2021), and (ii) those that design one unified architecture for many modalities, with completely shared encoders across modalities or a few shared decoding layers (Reed et al., 2022; Akbari et al., 2021; Wang et al., 2022). The benefit of using one unified architecture is that we can build a joint representation space that connects different modalities, as well as share weights to reduce additional computational overhead (Bachmann et al., 2022; Lu et al., 2022). Inspired by the latter, our work aims to train a single unified architecture for multimodal biosignals with an effective frequency-awareness design.

Pretraining on Biosignals and Time Series  Biosignals are multivariate time series that capture various physiological processes within the human body (Giannakakis et al., 2019; Cheng et al., 2020). While biosignals are crucial for diverse applications such as human-computer interaction, acquiring an ample amount of labeled biosignals is a labor-intensive process that requires the involvement of domain experts (Ericsson et al., 2022). To alleviate the need for labeled data, researchers have proposed various self-supervised methods to pretrain models with large-scale unlabeled datasets.
This includes (i) contrastive methods that build latent representations based on similarity across samples under different augmentations (Cheng et al., 2020; Kiyasseh et al., 2021; Zhang et al., 2022b), (ii) reconstruction-based methods that perform either feature reconstruction or data reconstruction (Kostas et al., 2021; Chien et al., 2022), or (iii) a hybrid of both (Dong et al., 2023). While previous works demonstrate that pretraining on large-scale data can benefit downstream task performance, most of the existing works only explored unimodal pretraining without investigating how to effectively utilize the multimodal information present at training time. Existing work even shows that pretraining on multimodal information could cause performance degradation due to the large variation across modalities (Zhang et al., 2022b). To the best of our knowledge, this is the first work that explores how to effectively perform multimodal pretraining on biosignals with robust performance towards distributional shifts within and across modalities.

3 MOTIVATION OF OUR APPROACH

Parameterizing representations in the frequency space is shown to be effective in many domains. Frequency-based approaches are particularly effective in solving partial differential equations and modeling long sequences (Li et al., 2020b; Gu et al., 2021; Li et al., 2022b; Zhou et al., 2022a), as they can effectively capture long-range dependencies. Frequency-aware approaches are also widely used in computer vision, as they can improve image fidelity and effectively mix tokens when used in the transformer architecture (Rao et al., 2021; Guibas et al., 2021; Xie et al., 2022; Liu et al., 2022a; Li et al., 2022a). Likewise, in physiological signal processing, frequency-based approaches are employed to effectively extract discriminative patterns within sensory signals (Yao et al., 2019; Li et al., 2021b). The robustness of frequency-based operations can be partially attributed to the connection between the Fourier transform and global circular convolution (Zhu et al., 2016; Li et al., 2020a).

Recently, many works suggest that the periodic oscillations and analogous patterns in the frequency space exhibit rich information for electrophysiological signals (Donoghue et al., 2020; Bird et al., 2020; Subha et al., 2010; Demanuele et al., 2007). Thus, several frequency-aware approaches have been proposed to study biosignals. For example, Zhang et al. (2022b) used the consistency between time and frequency spaces to guide the learning on biosignals, demonstrating improved transferability and generalizability on downstream tasks. Other works perform cross-domain reconstruction across the time and spectral domains (Zhang et al., 2022a; Yang & Hong, 2022).

Figure 2: Overview. (A) Previous approaches perform masking in the time domain, which causes shifts in the frequency components. Also, the encoders are unaware of the frequency information in time series. (B) To address these issues, we propose bioFAME, which (i) builds frequency awareness by directly learning frequency filters in the representation space, and (ii) performs masked autoencoding in the latent space to maintain frequency information during pretraining. (C) We implement bioFAME in the multimodal pretraining scheme, where the frequency-aware encoder (FA-Enc(·)) processes signals in a channel-independent manner and extracts representations with a multi-head filter layer with fixed-size Fourier operators.
The frequency-maintain pretraining strategy further performs masked autoencoding in the latent space with separate reconstruction to guide the effective mixing of multimodal information.

Contrary to prior studies, bioFAME emphasizes transferability and efficient adaptation to downstream tasks across many physiological modalities, by leveraging frequency-space information during pretraining on multimodal data to forge a universal representation of biosignals. We design a novel mechanism and architecture to build a fully transferable and computation-efficient approach for frequency-aware representation extraction, setting bioFAME apart from conventional methods that are constrained by frequency-space encoders or decoding components tailored to specific input sizes (Wu et al., 2022). These conventional methods often struggle with modality transfer due to varying frequency components, and introduce unnecessary computational burdens and overparameterization. Our approach, in contrast, ensures flexibility and efficiency, free from such limitations.

4 METHOD

Preliminaries: Discrete Fourier Transform (DFT) for Token Mixing  The DFT is widely used in traditional methods for processing biosignals and images (Pitas, 2000). For a time-space representation \( x \in \mathbb{R}^N \) with \( N \) elements \( x_n, n \in [0, N - 1] \), its corresponding frequency-space representation \( z \in \mathbb{C}^N \) with elements \( z_k \) is produced by the DFT (\( \mathcal{F}(x) = z \)), which can be inverted through the Inverse Discrete Fourier Transform (IDFT) (\( \mathcal{F}^{-1}(z) = x \)) as below:

\[ \text{DFT}: \; z_k = \sum_{n=0}^{N-1} x_n e^{-i(2\pi/N)kn}, \qquad \text{IDFT}: \; x_n = \frac{1}{N} \sum_{k=0}^{N-1} z_k e^{i(2\pi/N)kn}, \]

where \( i \) is the imaginary unit. The computational complexity of the DFT can be reduced from quadratic to \( O(N \log N) \) when leveraging the fast Fourier transform (FFT) algorithm (Brigham, 1988).

Consider a sequence \( X = [x_1, ..., x_N]^T \in \mathbb{R}^{N \times D} \) of \( N \) tokens of \( D \) dimensions; transformers aim to learn the interactions across tokens, typically through the self-attention operation. Recently, mixing tokens with frequency-based operations through the DFT and IDFT has been shown to be a computationally efficient alternative (Rao et al., 2021; Guibas et al., 2021), as it performs global information mixing. The token mixing process is theoretically grounded by Fourier Neural Operators (Li et al., 2020b), which are often implemented in a discrete form (denoted as \( K \)) as such:

\[ (K(X))(x_i) = \mathcal{F}^{-1}(R \cdot \mathcal{F}(X))(x_i), \quad \forall i \in [1, N] \]

Ideally, \( R \) should be the Fourier transform of a periodic function which admits a Fourier series expansion. For the sake of simplicity, it is often implemented as learnable weights of shape \( \mathbb{C}^{N \times D} \).

4.1 Frequency-aware Transformer with Multi-head Frequency Filters

In this work, we seek to answer two questions: (i) whether parameterizing biosignals in the frequency space provides better empirical performance, as frequency information is shown to be vital for many physiological activities; and (ii) how to design a frequency-aware architecture that is transferable and generalizable across different types of biosignals with varying input lengths and sampling rates. To address these two questions, we propose a multi-head frequency filter layer to build a frequency-aware transformer encoder FA-Enc(·).
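Before detailing the proposed layer, a minimal sketch of the standard Fourier operator above helps make the transferability issue concrete. Here \( R \) is a plain complex weight tensor; the use of the real FFT and the tensor shapes are our assumptions. Note how \( R \) is tied to the sequence length \( N \) — precisely the inflexibility that fixed-size filters remove:

```python
import torch

def fourier_token_mixing(X: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """Discrete Fourier operator: F^{-1}(R * F(X)).
    X: (N, D) real tokens; R: (N//2 + 1, D) complex weights for the real FFT.
    Because R's first dimension depends on N, this operator cannot be reused
    across inputs of different lengths or sampling rates."""
    Z = torch.fft.rfft(X, dim=0)                     # (N//2 + 1, D) complex spectrum
    Z = R * Z                                        # per-frequency, per-channel modulation
    return torch.fft.irfft(Z, n=X.shape[0], dim=0)   # back to (N, D) real tokens

N, D = 16, 64
X = torch.randn(N, D)
R = torch.randn(N // 2 + 1, D, dtype=torch.cfloat)
print(fourier_token_mixing(X, R).shape)              # torch.Size([16, 64])
```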
Multi-head Frequency Filter Layer  We propose to manipulate the frequency representation with multi-head frequency filters $K \in \mathbb{C}^{H \times D}$, where $H$ is the total number of heads. Given a sequence of tokens $X \in \mathbb{R}^{N \times D}$, we first perform the DFT along the sequence dimension to obtain its representation in the frequency space as $Z \in \mathbb{C}^{N \times D}$. To obtain the manipulated features in frequency space $\tilde{Z} \in \mathbb{C}^{N \times D}$, we first compute queries $Q = ZW$, where $W \in \mathbb{R}^{D \times H}$ is a learnable matrix that is used to combine processed information across different filters. The resulting queries are used to re-weight the kernels to obtain $\tilde{Z}$ through the operations below:

$$\tilde{Z} = Z \odot (QK) = Z \odot (ZWK)$$

where $\odot$ is the Hadamard product. We show in Appendix C that the operation is equivalent to a weighted summation of the modulated frequency representation matrices, where the weights are self-generated through the queries. We note that our proposed operation, different from Rao et al. (2021) and Guibas et al. (2021), is applicable to time series with dramatic changes in input lengths and sampling rates, as we use flexible fixed-size multi-head filters $K$ that enable the transferability of the model. Intuitively, the querying process is similar to hypernetworks (Ha et al., 2016), which generate weights based on the data itself to fully exploit the structure of the data. Having successfully incorporated fixed-size multi-head filters $K$ into the frequency space, we further explore building nonlinearity into the operation through an alternative max-pooling operation $\tilde{Z} = \text{MaxPool}(Z, K)$:

$$\tilde{Z}[i,j] = \max_k |Z[i,j]K[k,j]|$$

where the max-pooling is performed based on the absolute value of the complex features. The resulting modulated frequency representation $\tilde{Z}$ is later recovered in time space through $\tilde{X} = \mathcal{F}^{-1}(\tilde{Z})$ with the IDFT (see Figure 2(C)). We denote the whole process as Freq-L(·), which is computationally efficient, transferable across different input lengths and sampling rates, and can be easily implemented in a few lines of code (see the sketch at the end of this subsection).

Add Freq-L(·) into the Transformer  The transformer architecture has revolutionized many domains, including natural language processing (Devlin et al., 2018), computer vision (Dosovitskiy et al., 2020), and recently time series processing (Nie et al., 2022). Following Nie et al. (2022), we first patchify the biosignals by dividing them into chunks, compute representations for each patch, and then feed the resulting patches into a transformer. Specifically, for a signal $s \in \mathbb{R}^L$, where $L$ is the total length of the sequence, we divide it into a sequence $S = [s_1, ..., s_N]$, where each patch $s_i \in \mathbb{R}^P$ has a size of $P$. An initial MLP is used to compute representations $x_i = \text{MLP}(s_i) \in \mathbb{R}^D$, and the sequence is later stacked into $X_0 \in \mathbb{R}^{N \times D}$. We replace the multi-head self-attention with our proposed multi-head frequency filter layer Freq-L(·) to mix the information across the sequence of tokens, which gives the FA transformer encoder layer as below:

$$X_{\ell+1} = X_\ell + \text{Freq-L}(X_\ell) + \text{FF}(X_\ell + \text{Freq-L}(X_\ell)), \quad \ell \in \{0, \ldots, L - 1\}$$

where the representation is passed into the proposed Freq-L(·) layer and projection layers FF(·) with residual connections, as shown in Figure 2(C).
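As referenced above, a minimal PyTorch sketch of Freq-L(·) implementing $\tilde{Z} = Z \odot (ZWK)$. The initialization scale, the cast of $W$ to a complex dtype for the matrix product, and taking the real part after the IDFT (inputs are real-valued) are our assumptions; hyperparameters are illustrative, and the sketch presumes a recent PyTorch with complex-tensor autograd:

```python
import torch
import torch.nn as nn

class FreqL(nn.Module):
    """Sketch of the multi-head frequency filter layer Freq-L(·).
    K: (H, D) fixed-size complex filters, independent of sequence length N;
    W: (D, H) real query projection."""
    def __init__(self, dim: int = 64, heads: int = 8):
        super().__init__()
        self.K = nn.Parameter(torch.randn(heads, dim, dtype=torch.cfloat) * 0.02)
        self.W = nn.Parameter(torch.randn(dim, heads) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (N, D), any N
        z = torch.fft.fft(x, dim=0)                        # Z = F(X), (N, D) complex
        q = z @ self.W.to(z.dtype)                         # queries Q = ZW, (N, H)
        z_tilde = z * (q @ self.K)                         # Z ⊙ (QK), (N, D)
        return torch.fft.ifft(z_tilde, dim=0).real         # back to time space

layer = FreqL()
print(layer(torch.randn(100, 64)).shape)   # works for N = 100 ...
print(layer(torch.randn(500, 64)).shape)   # ... and N = 500 with the same weights
```

The last two calls illustrate the point of the fixed-size design: the same parameters apply to sequences of arbitrary length, unlike filters of shape $\mathbb{C}^{N \times D}$.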
4.2 Frequency-maintain Pretraining with Latent Masking and Channel Independence

Masked Autoencoding in the Latent Space  The masked autoencoder (MAE) is a self-supervised pretraining framework which masks out input patches and predicts the missing patches from the remaining ones. The architecture typically contains a transformer encoder that processes non-masked patches, followed by a decoder, usually a lightweight transformer, that reconstructs the original patches (He et al., 2022). To preserve the frequency information while still pretraining with a masked autoencoding strategy, we perform masked autoencoding in the latent space. Specifically, denoting our frequency-aware transformer encoder as FA-Enc(·), the full sequence of biosignal patches $S$ is processed by FA-Enc(·) to obtain $X_L = [x_1^L, x_2^L, ..., x_N^L]$. We sample a random set of patches based on a fixed masking ratio without replacement, and then process the resulting sequence with a lightweight transformer (second) encoder. We later pad the masked patches with mask tokens, and pass the resulting sequence into a lightweight transformer decoder to reconstruct the original signal, where the $i$-th reconstructed patch corresponds to $s_i$. Denoting the masked autoencoder as MAE(·), bioFAME aims to optimize the objective below:

$$\mathcal{L} = \frac{1}{|\Omega|} \sum_{i \in \Omega} l\big(s_i, \text{MAE}(\text{FA-Enc}(S))[i]\big)$$

where $i$ is the token index, $\Omega$ is the set of masked tokens, and $l$ is an error term, which is set to the mean squared error (MSE) in this work. We show in Section 5 that the performance is robust if we remove MAE(·) and only keep FA-Enc(·) at test time. We note that this is the first work to find that the masked autoencoding objective itself, without any contrastive terms, is effective on biosignals (Zhang et al., 2022b).

Channel and Modality Independence  Biosignals are multivariate time series that often face channel-wise and modality-wise mismatch at test time. To obtain robust transfer performance, we follow previous works in using a channel-independent design before the second encoder to model multimodal biosignals (Liu et al., 2022b; Nie et al., 2022). Given a multi-channel biosignal $[S_1, S_2, ..., S_C]$, where $C$ denotes the total number of channels, we perform channel-independent learning such that each $S_c$ is passed into FA-Enc(·) and MAE(·) as below (a schematic sketch is given at the end of this subsection):

$$\mathcal{L} = \frac{1}{|\Omega|} \sum_{i \in \Omega} l\big(s_i, \text{MAE}([\text{FA-Enc}(S_1), ..., \text{FA-Enc}(S_C)])[i]\big)$$

where $\Omega$ is the union of the masked tokens of all channels, independently determined based on a fixed masking ratio for each channel. The parameter weights of the frequency-aware transformer encoder FA-Enc(·) are shared across channels, creating representations that are fed into MAE(·), which combines information from the different pretraining modalities. By combining the channel-independence design with our multimodal masked autoencoding objective, our architecture can process input signals of any channel size and order, making it robust to multimodal distributional shifts when modalities are unavailable at test time.
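As referenced above, a schematic sketch of one pretraining step under the channel-independent objective. Here `fa_enc` and `mae_decode` are hypothetical callables standing in for FA-Enc(·) and MAE(·), and the Bernoulli mask approximates the fixed-ratio sampling without replacement described in the text:

```python
import torch

def pretraining_step(channels, fa_enc, mae_decode, mask_ratio=0.5):
    """channels: list of C tensors of raw patches, each of shape (N, P).
    The same fa_enc is applied to every channel (shared weights), so any
    number and order of channels is accepted at train or test time."""
    tokens = torch.cat([fa_enc(S_c) for S_c in channels], dim=0)   # (C*N, D) latents
    patches = torch.cat(channels, dim=0)                           # (C*N, P) targets
    masked = torch.rand(tokens.shape[0]) < mask_ratio              # Omega: masked indices
    recon = mae_decode(tokens, masked)    # encode visible tokens, pad masks, decode (C*N, P)
    return ((recon[masked] - patches[masked]) ** 2).mean()         # MSE over Omega only
```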
5 EXPERIMENTS

5.1 Transfer experiments on unimodal time series

Datasets  We first evaluate the model’s generalization ability by transferring it to a diverse set of unimodal time series downstream tasks, following Zhang et al. (2022b). The transfer experiments include a set of four downstream tasks: Epilepsy (Andrzejak et al., 2001) (EEG measurement of disordered brain activity, sampling rate 174Hz with length 178); SleepEOG (Kemp et al., 2000) (EOG measurement of each sleep stage, sampling rate 100Hz with length 3000); ExpEMG (Goldberger et al., 2000) (EMG measurement of muscular disorders, sampling rate 4000Hz with length 1500); FD-B (Lessmeier et al., 2016) (electromechanical measurement of motor disorder, sampling rate 64000Hz with length 5120). We performed data pre-processing following the same protocol and data split as in Zhang et al. (2022b); more details are in Appendix B.1. For model pretraining, we used the SleepEDF dataset (Kemp et al., 2000) as in Eldele et al. (2021); Zhang et al. (2022b), where the single-channel EEG (channel Fpz-Cz) is commonly used for unimodal pretraining. In this work, we also used an additional EEG channel (Pz-Oz) and an additional modality (EOG) from SleepEDF to perform multimodal pretraining with the same train/test split as in Eldele et al. (2021).

### I. Generalization with modality or task association.

| Models | Epilepsy (EEG) | SleepEOG |
|-----------------|----------------|-----------|
| | Accuracy | Precision | Recall | F1 | Accuracy | Precision | Recall | F1 |
| TS-SD | 80.18 | 76.47 | 89.52 | 77.67 | 48.90 | 28.59 | 25.43 | 23.68 |
| Mixing-up | 80.21 | 40.11 | 50.00 | 44.51 | - | - | - | - |
| TS2vec | 93.95 | 90.59 | 90.39 | 90.45 | 67.90 | 58.23 | 62.15 | 59.28 |
| CLOCS | 95.07 | 93.01 | 91.27 | 92.06 | 66.86 | 56.67 | 58.99 | 57.34 |
| TS-TCC | 92.53 | 94.51 | 81.81 | 86.33 | 69.65 | 61.56 | 61.49 | 61.16 |
| TF-C | 94.95 | 94.56 | 89.08 | 91.49 | 69.58 | 62.04 | 68.05 | 64.15 |
| PatchTST | 95.01 | 91.66 | 92.96 | 92.27 | 68.00 | 61.20 | 68.28 | 63.26 |
| bioFAME (scratch) | 90.41 | 84.64 | 86.29 | 85.33 | 68.29 | 60.03 | 66.10 | 61.81 |
| bioFAME (unimodal) | 95.51 | 94.02 | 91.57 | 92.72 | 70.03 | 63.37 | 68.00 | 65.05 |
| bioFAME (multimodal) | 95.71 | 93.57 | 92.82 | 93.18 | 71.55 | 64.80 | 68.70 | 66.62 |
| Δ(uni, multi) | ↑0.20 | ↓0.45 | ↑1.25 | ↑0.46 | ↑1.52 | ↑1.43 | ↑0.70 | ↑1.57 |

### II. Generalization without explicit association.

| Models | ExpEMG | FD-B (Electromechanics) |
|-----------------|--------|-------------------------|
| | Accuracy | Precision | Recall | F1 | Accuracy | Precision | Recall | F1 |
| TS-SD | 46.06 | 15.45 | 33.33 | 21.11 | 55.66 | 57.10 | 60.54 | 57.03 |
| Mixing-up | 30.24 | 10.99 | 25.83 | 15.41 | 67.89 | 71.46 | 76.13 | 72.73 |
| TS2vec | 78.54 | 80.40 | 67.85 | 67.66 | 47.90 | 43.39 | 48.42 | 43.89 |
| CLOCS | 69.85 | 53.06 | 53.54 | 51.39 | 49.27 | 48.24 | 58.73 | 47.46 |
| TS-TCC | 78.89 | 58.51 | 63.10 | 59.04 | 54.99 | 52.79 | 63.96 | 54.18 |
| TF-C | 81.71 | 72.65 | 81.59 | 76.83 | 69.38 | 75.59 | 72.02 | 74.87 |
| PatchTST | 92.68 | 90.87 | 94.51 | 92.07 | 67.03 | 71.96 | 75.57 | 70.09 |
| bioFAME (scratch) | 93.17 | 88.58 | 94.10 | 89.97 | 67.92 | 76.45 | 76.51 | 76.20 |
| bioFAME (unimodal) | 98.05 | 97.07 | 96.63 | 96.40 | 76.58 | 83.28 | 82.85 | 82.63 |
| bioFAME (multimodal) | 98.54 | 96.67 | 98.95 | 97.64 | 78.18 | 84.99 | 84.01 | 83.75 |
| Δ(uni, multi) | ↑0.49 | ↓0.40 | ↑2.32 | ↑1.24 | ↑1.60 | ↑1.71 | ↑1.16 | ↑1.12 |

Table 1: Transfer experiments on unimodal time series. All benchmark models are pretrained on the same single-lead EEG.
All variants of our model are based on the same architecture: bioFAME (scratch) is trained from scratch, bioFAME (unimodal) follows the same pretraining as the baselines, and bioFAME (multimodal) is pretrained on the multimodal version of the data. Model standard deviations are in Appendix A.3.

### Experimental Details

For bioFAME, we used a 4-layer encoder and an 8-head filter with 64 dimensions. The model was trained using an Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.99$, and a learning rate of 0.001. We performed a grid search based on the validation set to select the model hyperparameters (see Appendix B.4). Following prior works, we performed full model fine-tuning on all tasks (see details in Appendix B.2). In contrast to state-of-the-art contrastive architectures (Eldele et al., 2021; Zhang et al., 2022b), we did not apply data augmentation in our architecture, as we found it had minimal impact on performance. We repeated experiments with five random seeds for major results, and three random seeds for ablation experiments (see model variation in Appendix A.3). To benchmark our method, we selected an extensive set of existing state-of-the-art models, including temporal-spatial methods (Shi et al., 2021; Yue et al., 2022), contrastive methods (Kiyasseh et al., 2021; Eldele et al., 2021), and transformers and frequency-aware approaches (Nie et al., 2022; Zhang et al., 2022b). All benchmark models were pretrained on unimodal EEG under the same data split, providing a comprehensive set of models for fair comparison.

### Pretraining on Unimodality

Following previous works (Zhang et al., 2022b), we first performed pretraining on a single-channel EEG from the SleepEDF dataset, and then fine-tuned on a small amount of data from the downstream tasks. The performance of our proposed architecture is shown in Table 1. We show that, with the same unimodal pretraining setup on single-channel EEG, our model consistently outperforms state-of-the-art benchmarks in most experiments, giving ↑4.2% improvements in accuracy. These results demonstrate that bioFAME is effective in transferring to different tasks, with robustness to domain shifts across tasks, subjects, sampling rates, and sensors. Surprisingly, our architecture without any pretraining (scratch) also provides robust performance on many datasets, different from previously reported results (Zhang et al., 2022b). This further demonstrates the robustness of our proposed architecture.

### Extending Pretraining to Multimodality

While the Fpz-Cz EEG channel is shown to be the most informative channel for the pretraining task and typically provides robust prediction performance on its own (Supratak et al., 2017), in this work we explore whether using additional multimodal information from the same task can further boost pretraining performance. As shown in Table 1, for bioFAME, including multimodal information during pretraining consistently provides better results than unimodal pretraining. Training on multimodal data also improves the model’s stability, giving a lower standard deviation, as shown in Appendix B.4. Note that in previous work (Zhang et al., 2022b), including multimodal information hurt performance rather than helping. This suggests that bioFAME can effectively utilize and combine information across modalities, resulting in better performance on downstream tasks.
We hypothesize that pretraining on multiple modalities exposes the model to a more diverse range of frequency components, improving the model’s few-shot generalization.

Ablation Experiments on Transferability  We performed a set of ablation experiments to understand what makes bioFAME robust in the transfer setting (more in Appendix A.1). In Table 2, we first study the effect of the frequency-aware (FA) and frequency-maintain (FM) modules, either by replacing the FA module with a self-attention transformer, or by replacing the FM module with a normal masking procedure. We found that both modules, when applied independently, improve the performance of a baseline variant by a significant margin (∼3%). Combining both modules gives the best performance, further boosting the effect of each individual component (∼5%). We also tested whether it is possible to discard the second encoder at test time, which indicates whether the FA encoder plays the major role in learning. Interestingly, we show that discarding the second encoder at test time gives almost identical performance in the unimodal setting. However, when multimodal information is used for pretraining, discarding the second encoder gives a performance that is lower than the unimodal result, while keeping the second encoder increases the unimodal performance by ∼1% instead (see Table 3). We hypothesize that it is beneficial to retain the second encoder at test time in the multimodal setting because it is responsible for merging the information present across the multimodal data. Finally, in Table 4, we investigate how different patch sizes and masking ratios affect the performance of our model. We show that bioFAME gives stable performance when the patch size is relatively small, performing robustly under a range of masking ratios.

5.2 Multi-modal Evaluations and Visualizations

Datasets and Experimental Details  After verifying the model’s generalization ability on transfer tasks, we investigated how well the model performs when applied to real-world cases in which multimodal information is available at test time. To this end, we systematically studied different combinations of the EEG Fpz-Cz, EEG Pz-Oz, EOG, EMG, and respiration channels of the SleepEDF dataset (Kemp et al., 2000), which are simultaneously recorded. We followed the same train/val/test split as in Eldele et al. (2021), while attaching the multimodal information instead of using only the unimodal information. We utilized the same model setup as in Section 5.1, except that we follow Section 4.2 to extend training and testing to multimodal designs with weight sharing and channel independence. We also implemented two variants of multimodal latent expansion methods, as in Appendix C.

Robustness under Modality Mismatch Scenarios  We consider two modality mismatch scenarios, as shown in Figure 3(A): (i) modality substitution, where one modality is replaced by another modality; and (ii) modality dropout, where only a subset of modalities is present at test time. We show the model’s performance under modality substitution in Figure 3(B), where the model is pretrained with {EEG Fpz-Cz; EOG; EMG}.

Figure 3: Multimodal evaluation results. (A) Two modality mismatch scenarios are considered: modality substitution and modality dropout. (B) When a modality is swapped with another available one, or (C) when modalities are dropped out at test time, our model shows lower performance degradation compared to a robust baseline.
(D) By visualizing the attention weights across modalities, we can understand how modalities are associated with each other.

Each of the pretraining modalities is replaced with another available channel to examine the performance degradation (more details in Appendix B.3). Our model gives better performance than the robust baseline PatchTST (Nie et al., 2022), exhibiting less performance degradation. For modality dropout, we pretrained the model with {EEG Fpz-Cz; EEG Pz-Oz; EOG; EMG}, and we dropped an increasing number of modalities until only one modality was left (see Figure 3(C)). We see that bioFAME is more resistant to unexpected modality dropout in comparison to the baseline. Unlike many other baselines that contain spatial layers, bioFAME can be applied at test time even with an unexpected number of channels, while exhibiting resilience to modality mismatch scenarios. This study further demonstrates that bioFAME is a robust model for real-world scenarios.

Visualizing the Connections Across Modalities  To understand how the information across different channels affects each other, we visualized the averaged attention matrix to examine the relationships across modalities. As shown in Figure 3(D), for each channel (row), the intensity of its attention or connection to the other channels is visualized by the color (red means stronger connections). Interestingly, we notice that while each channel relies on its own information the most, channels tend to focus on the strongest modality, which is the EEG Fpz-Cz channel in our case. Moreover, an interesting asymmetry is observed for EOG-EMG: EOG attends more to EMG, while the reverse does not hold. We hypothesize that this is because facial movement produces motion artifacts in the EOG recorded at the temple, while the opposite connection does not hold. This observation demonstrates that bioFAME can be used by researchers to further understand the information overlap across modalities (Bird et al., 2020).

6 CONCLUSION

In this work, we proposed a frequency-aware masked autoencoder that performs pretraining on multimodal biosignals. Our proposed method leverages a frequency-aware encoder with a fixed-size Fourier-based operator to extract representations of biosignals, and uses a frequency-maintain pretraining module to perform pretraining. We performed extensive empirical experiments to show that (i) our model achieves state-of-the-art performance on a set of transfer experiments, where the models, pretrained on either unimodal or multimodal data, can be adapted to effectively classify time series with varying input lengths, sensors, and sampling rates; and (ii) our model demonstrates resilience to within-modal and across-modal distributional shifts, showing robust performance when applied in the modality mismatch scenarios that are common in real-world applications. While our model provides a good balance between utilizing frequency information and operating in the time domain, we note that, as for other frequency-aware architectures (Li et al., 2020b), it remains underexplored how to interpret the specific band and type of frequency information that takes effect in each downstream task. Exploring how the learned frequency filters can be structured and interpreted will be an exciting line of future research.
Also, in our current formulation, we only consider low-density biosignal recording systems due to the lack of publicly available high-dimensional multimodal biosignal datasets. Given this constraint, our architecture relies on a channel-independent design, which is known to suffer from a capacity-robustness trade-off (Han et al., 2023). Extending and scaling our approach to high-dimensional sensor inputs is another exciting line of future research for modeling comprehensive human states.

REFERENCES

Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. *Advances in Neural Information Processing Systems*, 34:24206–24221, 2021.

Ralph G Andrzejak, Klaus Lehnertz, Florian Mormann, Christoph Rieke, Peter David, and Christian E Elger. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. *Physical Review E*, 64(6):061907, 2001.

Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. Multimae: Multi-modal multi-task masked autoencoders. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVII*, pp. 348–367. Springer, 2022.

Jordan J Bird, Jhonatan Kobylarz, Diego R Faria, Anikó Ekárt, and Eduardo P Ribeiro. Cross-domain mlp and cnn transfer learning for biological signal processing: Eeg and emg. *IEEE Access*, 8:54789–54801, 2020.

E Oran Brigham. *The fast Fourier transform and its applications*. Prentice-Hall, Inc., 1988.

Filipe Canento, Ana Fred, Hugo Silva, Hugo Gamboa, and André Lourenço. Multimodal biosignal sensor data handling for emotion recognition. In *SENSORS, 2011 IEEE*, pp. 647–650. IEEE, 2011.

Wenhao Chai and Gaoang Wang. Deep vision multimodal learning: Methodology, benchmark, and trend. *Applied Sciences*, 12(13):6588, 2022.

Joseph Y Cheng, Hanlin Goh, Kaan Dogrusoz, Oncel Tuzel, and Erdrin Azemi. Subject-aware contrastive learning for biosignals. *arXiv preprint arXiv:2007.04871*, 2020.

Hsiang-Yun Sherry Chien, Hanlin Goh, Christopher M Sandino, and Joseph Y Cheng. Maeeg: Masked auto-encoder for eeg representation learning. *arXiv preprint arXiv:2211.02625*, 2022.

David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. *arXiv preprint arXiv:1609.09106*, 2016.

Virginia R De Sa and Dana H Ballard. Category learning through multimodality sensing. *Neural Computation*, 10(5):1097–1117, 1998.

Charmaine Demanuele, Christopher J James, and Edmund JS Sonuga-Barke. Distinguishing low frequency oscillations within the 1/f spectral behaviour of electromagnetic brain signals. *Behavioral and Brain Functions*, 3(1):1–14, 2007.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Jiaxiang Dong, Haixu Wu, Haoran Zhang, Li Zhang, Jianmin Wang, and Mingsheng Long. Simmtm: A simple pre-training framework for masked time-series modeling. *arXiv preprint arXiv:2302.00861*, 2023.

Thomas Donoghue, Matar Haller, Erik J Peterson, Paroma Varma, Priyadarshini Sebastian, Richard Gao, Torben Noto, Antonio H Lara, Joni D Wallis, Robert T Knight, et al. Parameterizing neural power spectra into periodic and aperiodic components. *Nature neuroscience*, 23(12):1655–1665, 2020.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. *arXiv preprint arXiv:2106.14112*, 2021.
YnaGcMJQ0M
One analogy is the subset of OOD methods that try to fit a density to the test points to do likelihood ratio tests (the density is evaluated on each test datapoint but is the result of learning on all test points). If I understand correctly, though you briefly acknowledge using geometry of the whole test set as a motivation, you eventually do not return to a discussion on this particular aspect of the setup. I think it's worth differentiating between methods that use the whole test set for each test point's decision or not. Could you provide more discussion on this (what to take away from it, and what could be inspired by it in the future)? You could potentially consider introducing some new tasks as well such as making decisions on sets of points. Apologies if you discuss this in more detail and I missed it.
Detecting Out-of-Distribution Samples via Conditional Distribution Entropy with Optimal Transport

Anonymous authors Paper under double-blind review

Abstract

When deploying a trained machine learning model in the real world, it is inevitable to receive inputs from out-of-distribution (OOD) sources. For instance, in continual learning settings, it is common to encounter OOD samples due to the non-stationarity of a domain. More generally, when a set of test inputs is available, the existing rich line of OOD detection solutions, especially the recent promise of distance-based methods, falls short in effectively utilizing the distribution information carried by training samples and test inputs. In this paper, we argue that empirical probability distributions that incorporate geometric information from both training samples and test inputs can be highly beneficial for OOD detection when a set of test inputs is available. To this end, we propose to model OOD detection as a discrete optimal transport problem. Within the framework of optimal transport, we propose a novel score function, the conditional distribution entropy, to quantify the uncertainty of a test input being an OOD sample. Our proposal inherits the merits of certain distance-based methods while eliminating the reliance on distribution assumptions, a-priori knowledge, and specific training mechanisms. Extensive experiments conducted on benchmark datasets demonstrate that our method outperforms its competitors in OOD detection.

1 Introduction

Training a machine learning model often assumes that the training samples and test inputs are drawn from the same distribution. However, when deploying a trained model in the open world, out-of-distribution (OOD) inputs, which come from a distribution different from that of the training samples, i.e., the in-distribution (ID) samples, are inevitable. For example, in continual learning settings, with the inherent non-stationarity of a domain, it is typical to observe test samples that are out-of-distribution w.r.t. the training set (Garg et al., 2023). Ignoring OOD inputs, or handling them with overconfidence, leads to unwanted model predictions. Therefore, a trustworthy machine learning model should keep a sharp lookout for OOD versus ID inputs.

A flurry of works (Hendrycks & Gimpel, 2017; Sastry & Oore, 2020; Fang et al., 2022; Fort et al., 2021; Liu et al., 2020; Nandy et al., 2020) has been proposed for OOD detection; recently, distance-based OOD detection (Sehwag et al., 2021; Sun et al., 2022) has drawn much research attention for its promising performance. Distance-based methods utilize metrics (Weinberger & Saul, 2009; Kulis et al., 2013) in the feature representation space to differentiate OOD inputs from ID samples, with the built-in assumption that an OOD sample stays relatively far from ID samples. For example, some works (Sehwag et al., 2021; Ming et al., 2023) use statistical information (e.g., mean, variance) of the in-distribution and calculate the Mahalanobis distance as the score function. The performance then depends on how well the parameterized Gaussian distribution fits the real distribution of the training data (Morteza & Li, 2022; Maciejewski et al., 2022). However, the distributional assumption may not hold in some non-stationary scenarios, such as continual learning or domain adaptation, presenting a pitfall in outlier detection and giving rise to the risk of inconsistent estimation.
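For concreteness, a minimal numpy sketch of such a Mahalanobis-style score, assuming class-conditional Gaussians with a tied covariance in the spirit of Lee et al. (2018); this illustrates the baseline being discussed, not our method:

```python
import numpy as np

def mahalanobis_score(train_feats, train_labels, test_feat):
    """Negative minimum class-conditional Mahalanobis distance.

    Assumes per-class Gaussian features with a shared (tied) covariance.
    Higher score = more likely ID.
    """
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0) for c in classes])
    # Center each training feature by its class mean, then estimate a shared precision.
    centered = train_feats - means[np.searchsorted(classes, train_labels)]
    prec = np.linalg.pinv(centered.T @ centered / len(train_feats))
    d = test_feat - means                          # (num_classes, dim)
    dists = np.einsum("cd,de,ce->c", d, prec, d)   # per-class quadratic forms
    return -dists.min()
```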
To alleviate this, Sun et al. (2022) propose to use only the distance to the k-th nearest training sample of a test input as the score function for OOD detection. While simple and effective, the method is rooted in pair-wise distance comparison, leaving untapped the potential of population-wise information, which is exposed in continual learning settings where a set of accessible test inputs is provided. The above limitations motivate us to study the following question: Can we leverage the empirical distributions of both training samples and test inputs, together with geometric information, to discriminate out-of-distribution data?

In this paper, we claim that empirical distributions incorporating geometric information are beneficial for OOD detection in the presence of a set of accessible test inputs. This idea presents a significant departure from previous works in several key aspects. First, the idea focuses on analyzing distributional discrepancies while incorporating geometric structure information in the feature space, which distinguishes it from pair-wise distance comparison and allows it to leverage population-wise information to improve performance. Second, the empirical distribution refers to the observed data distribution rather than one conforming to a hypothesis, potentially eliminating distribution assumptions (e.g., the multivariate Gaussian assumption behind the Mahalanobis distance). Third, by considering multiple test inputs jointly, the mutual benefits of test inputs for OOD detection can be fully exploited, which is valuable in scenarios such as continual learning and domain adaptation, where we have access to a set of test inputs or even the entire test set.

Following this line of thought, we propose a novel OOD detection method based on optimal transport theory. We construct empirical probability measures for training samples and test inputs, which lifts the Euclidean space of feature representations to a probability space. There are two advantages of doing so: 1) the empirical probability measures utilize the distribution information without assumptions about the underlying distribution; 2) this enables measuring the discrepancy between probability measures, which captures the significant differences between their corresponding supports, i.e., the training and test samples. Combining pair- and population-wise information, optimal transport provides a geometric way to measure the discrepancy between empirical probability measures, forming a basis for discriminating OOD samples. Then, to measure the uncertainty of a test input being an OOD sample, we propose a novel score function, referred to as the conditional distribution entropy. The sensitivity of the conditional distribution entropy in capturing OOD inputs is enabled by the paradigm of mass splitting under marginal constraints in discrete optimal transport, where the mass transported from an OOD input to the training samples is dominated by that of ID test inputs. In particular, an ID input with a certain (concentrated) conditional transport plan corresponds to a low conditional distribution entropy, while an OOD input with an uncertain conditional transport plan corresponds to a high conditional distribution entropy.

We conduct extensive experiments on benchmark datasets to gain insights into our proposals. The results show that our proposed method is superior to the baseline methods. In particular, on challenging tasks such as CIFAR-100 vs.
CIFAR-10, our method achieves 14.12% higher AUROC and 11.12% higher AUPR than the state-of-the-art solution KNN+.

2 PRELIMINARIES

2.1 Problem Definition

Definition 2.1 (OOD Detection). Let \( \mathbb{X} \) be the sample space and \( \mathbb{Y} = \{1, 2, ..., K\} \) be the label space. Given a training dataset \( D_{tr} \), we assume the data is sampled from the joint distribution \( P_{xy}^{tr} \) over the joint space \( \mathbb{X} \times \mathbb{Y} \). A trustworthy machine learning model is expected to not only accurately predict on known ID inputs, but also identify unknown OOD inputs (Sun et al., 2022). In the open world, the test inputs \( D_{te} \) consist of both ID and OOD data. Given a set of test inputs drawn from a mixture of the ID distribution \( P_{x}^{id} \) and an OOD distribution \( P_{x}^{ood} \), the goal of OOD detection is to identify whether an input \( x \in \mathbb{X} \) is from ID (\( P_{x}^{id} \)) or OOD (\( P_{x}^{ood} \)).

2.2 Optimal Transport

Optimal transport (Villani, 2003) seeks an optimal transport plan between two probability measures at minimal cost, measured by a Wasserstein distance. The basic formulation is as follows; more details can be found in (Peyré & Cuturi, 2018).

Wasserstein Distance. Let \( S \) be a locally complete and separable metric space and \( \mathcal{P}(S) \) be the set of Borel probability measures on \( S \). For any \( \mathcal{X}, \mathcal{X}' \subset S \) and probability measures \( \mu \in \mathcal{P}(\mathcal{X}) \) and \( \nu \in \mathcal{P}(\mathcal{X}') \), optimal transport defines a Wasserstein distance between \( \mu \) and \( \nu \), formulated as

\[ W_p(\mu, \nu) := \left( \inf_{\pi \in \Pi(\mu, \nu)} \int_{\mathcal{X} \times \mathcal{X}'} \|x - x'\|^p \, d\pi(x, x') \right)^{\frac{1}{p}}, \quad p \geq 1, \tag{1} \]

where \( \Pi(\mu, \nu) \) is the set of joint probability measures with marginals \( \mu \) and \( \nu \).

Discrete Optimal Transport. Let \( \Delta_n = \{ \alpha \in \mathbb{R}_+^n \mid \sum_{i=1}^{n} \alpha_i = 1 \} \) be the \( n \)-dimensional probability simplex. Consider two empirical probability measures \( \mu = \sum_{i=1}^{n} \alpha_i \delta_{x_i} \) and \( \nu = \sum_{j=1}^{m} \beta_j \delta_{x'_j} \), defined on metric spaces \( \mathcal{X} \) with support \( \{x_i\}_{i=1}^{n} \) and \( \mathcal{X}' \) with support \( \{x'_j\}_{j=1}^{m} \), respectively. Here, the weight vectors \( \alpha = (\alpha_1, \alpha_2, ..., \alpha_n) \) and \( \beta = (\beta_1, \beta_2, ..., \beta_m) \) live in \( \Delta_n \) and \( \Delta_m \), respectively, and \( \delta \) stands for the Dirac unit mass function. Then, given a transport cost \( c : \mathcal{X} \times \mathcal{X}' \rightarrow \mathbb{R}_+ \), the discrete optimal transport between the probability measures \( \mu \) and \( \nu \) can be formalized as

\[ W_p^c(\mu, \nu) := \min_{P \in \Pi(\mu, \nu)} \langle C, P \rangle_F \quad s.t. \quad P 1_m = \mu, \quad P^T 1_n = \nu, \tag{2} \]

where \( C \in \mathbb{R}_+^{n \times m} \) is the transport cost matrix whose element \( c_{ij} \) represents the unit transport cost from \( x_i \) to \( x'_j \), \( P \in \mathbb{R}_+^{n \times m} \) is the transport plan with transpose \( P^T \), and \( 1 \) denotes the all-ones vector. All feasible transport plans constitute the transport polytope \( \Pi(\mu, \nu) \). The notation \( \langle C, P \rangle_F \) is the Frobenius inner product of matrices, which equals \( \text{tr}(C^T P) \).
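For intuition, Equation 2 is a linear program over the transport polytope and can be solved exactly for small instances. A minimal numpy/scipy sketch (purely illustrative; the method in Section 3 uses entropic regularization instead):

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(mu, nu, C):
    """Solve min_P <C, P> s.t. P @ 1 = mu, P.T @ 1 = nu, P >= 0 exactly."""
    n, m = C.shape
    # Row-marginal constraints: sum_j P[i, j] = mu[i]
    A_row = np.zeros((n, n * m))
    for i in range(n):
        A_row[i, i * m:(i + 1) * m] = 1.0
    # Column-marginal constraints: sum_i P[i, j] = nu[j]
    A_col = np.zeros((m, n * m))
    for j in range(m):
        A_col[j, j::m] = 1.0
    res = linprog(C.ravel(), A_eq=np.vstack([A_row, A_col]),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun  # optimal plan and transport cost

# Example: two uniform measures over random 2-D supports.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 2)), rng.normal(size=(3, 2))
C = np.linalg.norm(x[:, None] - y[None, :], axis=-1)  # pairwise cost matrix
P, cost = discrete_ot(np.full(4, 0.25), np.full(3, 1 / 3), C)
```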
### 3 Method

Overview. In this section, we study the method for OOD detection based on optimal transport theory. In Section 3.1, we first construct empirical probability measures for training samples and test inputs; we then cast the distributional discrepancy between the probability measures as an entropic regularized optimal transport problem, yielding the optimal transport plan. In Section 3.2, we study how to use the optimal transport plan to model the score function for OOD detection. In Section 3.3, we investigate how to leverage supervised contrastive training (Khosla et al., 2020) to extract compact feature representations. Lastly, we extend our proposal to the unsupervised setting, where training data come without labels, which shows the generality of our proposal. The framework of our method is shown in Figure 3 (see Appendix A).

#### 3.1 Optimal Transport for OOD Detection

Feature Extraction. Given a training dataset with \( N \) samples \( D_{tr}^{in} = \{(x_i, y_i)\}_{i=1}^{N} \), where \( x_i \) represents the \( i \)-th sample and \( y_i \) denotes the corresponding label, the feature extraction can be represented as a function \( f : \mathbb{X} \rightarrow \mathbb{V} \) that maps an input sample from the \( n \)-dimensional input space \( \mathbb{X} \subseteq \mathbb{R}^n \) to a \( d \)-dimensional feature space \( \mathbb{V} \subseteq \mathbb{R}^d \). In this way, we obtain a set of feature representations \( \{f(x_i)\}_{i=1}^{N} \) of the training samples, which are used for the subsequent OOD detection task.

Empirical Probability Measure. After the features are extracted, to utilize empirical distributions, we first construct a probability measure \( \mathcal{P} \) over the low-dimensional feature space \( \mathbb{V} \), which lifts the Euclidean feature space \( \mathbb{V} \) to a probability space \( \mathbb{P} = \{\mathbb{V}, \sigma, \mathcal{P}\} \), where \( \sigma \) denotes a well-defined \( \sigma \)-algebra (Tao, 2011). Let the cardinalities of the training dataset \( D_{tr}^{in} \) and the test inputs \( D_{te} \) be \( N \) and \( M \), respectively. We define the discrete empirical probability measures of \( D_{tr}^{in} \) and \( D_{te} \) as

\[ \mu = \sum_{i=1}^{N} \alpha_i \delta_{x_i}, \quad \nu = \sum_{j=1}^{M} \beta_j \delta_{x'_j}, \]

where \( \delta \) is the Dirac unit mass function and \( x_i \) (likewise \( x'_j \)) denotes the \( i \)-th feature representation \( f(x_i) \), i.e., the support of the probability measure \( \mu \). For simplicity, we use uniform weights for \( \alpha \) and \( \beta \).

---
1 We normalize the features to a hypersphere, where the inner product or cosine distance between feature vectors is a natural choice. Please refer to Section 3.3 for more details.

Algorithm 1 Conditional Distribution Entropy Score Function
Input: probability measures \( \mu \) and \( \nu \), cost matrix \( C \), regularization coefficient \( \lambda \)
Initialize \( K = \exp(-C/\lambda) \), \( u \in \mathbb{R}_+^N \), \( v \in \mathbb{R}_+^M \)
while \( (u, v) \) not converged do
  \( u = \mu \oslash (Kv) \)  {\( \oslash \): point-wise division}
  \( v = \nu \oslash (K^T u) \)
end while
\( P^* \leftarrow \text{diag}(u) K \text{diag}(v) \)
Initialize Res as an empty array
for \( j = 1 \) to \( M \) do
  append condEntropy(\( \text{col}_j(P^*) \)) to Res
end for

Entropic Regularized Optimal Transport. The constructed probability measures on the training samples and test inputs enable the measurement of their distributional discrepancy.
The question is then how distance information can be encoded in association with the discrepancy of probability measures so as to get the merits of both. As mentioned in Section 2.2, optimal transport provides a geometric manner to compare probability measures, paving the road for measuring both pair- and population-wise information. However, conventional optimal transport incurs cubic time complexity, which is prohibitive in real applications. To tackle this challenge, we formulate the OOD detection problem as a discrete optimal transport problem with entropic regularization:

\[ L_\lambda(\mu, \nu, C) = \min_{P \in \Pi(\mu, \nu)} \langle C, P \rangle_F - \lambda E(P) \quad \text{s.t.} \quad P 1_M = \mu, \quad P^T 1_N = \nu, \quad \lambda \geq 0, \tag{3} \]

where \( E(P) = -\sum_{ij} p_{ij} (\log(p_{ij}) - 1) \) is the entropy function of the transport plan formalized in Definition 3.1, and \( C \in \mathbb{R}^{N \times M} \) is the matrix of pairwise cosine distances between the supports. The element \( p_{ij} \) of the transport plan \( P \in \mathbb{R}^{N \times M} \) denotes the mass transported from probability measure \( \mu_i \) to \( \nu_j \). By solving the problem above, we obtain an optimal transport plan \( P^* \), as described in Theorem 3.2.

Definition 3.1. Suppose two discrete random variables \( U \sim \mu \) and \( V \sim \nu \) follow \( (U, V) \sim \pi(\mu, \nu) \), where \( \pi(\mu, \nu) \) is the joint distribution with marginals \( \mu \) and \( \nu \). The joint entropy of the random variables \( U \) and \( V \) is defined as

\[ H(U, V) = -\sum_i \sum_j \pi_{ij} \log(\pi_{ij}). \]

The above definition indicates that the transport plan \( P \) is essentially a joint distribution with marginals \( \mu \) and \( \nu \).

Theorem 3.2. The problem \( L_\lambda(\mu, \nu, C) \) has a unique optimal solution.

Proof sketch. The entropy function \( E(P) \) of the transport plan is concave, which can be evidenced by computing its Hessian with respect to the transport plan. Thus, the problem is \( \lambda \)-strongly convex w.r.t. the transport plan and therefore has a unique optimal solution.

Theorem 3.2 shows that, by solving the discrete entropic regularized optimal transport problem in Equation 3, we obtain a unique optimal transport plan \( P^* \). For more details, we refer to (Boyd et al., 2004; Bertsimas & Tsitsiklis, 1997).

Entropic Regularized OT in the Dual. The reason we study the dual problem of entropic regularized optimal transport is that the dual allows us to obtain an optimal transport plan for the primal problem (Equation 3) via matrix scaling (Sinkhorn & Knopp, 1967), which is computationally efficient (Cuturi, 2013). Moreover, the optimal transport plan is used to model the conditional distribution entropy score function discussed in Section 3.2.

Proposition 3.3. The optimal transport plan of the dual problem has the form

\[ P^*_{ij} = e^{(u_i - C_{ij} + v_j)/\lambda} = u_i K_{ij} v_j, \]

where \( u \) and \( v \) are dual variables and \( K = e^{-C/\lambda} \).

Proof sketch. By introducing the Lagrangian associated with the primal problem (Equation 3), we can transform the primal problem with marginal constraints into an unconstrained optimization problem over the transport plan and the dual variables. Taking derivatives with respect to the transport plan then leads to the above result.

Proposition 3.3 indicates that the optimal transport plan has a matrix multiplication form and satisfies the marginal constraints. Thus, computing it can be regarded as a matrix scaling problem, which can be solved with Sinkhorn iterations (Sinkhorn & Knopp, 1967) in quadratic time complexity (see the sketch below).
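A minimal numpy sketch of this scoring pipeline, i.e., Sinkhorn scaling (Proposition 3.3) followed by the column-wise conditional distribution entropy of Algorithm 1. Uniform marginals are assumed, and numerical stabilization (e.g., log-domain updates) is omitted:

```python
import numpy as np

def conditional_entropy_scores(C, lam=0.1, n_iters=200):
    """Sinkhorn iteration followed by the per-test-input conditional
    distribution entropy score.

    C: (N, M) cost matrix between N training features and M test features.
    Returns one score per test input; higher = more likely OOD.
    """
    N, M = C.shape
    mu = np.full(N, 1.0 / N)          # uniform marginal over training samples
    nu = np.full(M, 1.0 / M)          # uniform marginal over test inputs
    K = np.exp(-C / lam)
    u, v = np.ones(N), np.ones(M)
    for _ in range(n_iters):          # matrix scaling until marginals match
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    P = u[:, None] * K * v[None, :]   # optimal plan P* = diag(u) K diag(v)
    # Each column of P, normalized, is the conditional distribution pi(u | v_j).
    cond = P / P.sum(axis=0, keepdims=True)
    return -(cond * np.log(cond + 1e-12)).sum(axis=0)
```

An ID input concentrates its column of \( P^* \) on a few nearby training samples and receives a low score, whereas an OOD input spreads its mass almost uniformly and receives a score close to \( \log N \).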
### 3.2 Conditional Distribution Entropy Measures OOD Samples

In this section, we introduce a novel score function, derived from the optimal transport plan, to examine whether an input sample is OOD. The transport plan is a joint probability distribution relevant to the discrepancy between the probability measures. Owing to the inherent uncertainty of the transport plan, we use entropy to model a score function, referred to as the conditional distribution entropy score function. We then discuss the relationship between entropic regularized optimal transport and the proposed conditional distribution entropy score function. Lastly, we give some additional results in Appendix A.

**Definition 3.4 (Conditional Distribution).** For two discrete random variables \( (U, V) \sim \pi(\mu, \nu) \), the conditional distribution of \( U \) given a value \( v \) is defined as

\[ \pi_{U|V}(u|v) = \frac{\pi_{UV}(u, v)}{\pi_V(v)}, \quad \forall u \in \text{dom}(U). \tag{4} \]

**Uncertainty Modeling.** Definition 3.1 shows that the transport plan \( P \) can be viewed as a joint probability distribution in the form of a matrix. Given a test input, there is a corresponding column in the transport plan, which is essentially a conditional probability distribution, as defined in Definition 3.4. Accordingly, the entropy of this conditional probability distribution indicates the level of uncertainty regarding a test input being OOD. Formally, we regard the transport that happens at the test input \( v \in \text{dom}(V) \) as a random event, represented by a random variable \( T \) that follows the conditional distribution, i.e., \( T \sim \pi_{U|V}(u|v) \). The score function is then the entropy of the conditional distribution \( H(U|V = v) \), as given in Definition 3.5.

**Definition 3.5 (Conditional Distribution Entropy).** For two discrete random variables \( (U, V) \sim \pi(\mu, \nu) \), the entropy of the conditional distribution given a value \( v \) is defined as

\[ H(U|V = v) = -\sum_{u \in \text{dom}(U)} \pi(u|v) \log \pi(u|v). \tag{5} \]

By the principle of maximum entropy, the conditional distribution entropy \( H(U|V = v) \) is larger if the corresponding transport plan is more uniform. In other words, a test input has a higher chance of being an OOD sample if its transport plan carries more uncertainty. Conversely, the entropy value is smaller if the corresponding transport plan is sparser. Figure 1 illustrates the uncertainty of the transport plan. The training samples are \( U = \{u_i\}_{i \leq 6} \) and the test inputs are \( \{v_j\}_{j \leq 2} \), where \( v_1 \) is an ID sample and \( v_2 \) is an OOD sample. It can be observed that the transport from \( v_2 \) to \( \{u_i\}_{i \leq 6} \) is almost uniform. In contrast, the transport from \( v_1 \) to \( \{u_i\}_{i \leq 6} \) is sparse, since a large portion of the mass is transported from \( v_1 \) to \( u_5 \) and only a small portion from \( v_1 \) to \( U - \{u_5\} \). Thus, the uncertainty of the transport from \( v_2 \) to \( U \) is high, whereas the uncertainty of the transport from \( v_1 \) to \( U \) is low.

**Proposition 3.6.** Let \( (U, V) \sim \pi(\mu, \nu) \). As the regularization coefficient \( \lambda \) converges to \( +\infty \), the conditional entropy given a value \( v \), \( H(U|V = v) \), converges to \( \log|\text{dom}(U)| \):
\[ H(U|V=v) \xrightarrow{\lambda \to +\infty} \log|\text{dom}(U)|. \]

The above proposition reveals the inner relation between entropic regularized OT and the conditional distribution entropy score function, i.e., how the entropic regularization coefficient \( \lambda \) affects the performance of the conditional entropy score for OOD detection. As \( \lambda \) approaches positive infinity, the performance gradually decreases, since the conditional probability distribution degenerates to a maximum-entropy distribution.

### 3.3 Contrastive Training for Feature Representation

Our method is training-agnostic; it supports both supervised and unsupervised settings. Therefore, we present the two kinds of feature training used in our method, i.e., supervised contrastive training and self-supervised contrastive training (more details in Appendix A). The procedures of our method, including feature extraction and OOD detection, are shown in Algorithm 2.

**Supervised Contrastive Loss.** In this work, we employ supervised contrastive training, SupCon (Khosla et al., 2020; Tack et al., 2020), to obtain feature representations. The idea of SupCon is to pull similar samples together and push dissimilar ones apart. As shown in Figure 4, supervised contrastive training consists of three major components: data augmentation, an encoder, and a projection head. The mechanism works as follows. For each input sample \( (x_i, y_i), i \in [1, N] \), a pair of augmented samples, i.e., \( (x^1_i, y_i) \) and \( (x^2_i, y_i) \), is generated by the data augmentation. Next, the pair of augmented samples is separately fed to the encoder \( f \), producing a pair of normalized representation vectors \( (r^1_i, r^2_i) \). The pair of representation vectors is then mapped to two low-dimensional normalized outputs \( (z^1_i, z^2_i) \) by the projection head. At each iteration, we optimize the following loss function:

\[ \text{Loss} = \sum_{i=1}^{2N} -\log \frac{1}{|\mathcal{I}(y_i)|} \sum_{k \in \mathcal{I}(y_i)} \frac{e^{z^T_i z_k/\tau}}{\sum_{j=1, j \neq i}^{2N} e^{z^T_i z_j/\tau}}, \]

where \( \mathcal{I}(y_i) \) is the index set of the samples sharing the label \( y_i \), excluding \( i \) itself, \( |\mathcal{I}(y_i)| \) is its cardinality, and \( \tau \) is a scalar temperature parameter.

### 4 Experiments

In this section, we evaluate our proposed OOD detection method through extensive empirical studies on multiple ID and OOD datasets. Section 4.1 presents the experimental setup. Section 4.2 reports the empirical studies, which demonstrate that our proposed method achieves state-of-the-art performance on multiple benchmark datasets. Section 4.3 conducts detailed ablation studies and analyses to offer insights into our proposals.

#### 4.1 Experimental Setup

**Datasets.** We evaluate our method on a series of benchmark datasets, including CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). Besides, we conduct experiments on multiple real image datasets, including LSUN (Yu et al., 2015), Places365 (Zhou et al., 2017), Textures (Cimpoi et al., 2014), and Tiny ImageNet.

**Evaluation Metrics.** We use the following metrics to evaluate our method: (1) the false positive rate (FPR95) of OOD samples when the true positive rate of ID samples is at 95%; (2) the area under the receiver operating characteristic curve (AUROC); (3) the area under the precision-recall curve (AUPR).

---
2 Please refer to Appendix A for proof details.
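All three metrics can be computed from per-input scores with scikit-learn. A minimal sketch; the sign convention assumes higher scores indicate OOD, as with our conditional distribution entropy, and ID is treated as the positive class:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    """FPR at 95% TPR, AUROC, and AUPR for ID-vs-OOD separation.

    scores_id / scores_ood: 1-D numpy arrays of OOD scores (higher = more OOD).
    """
    y = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    s = -np.concatenate([scores_id, scores_ood])  # negate so higher = more ID
    auroc = roc_auc_score(y, s)
    aupr = average_precision_score(y, s)
    fpr, tpr, _ = roc_curve(y, s)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]       # FPR at the 95% TPR threshold
    return fpr95, auroc, aupr
```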
Table 1: Comparison with state-of-the-art methods (including distance-based and non-distance-based methods) with CIFAR-100 as ID and five other datasets as OOD. Top: methods trained without contrastive training; bottom: methods trained with contrastive training. The best results are in bold. Baseline results are taken from Ming et al. (2023).

| Method | SVHN FPR↓ | SVHN AUROC↑ | iSUN FPR↓ | iSUN AUROC↑ | LSUN FPR↓ | LSUN AUROC↑ | Textures FPR↓ | Textures AUROC↑ | Places365 FPR↓ | Places365 AUROC↑ |
|-----------------|------------|-------------|-----------|-------------|-----------|-------------|---------------|----------------|----------------|----------------|
| MSP (Hendrycks & Gimpel, 2017) | 78.89 | 79.80 | 84.61 | 76.52 | 83.47 | 75.28 | 86.51 | 72.53 | 84.38 | 74.21 |
| ODIN (Liang et al., 2018) | 70.16 | 84.88 | 79.54 | 79.16 | 76.36 | 80.10 | 82.28 | 75.23 | 82.16 | 75.19 |
| Mahalanobis (Lee et al., 2018) | 87.09 | 80.62 | 83.18 | 78.83 | 84.49 | 79.43 | 61.72 | 84.87 | 84.63 | 73.89 |
| Energy (Liu et al., 2020) | 66.91 | 85.25 | 66.52 | 84.49 | 59.77 | 86.69 | 79.01 | 79.96 | 81.41 | 76.37 |
| GODIN (Hsu et al., 2020) | 74.64 | 84.03 | 94.25 | 65.26 | 93.33 | 67.22 | 86.52 | 69.39 | 89.13 | 68.96 |
| SSD (Sehwag et al., 2021) | 70.18 | 80.19 | 83.07 | 68.89 | 81.12 | 73.22 | 59.30 | 82.91 | 89.34 | 64.82 |
| KNN (Sun et al., 2022) | 60.97 | 84.20 | 71.87 | 81.90 | 71.40 | 78.85 | 70.30 | 81.32 | 78.95 | 76.89 |
| Ours | **2.42** | **99.49** | **11.69** | **97.36** | **21.88** | **96.12** | **45.35** | **90.76** | **57.24** | **86.38** |

| Method | SVHN FPR↓ | SVHN AUROC↑ | iSUN FPR↓ | iSUN AUROC↑ | LSUN FPR↓ | LSUN AUROC↑ | Textures FPR↓ | Textures AUROC↑ | Places365 FPR↓ | Places365 AUROC↑ |
|-----------------|------------|-------------|-----------|-------------|-----------|-------------|---------------|----------------|----------------|----------------|
| Proxy Anchor (Kim et al., 2020) | 87.21 | 82.45 | 70.01 | 84.96 | 37.19 | 91.68 | 65.64 | 84.99 | 70.10 | 79.84 |
| CE+SimCLR | 24.82 | 94.45 | 66.52 | 83.82 | 56.64 | 89.90 | 63.74 | 82.01 | 86.63 | 71.48 |
| CSI (Tack et al., 2020) | 44.53 | 92.65 | 76.62 | 74.98 | 75.58 | 83.78 | 61.61 | 86.47 | 79.08 | 76.27 |
| CIDER τ=0.5 (Ming et al., 2023) | 13.86 | 97.07 | 53.96 | 88.59 | 32.62 | 93.62 | 44.41 | 90.46 | 78.38 | 78.64 |
| SSD+ (Sehwag et al., 2021) | 16.66 | 96.96 | 77.05 | 83.88 | 44.65 | 91.98 | 44.21 | 90.98 | 74.48 | 79.47 |
| KNN (Sun et al., 2022) | 37.26 | 93.12 | 71.58 | 82.48 | 57.97 | 85.63 | 49.60 | 89.10 | 75.53 | 78.44 |
| Ours | **12.77** | **97.64** | **30.01** | **94.18** | **9.55** | **98.22** | **38.47** | **92.25** | **52.15** | **89.93** |

**Baselines.** We consider a series of competitors, which fall into two categories. The first category consists of models learned without contrastive training, including MSP (Hendrycks & Gimpel, 2017), ODIN (Liang et al., 2018), Energy (Liu et al., 2020), Mahalanobis (Lee et al., 2018), and GODIN (Hsu et al., 2020). The second category consists of models learned with contrastive training, including Proxy Anchor (Kim et al., 2020), CSI (Tack et al., 2020), CE+SimCLR (Winkens et al., 2020), and CIDER (Ming et al., 2023). Moreover, for a fair comparison, we reproduce two state-of-the-art methods, SSD and KNN, with a ResNet-18 network following the parameters in (Sehwag et al., 2021; Sun et al., 2022).

**Implementation Details.** We use the same network configuration across all trainings.
The ResNet-18 architecture (He et al., 2016) is employed as the backbone, trained using an SGD optimizer with the following settings: a momentum of 0.9, a weight decay of $10^{-4}$, and a batch size of 512. The learning rate is initialized to 0.5 with cosine annealing. The temperature $\tau$ is 0.1. The dimension of the encoder output is 512, and the dimensionality of the projection head is 128. The network is trained for 500 epochs. We set the entropic regularization coefficient $\lambda$ to 0.5 and 0.1 for SimCLR and SupCon, respectively.

4.2 Results

**Performance Comparison.** We compare the performance of our method with the baseline methods in Table 1. We use CIFAR-100 as ID for training, and the mix of the CIFAR-100 test set and a series of other datasets, including SVHN, LSUN, Textures, and Places365, as test inputs. For the distance-based methods and our method, a lightweight network backbone, i.e., ResNet-18, is used. We draw two major observations from the results. First, the performance of our proposed method dominates all its competitors. For example, on LSUN, the FPR of our method is merely 21.4% of that of the second-best method, SSD+; on Places365, the AUROC of our method is 10.1% higher than that of Proxy Anchor. Second, contrastive training helps to improve OOD detection performance on a large portion of the datasets. For example, on LSUN, the FPR of our method improves by over 12% when incorporating contrastive training. Similar observations hold for the other methods.

Table 2: Unsupervised OOD detection in AUROC.
| ID (OOD) | CIFAR-10 (CIFAR-100) | CIFAR-10 (SVHN) | CIFAR-100 (SVHN) |
|--------|----------------------|-----------------|------------------|
| Autoencoder (Hawkins et al., 2005) | 51.3 | 2.5 | 3.0 |
| VAE (Kingma & Welling, 2013) | 52.8 | 2.4 | 2.6 |
| PixelCNN++ (Salimans et al., 2017) | 52.4 | 15.8 | - |
| Deep-SVDD (Ruff et al., 2018) | 52.1 | 14.5 | 16.3 |
| Rotation-loss (Gidaris et al., 2018) | 81.2 | 97.9 | 94.4 |
| SSD (Sehwag et al., 2021) | 89.2 | 99.1 | 95.6 |
| Ours | **90.0** | **99.2** | **98.4** |

Table 3: Hard OOD detection task (CIFAR-100 vs. CIFAR-10).
| Training | Method | AUROC↑ | AUPR↑ | FPR↓ |
|--------|--------|--------|-------|------|
| SimCLR | SSD | 67.25 | 63.30 | 88.72 |
| | Ours | **79.94** | **74.81** | **79.47** |
| SupCE | KNN | 76.94 | 72.98 | 81.78 |
| | SSD | 61.89 | 59.34 | 91.00 |
| | Ours | **80.56** | **75.67** | **79.65** |
| SupCon | KNN+ | 70.10 | 67.83 | 85.91 |
| | SSD+ | 68.03 | 64.34 | 87.88 |
| | Ours | **84.22** | **78.95** | **77.00** |

**Unsupervised OOD Detection.** We show the results of unsupervised OOD detection in Table 2. The datasets used for the experiments are selected following the setting of Sehwag et al. (2021). Our method outperforms all its competitors in unsupervised OOD detection. For example, on CIFAR-100 vs. SVHN, the AUROC of our method is 2.8% higher than that of SSD, the second-best method, showing the superiority and generality of our method.

**Performance on a Hard Task.** OOD detection is known to be challenging when OOD samples are semantically similar to ID samples (Winkens et al., 2020). We choose the task of CIFAR-100 vs. CIFAR-10, which is commonly adopted as a hard task (Ming et al., 2023), for the evaluation of OOD detection algorithms. We evaluate our method on this hard task by comparing against two state-of-the-art distance-based methods, SSD and KNN. The consistent dominance of our method is observed under all three training mechanisms and in terms of all evaluation metrics.
This shows the superiority of our method on hard tasks.

Figure 2: Performance with different numbers of test inputs on CIFAR-100 vs. CIFAR-10 (in AUROC).

**Performance with Different Numbers of Test Inputs.** For simplicity, we have so far assumed the availability of the full test set, which may be a strict setting. Therefore, we divide the test set into a number of equally sized batches and report the results under different numbers of test inputs. As shown in Figure 2, the performance of our method gradually increases and then converges, whereas the performance of SSD does not vary with the number of test inputs. Notably, our method outperforms SSD once the number of test samples exceeds 32, regardless of the training loss, which further demonstrates the superiority of our method.

4.3 Ablation Studies

In this section, we study the effect of the different components of our proposed method on OOD detection performance. For consistency, all ablation studies are based on the hard task CIFAR-100 vs. CIFAR-10 (in AUROC) under supervised contrastive training.

**Effect of temperature $\tau$.** We study the effect of the temperature on OOD detection performance, tuning its value from 0.001 to 0.5 for SupCon and SimCLR, as shown in Figure 3(a). The results show that, as the temperature increases, the AUROC tends to increase for supervised contrastive training, whereas the temperature has less impact on the performance of unsupervised contrastive training.

**Effect of Contrastive Training.** We study the effect of contrastive training on the performance under SupCE and SupCon in Figure 3(b). The performance is examined on three tasks: CIFAR-10 vs. CIFAR-100, CIFAR-100 vs. CIFAR-10, and CIFAR-10 vs. SVHN. The results show that SupCon outperforms SupCE across all three tasks, indicating that contrastive training helps to improve OOD detection performance.

**Effect of Training Epochs.** We examine the effect of the number of training epochs in Figure 3(c). As the number of epochs increases, the AUROC of SupCE increases, while only insignificant variation is observed for both SimCLR and SupCon. This implies that the number of training epochs does not have a significant effect on methods based on contrastive training.

**Effect of Regularization Coefficient $\lambda$.** In Section 3.2, we theoretically analyzed how the entropic regularization coefficient $\lambda$ affects the score function. Proposition 3.6 shows that the entropy of the transport plan increases as $\lambda$ grows, until the plan converges to a product measure. In that case, the entropic score attains maximal entropy, which is equivalent to a random prediction. Therefore, $\lambda$ should be initialized with a relatively small value to obtain good OOD detection performance. As shown in Figure 3(d), the trend w.r.t. increasing $\lambda$ conforms to our theoretical analysis, and the observation is consistent for all three training cases: the OOD detection performance decreases as the regularization coefficient $\lambda$ increases. In our implementation, we set $\lambda$ to 0.1 for deployment.

**Effect of Optimal Transport.** We report the effect of optimal transport on OOD detection in Table 4. The performance is examined using AUROC and FPR as evaluation metrics.
As shown in Table 4, optimal transport provides a consistent dominance on the two tasks, CIFAR-100 vs. CIFAR-10 and CIFAR-10 vs. CIFAR-100, and this holds for both SupCon and SimCLR. In particular, on the task of CIFAR-10 vs. CIFAR-100, using OT improves the performance by up to 26.6% in AUROC and 54.1% in FPR for SupCon. Even on the hard task of CIFAR-100 vs. CIFAR-10, detecting OOD samples with OT achieves remarkable performance, with the AUROC steadily above 79%. Optimal transport provides a geometric way to differentiate the discrepancy between empirical probability measures. As a result, our approach essentially utilizes two types of information: distance information and distributional information. Without optimal transport, our approach would degenerate to the kind of method that uses only distance information, such as KNN.

### 5 CONCLUSION

In this paper, we propose to utilize the discrepancy of empirical distributions to enhance the performance of OOD detection. We apply discrete optimal transport with entropic regularization to measure the discrepancy between training samples and test inputs. To measure the chance of a test input being an OOD sample, we present a novel conditional distribution entropy score function. We offer theoretical justification and empirical verification to provide insights into our proposals. Our method inherits the merits of distance-based methods, i.e., it is parameter-free, training-agnostic, and prior-free. In particular, our method excels at combining pair- and population-wise information, and therefore offers significant improvements over state-of-the-art OOD detection methods. Extensive experiments on benchmark datasets show the superiority of the proposed method.

Table 4: Effect of optimal transport, reported as AUROC↑ / FPR↓.
| Task | Setting | SimCLR | SupCon |
|------------------|---------|--------|--------|
| CIFAR-10 vs. CIFAR-100 | w/o OT | 70.70 / 90.48 | 65.28 / 97.51 |
| CIFAR-100 vs. CIFAR-10 | w/o OT | 61.96 / 91.55 | 54.35 / 94.55 |
| CIFAR-100 vs. CIFAR-10 | with OT | 79.94 / 79.47 | 84.22 / 77.00 |

Figure 3: Ablation studies on CIFAR-100 vs. CIFAR-10 (in AUROC).

REFERENCES

Dimitris Bertsimas and John N Tsitsiklis. *Introduction to linear optimization*, volume 6. Athena Scientific Belmont, MA, 1997.

Stephen Boyd and Lieven Vandenberghe. *Convex optimization*. Cambridge university press, 2004.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020.

Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3606–3613, 2014.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In *NIPS*, 2013.

Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, and Feng Liu. Is out-of-distribution detection learnable? *ArXiv*, abs/2210.14707, 2022.

Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection. *Advances in Neural Information Processing Systems*, 34:7068–7081, 2021.

Sahil Garg, Sanghamitra Dutta, Mina Dalirrooyfard, Anderson Schneider, and Yuriy Nevmyvaka. In- or out-of-distribution detection via dual divergence estimation. In *Uncertainty in Artificial Intelligence*, pp. 635–646. PMLR, 2023.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis.
Unsupervised representation learning by predicting image rotations. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=S1v4N210- Simon Hawkins, Hongxing He, Graham Williams, and Rohan Baxter. Outlier detection using replicator neural networks. In *Data Warehousing and Knowledge Discovery: 4th International Conference, DaWaK 2002 Aix-en-Provence, France, September 4–6, 2002 Proceedings 4*, pp. 170–180. Springer, 2002. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In *European conference on computer vision*, pp. 630–645. Springer, 2016. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. *Proceedings of International Conference on Learning Representations*, 2017. Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=HyxCxhRcY7 Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 10948–10957, 2020. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. *Advances in Neural Information Processing Systems*, 33:18661–18673, 2020. Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy anchor loss for deep metric learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3238–3247, 2020. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In *ICLR*, 2014.
6pPYRXKPpw
What is the action space used by the environments? In the diffusion policy paper they show that diffusion policies are better for some absolute action spaces while being worse for some relative action spaces. Clarification on this would be great.
Towards Diverse Behaviors: A Benchmark for Imitation Learning with Human Demonstrations

Xiaogang Jia∗†‡ Denis Blessing† Xinkai Jiang†‡ Moritz Reuss‡ Atalay Donat† Rudolf Lioutikov† Gerhard Neumann†
† Autonomous Learning Robots, Karlsruhe Institute of Technology ‡ Intuitive Robots Lab, Karlsruhe Institute of Technology

Abstract

Imitation learning with human data has demonstrated remarkable success in teaching robots a wide range of skills. However, the inherent diversity in human behavior leads to the emergence of multi-modal data distributions, thereby presenting a formidable challenge for existing imitation learning algorithms. Quantifying a model’s capacity to capture and replicate this diversity effectively is still an open problem. In this work, we introduce simulation benchmark environments and the corresponding Datasets with Diverse human Demonstrations for Imitation Learning (D3IL), designed explicitly to evaluate a model’s ability to learn multi-modal behavior. Our environments are designed to involve multiple sub-tasks that need to be solved and to consider the manipulation of multiple objects, which increases the diversity of the behavior; they can only be solved by policies that rely on closed-loop sensory feedback. Other available datasets are missing at least one of these challenging properties. To address the challenge of diversity quantification, we introduce tractable metrics that provide valuable insights into a model’s ability to acquire and reproduce diverse behaviors. These metrics offer a practical means to assess the robustness and versatility of imitation learning algorithms. Furthermore, we conduct a thorough evaluation of state-of-the-art methods on the proposed task suite. This evaluation serves as a benchmark for assessing their capability to learn diverse behaviors. Our findings shed light on the effectiveness of these methods in tackling the intricate problem of capturing and generalizing multi-modal human behaviors, offering a valuable reference for the design of future imitation learning algorithms. Project page: https://alrhub.github.io/d3il-website/

1 Introduction

Imitation Learning (IL) (Osa et al., 2018) from human expert data has emerged as a powerful approach for imparting a wide array of skills to robots and autonomous agents. In this setup, a human expert controls the robot, e.g., using tele-operation interfaces, and demonstrates solutions to complex tasks. Leveraging human expertise, IL algorithms have demonstrated remarkable success in training robots to perform a wide range of complex tasks with finesse (Brohan et al., 2022; Zhao et al., 2023; Huang et al., 2023). However, learning from human data is challenging due to the inherent diversity in human behavior (Grauman et al., 2022; Lynch et al., 2020). This diversity, arising from factors such as individual preferences, noise, varying levels of expertise, and different problem-solving approaches, poses a formidable hurdle for existing imitation learning algorithms.

In recent years, there has been a notable surge of interest in the development of methods aimed at capturing diverse behaviors (Shafiullah et al., 2022; Chi et al., 2023; Reuss et al., 2023). These endeavors are driven by various objectives, including improving generalization capabilities (Merel et al., 2018; Li et al., 2017), enhancing skill transfer between policies (Merel et al., 2018; Kumar et al., 2020), and gaining advantages in competitive games (Celik et al., 2022), among others.
Nevertheless, these approaches are often tested on synthetically generated datasets, on datasets with limited diverse behavior, or on datasets with limited need for closed-loop feedback.

∗Correspondence to xiaogang.jia@kit.edu

| | D4RL | Robomimic | T-Shape | Relay Kitchen | Block-Push | D3IL (ours) |
|------------------|------|-----------|---------|---------------|------------|-------------|
| Human Demonstrations | ✓ | ✓ | ✓ | ✓ | X | ✓ |
| State Data | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Visual Data | X | ✓ | ✓ | X | ✓ | ✓ |
| Diverse Behavior | X | X | X | ✓ | ✓ | ✓ |
| Quant. Diverse Behavior | X | X | X | X | ✓ | ✓ |

Table 1: Comparison between existing imitation learning benchmarks. D4RL (Fu et al., 2020) and Robomimic (Mandlekar et al., 2021) are used to benchmark offline RL algorithms, but do not focus on diversity. T-Shape Pushing (Florence et al., 2022; Chi et al., 2023) has 2 solutions and is a relatively simple task, while Relay Kitchen (Gupta et al., 2019) has higher diversity in terms of tasks; however, its behaviors do not require closed-loop feedback due to the limited task variability. Block-Push shows diverse behavior (Florence et al., 2022); however, it is generated by a scripted policy, resulting in reduced variance and missing fallback strategies.

Other datasets are collected directly on real robot platforms and lack simulation environments that can be used for benchmarking other algorithms. While real robot environments are of course preferable to show the applicability of the approaches on the real system, they hinder benchmarking new algorithms against the reported results, as that would require rebuilding the real robot setup, which is often infeasible. Furthermore, the majority of existing studies do not provide metrics to quantitatively measure a model’s ability to replicate diverse behaviors, often relying solely on qualitative analysis (Shafiullah et al., 2022).

To address these challenges, this work introduces benchmark environments and the corresponding Datasets with Diverse Human Demonstrations for Imitation Learning (D3IL). Our primary aim is to provide a comprehensive evaluation framework that explicitly assesses an algorithm’s ability to learn from multi-modal data distributions. We have designed these datasets to encompass the richness and variability inherent in human behavior, offering more realistic and challenging benchmark scenarios for imitation learning. Moreover, the given environments are challenging as the behavior is composed of multiple sub-tasks, and they require policies that heavily rely on closed-loop feedback. To tackle the intricate problem of quantifying a model’s capacity to capture and replicate diverse human behaviors, we introduce tractable metrics. These metrics provide valuable insights into a model’s versatility and adaptability, allowing for a more nuanced assessment of imitation learning algorithms’ performance. Furthermore, we conduct a rigorous evaluation of state-of-the-art imitation learning methods using the D3IL task suite. This evaluation serves as a benchmark for assessing the capability of these methods to learn diverse behaviors, shedding light on their effectiveness, hyper-parameter choices, and the representations they employ, by analyzing their performance on these realistic human-generated datasets in challenging environments.
Here, our analysis includes different backbones of the IL architectures (MLPs and various versions of transformers), and different ways to capture the multi-modality of the action distribution (clustering, VAEs, IBC, various versions of diffusion). Moreover, we analyze the performance using direct state observations or image observations as input to the policies and draw conclusions on the benefits and drawbacks of these representations. Finally, we evaluate which method can deal best with small datasets. Using all these insights, our study will inform the design and advancement of future imitation learning algorithms.

2 RELATED WORK

This section provides an overview of existing benchmarks and related research in the field of robot learning and imitation learning. Table 1 presents a comprehensive overview of the distinctive features that set our work apart from the most closely related benchmarks in the field.

RLBench (James et al., 2020) and ManiSkill2 (Mu et al., 2021; Gu et al., 2023) are two recent large-scale benchmarks for robot learning with a large variety of tasks and with both proprioceptive and visual data. However, the demonstrations are generated through motion planning and hence lack diversity. Meta-World is another robot learning benchmark with 50 different tasks and a focus on multi-task reinforcement learning (Yu et al., 2020). Yet, no human demonstrations exist for these tasks, and most tasks typically only have one solution. D4RL (Fu et al., 2020) proposed standardized environments and datasets for offline reinforcement learning with various tasks. Several other benchmarks specialize in areas such as language-guided multitask learning in simulation (Mees et al., 2022; Lynch et al., 2023; Gong et al., 2023; Zeng et al., 2021; Jiang et al., 2023), continual learning (Liu et al., 2023), and diverse multi-task real-world skill learning (Walke et al., 2023; Bharadhwaj et al., 2023; Heo et al., 2023).

A benchmark with a focus on human demonstrations is Robomimic (Mandlekar et al., 2021), featuring 8 different tasks in simulation and real-world environments. The demonstrations were collected by multiple people with various degrees of expertise using the RoboTurk framework (Mandlekar et al., 2018). Yet, the tasks were not designed to show diverse behavior and, consequently, none of the tested algorithms was designed to capture the diversity of the behavior. The benchmarks most closely related to D3IL are Block-Push (Florence et al., 2022; Shafiullah et al., 2022), T-Shape Pushing (Florence et al., 2022), and Relay Kitchen (Gupta et al., 2019). While Block-Push has 4 different modalities, the behavior is generated by a scripted policy and hence lacks variance and fallback solutions in the dataset. T-Shape tasks are limited to a maximum of two modalities. The Relay Kitchen environment (Gupta et al., 2019) contains 7 different tasks with human demonstrations that solve four tasks in sequence, but does not require closed-loop sensory feedback. Further, several recent works have achieved nearly optimal performance (Chi et al., 2023; Reuss et al., 2023) on these benchmarks, limiting further progress in these environments. Moreover, none of the mentioned benchmarks offer metrics to evaluate the diversity of the learned behavior.

3 DATASETS WITH DIVERSE DEMONSTRATIONS FOR IMITATION LEARNING

In this section, we present a novel suite of tasks known as D3IL - Datasets with Diverse Demonstrations for Imitation Learning.
We first introduce metrics for the quantification of diverse behavior (Section 3.1). Subsequently, we delve into the design principles underlying our tasks, which are introduced at the end of this section.

3.1 QUANTIFYING DIVERSE BEHAVIOR

Let \( \mathcal{D} = \{(a_n, s_n)\}_{n=1}^{N} \) be a human-recorded demonstration dataset with actions \( a \in \mathcal{A} \) and states \( s \in \mathcal{S} \). Further, let \( p(a|s) \) be the underlying state-conditional action distribution. The goal of imitation learning is to learn a policy \( \pi(a|s) \approx p(a|s) \). Diversity is characterized by a multimodal action distribution \( p(a|s) \), meaning that there are multiple distinct actions that are likely for a given state \( s \). We refer to this as multimodality on a state level. However, this multimodality is hard to quantify in most scenarios, as we do not have access to \( p(a|s) \). Instead, we look at multimodality on a behavior level to define an auxiliary metric that reflects whether a model is capable of imitating diverse behaviors. To define behavior-level multimodality, we introduce discrete behavior descriptors \( \beta \in \mathcal{B} \), for example, which box has been chosen to be pushed first. The space \( \mathcal{B} \) of behavior descriptors is thus task-specific; the descriptors are discussed in Section 3.3. To evaluate the ability of a trained policy \( \pi(a|s) \) to capture multimodality, we collected data such that we have an approximately equal number of demonstrations for each behavior descriptor \( \beta \in \mathcal{B} \). Thereafter, we perform simulations in the task environment to compute the agent’s distribution \( \pi(\beta) \) over its achieved behaviors. We then assess the policy’s capability of learning diverse behavior by the behavior entropy, that is,

\[ H(\pi(\beta)) = - \sum_{\beta \in \mathcal{B}} \pi(\beta) \log_{|\mathcal{B}|} \pi(\beta). \]

Please note that we employ the logarithm with base \( |\mathcal{B}| \) to ensure that \( H(\pi(\beta)) \in [0, 1] \). This choice of base facilitates a straightforward interpretation of the metric: an entropy value of 0 signifies a policy that consistently executes the same behavior, while an entropy value of 1 represents a diverse policy that executes all behaviors with equal probability, that is, \( \pi(\beta) \approx 1/|\mathcal{B}| \), and hence matches the true behavior distribution by the design of the data collection process. Yet, a high behavior entropy can also be achieved by deterministically executing different behaviors in different initial states, and hence the behavior entropy alone can be a poor metric for behavior diversity. We therefore also evaluate the expected entropy of the behavior conditioned on the initial state \( s_0 \). If all behaviors can be achieved from the same initial state, the conditional behavior entropy is high; if the same behavior is always executed for the same initial state, this entropy is 0. We define the conditional behavior entropy as

\[ \mathbb{E}_{s_0 \sim p(s_0)} \left[ H(\pi(\beta|s_0)) \right] \approx -\frac{1}{S_0} \sum_{s_0 \sim p(s_0)} \sum_{\beta \in \mathcal{B}} \pi(\beta|s_0) \log_{|\mathcal{B}|} \pi(\beta|s_0), \]

where the expectation is approximated using a Monte Carlo estimate and \( S_0 \) denotes the number of samples from the initial state distribution \( p(s_0) \). A Python sketch of both metrics is given below, after Table 2.

### 3.2 Task Design Principles

We build D3IL based on the following key principles:

**i.) Diverse Behavior.** Diversity is the central aspect of our task design.
### 3.2 Task Design Principles

We build D3IL based on the following key principles:

**i.) Diverse Behavior.** Diversity is the central aspect of our task design. We intentionally design our tasks to encompass multiple viable approaches to successful task completion. To quantify the behavior diversity, we explicitly specify these distinct behaviors, each representing a legitimate solution.

**ii.) Multiple Human Demonstrators.** To reflect the natural variability in human behavior and to obtain a richer dataset, we have collected demonstration data from multiple human demonstrators. This diversity in data sources introduces variations in the quality and style of the demonstrations.

**iii.) Variable Trajectory Lengths.** Our tasks incorporate variable trajectory lengths, replicating real-world scenarios where demonstrations may differ in duration. This design choice challenges our learning agents to handle non-uniform data sequences effectively. By accommodating varying trajectory lengths, our approach must learn to adapt and generalize to different time horizons, a critical property for real-world applications.

**iv.) Task Variations and Closed-Loop Feedback.** For most tasks, agents need to rely on sensory feedback to achieve good performance, which considerably increases the complexity of the learning task in comparison to learning open-loop trajectories. We achieve this by introducing task variations in every execution. For example, in every execution, the initial position of the objects will be different and the agent needs to adapt its behaviour accordingly.

### 3.3 Task Description and Data Collection

We simulate all tasks (T1-T5) using the Mujoco physics engine (Todorov et al., 2012). The simulation environment consists of a 7DoF Franka Emika Panda robot and various objects. For T1-T4, the robot is equipped with a cylindrical end effector to perform object manipulations. The corresponding demonstrations are recorded using an Xbox game-pad which sends commands to an inverse kinematics (IK) controller. In T5, the robot has a parallel gripper and uses an augmented reality setup for controlling the end effector (Jiang et al., 2024), allowing for more dexterous manipulations.

**Avoiding (T1).** The Avoiding task requires the robot to reach the green finish line from a fixed initial position without colliding with one of the six obstacles. The task does not require object manipulation and is designed to test a model's ability to capture diversity. There are 24 different ways of successfully completing the task and thus $|\mathcal{B}| = 24$. The dataset contains 96 demonstrations in total, comprising 24 solutions with 4 trajectories for each solution.

**Aligning (T2).** The Aligning task requires the robot to push a hollow box to a predefined target position and orientation. The task can be completed by pushing the box either from outside or from inside, and thus $|\mathcal{B}| = 2$. It requires less diversity than T1 but involves complex object manipulation. The dataset contains 1k demonstrations, 500 for each behavior, with uniformly sampled initial states.

**Pushing (T3).** This task requires the robot to push two blocks to fixed target zones. Having two blocks and two target zones results in $|\mathcal{B}| = 4$ behaviors. The task has more diversity than T2 and involves complex object manipulations. Additionally, the task has high variation in trajectory length caused by multiple human demonstrators with different experience levels. The dataset contains 2k demonstrations, 500 for each behavior, with uniformly sampled initial block positions.

**Sorting-X (T4).** This task requires the robot to sort red and blue blocks to their color-matching target box.
The ‘X’ refers to the number of blocks. In this work, we test three difficulty levels, i.e., $X \in \{2, 4, 6\}$. The number of behaviors $|\mathcal{B}|$ is determined by the sorting order. For $X = 6$, the task has many objects, is highly diverse ($|\mathcal{B}| = 20$), requires complex manipulations, has high variation in trajectory length, and is thus more challenging than T1-T3. The dataset contains 1.6k demonstrations for $X = 6$ and an approximately equal number of trajectories for each sorting order.

**Stacking-X (T5).** This task requires the robot to stack blocks of different colors in a (yellow) target zone. The number of behaviors $|\mathcal{B}|$ is given by the stacking order. Having three different blocks results in six different combinations for stacking the blocks and hence $|\mathcal{B}| = 6$. This task additionally requires dexterous manipulation, since the blue box has to be stacked upright, and is thus considered the most challenging of all five tasks. Here, ‘X’ does not refer to the difficulty but rather to the evaluation protocol: for Stacking-X with $X \in \{1, 2, 3\}$, we compute the performance metrics of the models that are capable of stacking X blocks. The $\approx 1$k trajectories were recorded using augmented reality (AR) such that each stacking order is represented equally often.

4 Benchmarking on D3IL Tasks

4.1 Imitation Learning Algorithms

We benchmark a large set of recent imitation learning algorithms. These algorithms can be categorized along three axes. The first axis specifies whether the algorithms use a history of state observations. For algorithms that do not use history (Sohn et al., 2015; Florence et al., 2022), a standard MLP is used as the backbone architecture, while for history-dependent policies (Shafiullah et al., 2022; Pearce et al., 2023; Reuss et al., 2023; Chi et al., 2023) we use a causal masked Transformer decoder (GPT) based on the implementation of BeT (Shafiullah et al., 2022) as the backbone, as this architecture has also been adopted by other recent approaches. In our experiments, we use a history of $k = 5$. The second axis categorizes whether the algorithms predict only a single action or a future action sequence (Zhao et al., 2023; Chi et al., 2023); the latter has recently been introduced as action chunking and has been shown to improve performance in some tasks (see the sketch below).

| Methods | History | Backbone | Prediction | Diversity |
|---------|---------|----------|------------|-----------|
| BC-MLP | $k = 1$ | MLP | SA | Deterministic |
| VAE-State (Sohn et al., 2015) | $k = 1$ | MLP | SA | VAE |
| BeT-MLP | $k = 1$ | MLP | SA | Cluster |
| IBC (Florence et al., 2022) | $k = 1$ | MLP | SA | Implicit |
| DDPM-MLP | $k = 1$ | MLP | SA | Disc. Diffusion |
| VAE-ACT (Zhao et al., 2023) | $k = 1$ | T-EncDec | AC | VAE |
| BC-GPT | $k = 5$ | GPT | SA | Deterministic |
| BeT (Shafiullah et al., 2022) | $k = 5$ | GPT | SA | Cluster |
| DDPM-GPT (Pearce et al., 2023) | $k = 5$ | GPT | SA | Disc. Diffusion |
| BESO (Reuss et al., 2023) | $k = 5$ | GPT | SA | Cont. Diffusion |
| DDPM-ACT (Chi et al., 2023) | $k = 5$ | T-EncDec | AC | Disc. Diffusion |

Table 2: Categorization of the tested IL algorithms. The algorithms differ in whether they use history information (MLP backbones for no history, Transformer/GPT backbones for history-based models), whether they predict future action sequences (single actions/SA or action chunking/AC, with Transformer-Encoder-Decoder/T-EncDec backbones for AC), and how they model diverse behavior (last column).
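At execution time, the distinction between single-action prediction and action chunking reduces to how often the policy re-plans. The following is a minimal rollout sketch; the `policy` and `env` interfaces are assumptions for illustration and are not the benchmark's actual API.

```python
import collections
import torch

def rollout(policy, env, history_len=5, chunk_size=1, max_steps=300):
    """Receding-horizon execution covering both prediction schemes from
    Table 2: chunk_size=1 is single-action (SA) prediction with re-planning
    at every step, while chunk_size>1 executes a predicted action sequence
    (action chunking, AC) before re-planning. Assumed interfaces: the
    policy maps a (1, history_len, obs_dim) observation stack to a
    (1, chunk_size, act_dim) action sequence; env.step returns the next
    observation tensor and a done flag."""
    obs = env.reset()
    history = collections.deque([obs] * history_len, maxlen=history_len)
    for _ in range(max_steps // chunk_size):
        obs_stack = torch.stack(list(history)).unsqueeze(0)
        actions = policy(obs_stack)            # predict chunk_size actions
        for t in range(chunk_size):            # execute the chunk open-loop
            obs, done = env.step(actions[0, t])
            history.append(obs)                # keep the last k observations
            if done:
                return
```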
For action chunking models, we use a Transformer Encoder-Decoder structure (T-EncDec) (Vaswani et al., 2017) based on the implementation of VAE-ACT (Zhao et al., 2023). The third axis categorizes how the algorithms capture behavior diversity. Standard deterministic models are not able to do so, while more recent models use clustering (Shafiullah et al., 2022), VAEs (Zhao et al., 2023; Sohn et al., 2015), implicit models (Florence et al., 2022), as well as discrete-time (Pearce et al., 2023; Chi et al., 2023) or continuous-time diffusion models (Reuss et al., 2023). To improve our understanding of the influence of the specific design choices, we also evaluate new variants of these combinations, such as history-free (MLP-based) action clustering (BeT-MLP). The full table of the algorithmic setups used is depicted in Table 2. A more detailed description of all model architectures is given in Appendix C.

4.2 Experimental Setup

We briefly outline the most important aspects of our experimental setup.

**Evaluation Protocol.** To assess a model's ability to capture diverse behavior, we propose the behavior entropy and the conditional behavior entropy (Section 3.1). For the former, we perform multiple simulations, from which we compute the model's behavior distribution $\pi(\beta)$. For the latter, we use a randomly sampled but fixed set of initial states $s_0$ and perform multiple simulations for each $s_0$ in order to compute the conditional behavior distribution $\pi(\beta|s_0)$. Alongside the entropy, we also report the success rate, which is the fraction of simulations that led to successful task completion.

**State / Observation Representation.** For most experiments, we provide state representations and image observations. For the former, we use handcrafted features that work well empirically. For the latter, we use two different camera views, an in-hand and a front view, with a $96 \times 96$ image resolution. We follow Chi et al. (2023) and use a ResNet-18 architecture as the image encoder for all methods.

**Model Selection.** Model selection is challenging as the training objective does not coincide with task performance (Gulcehre et al., 2020; Paine et al., 2020; Fu et al., 2021; Mandlekar et al., 2021). Moreover, policy performance can exhibit substantial variations from one training epoch to another (Mandlekar et al., 2021). For image-based experiments, we evaluate the task performance frequently (after every 1/10th of total training) and choose the model with the best task performance. Due to the high computational demand of simulation, for state-based experiments we split the dataset into training (90%) and validation (10%) sets and choose the model with the lowest validation loss. We ensure that the training converges by using 500 epochs for state representations and 200 for image observations. We extensively tune the most important hyperparameters of all methods using Bayesian optimization (Snoek et al., 2012). We report the mean and the standard deviation over six random seeds for all experiments.

4.3 Benchmark using State Representation

We present the results utilizing state representations in Table 3. Detailed information regarding the individual state representations for the tasks can be found in Appendix A. Through an analysis of the performance and diversity across all methods, we outline our key findings below.

**Multiple Strategies for Multi-Modal Behavior Learning.** This section discusses various strategies employed by different models for learning multi-modal behavior.
Deterministic models like BC-MLP and BC-GPT have limitations in this regard, since they can only generate a single solution for a given initial state $s_0$. On the other hand, VAE-based models like VAE-State and VAE-ACT are capable of generating different behaviors, but they tend to exhibit low behavior entropy, presumably due to a phenomenon often referred to as "mode collapse" (Kingma & Welling, 2013). In contrast, IBC, BeT, and diffusion-based models demonstrate the capacity to learn multi-modal distributions. IBC, for instance, exhibits high diversity in tasks like Avoiding, Pushing, and Sorting-2, with entropy levels slightly superior to DDPM-MLP. BeT is capable of learning diverse solutions for each task, but this comes with a significant performance drop compared to the deterministic BC-GPT baseline. Notably, diffusion-based methods, especially those incorporating transformer backbones, excel at learning diverse behavior while maintaining strong performance across all tasks. To offer additional insights, we include visual representations of the diverse solutions generated by each method for the Avoiding task in Figure 10. Furthermore, we conduct an in-depth analysis of hyperparameter sensitivity, which is discussed in Section B. This analysis provides a deeper understanding of how hyperparameters influence the ability to learn diverse solutions.

Table 3: Comparison between various imitation learning algorithms, some of which incorporate history (‡), action chunking (†), or both, using state representations (State Data) and image observations (Visual Data). We present the mean and standard deviation across six random seeds, highlighting the best performance for both history-based and non-history-based models using bold formatting.

| State Data | Avoiding (T1) | | Aligning (T2) | | Pushing (T3) | |
|------------|--------------|---------|--------------|---------|--------------|---------|
| | Success Rate | Entropy | Success Rate | Entropy | Success Rate | Entropy |
| BC-MLP | 0.666±0.152 | 0.0±0 | 0.708±0.052 | 0.0±0 | 0.522±0.165 | 0.0±0 |
| VAE-State | 0.716±0.265 | 0.195±0.111 | 0.579±0.138 | 0.030±0.053 | **0.604±0.059** | 0.170±0.031 |
| BeT-MLP | 0.665±0.136 | 0.836±0.071 | 0.507±0.081 | 0.485±0.092 | 0.420±0.049 | 0.628±0.123 |
| IBC | **0.760±0.046** | **0.850±0.038** | 0.638±0.027 | 0.300±0.048 | 0.574±0.048 | **0.816±0.012** |
| DDPM-MLP | 0.637±0.055 | 0.801±0.034 | **0.763±0.039** | **0.712±0.064** | 0.569±0.047 | 0.796±0.043 |
| VAE-ACT† | 0.851±0.109 | 0.224±0.173 | **0.891±0.022** | 0.025±0.012 | **0.951±0.033** | 0.070±0.029 |
| BC-GPT‡ | 0.833±0.260 | 0.0±0 | 0.833±0.043 | 0.0±0 | 0.855±0.054 | 0.0±0 |
| BeT‡ | 0.747±0.030 | 0.844±0.035 | 0.645±0.069 | 0.536±0.105 | 0.724±0.051 | 0.788±0.031 |
| DDPM-GPT‡ | 0.927±0.022 | **0.898±0.018** | 0.839±0.020 | 0.664±0.075 | 0.847±0.056 | 0.862±0.017 |
| BESO‡ | **0.950±0.016** | 0.856±0.018 | 0.861±0.016 | 0.711±0.030 | 0.794±0.069 | **0.871±0.017** |
| DDPM-ACT†,‡ | 0.809±0.106 | 0.863±0.068 | 0.849±0.023 | **0.749±0.041** | 0.920±0.025 | 0.859±0.012 |

| State Data | Sorting-2 (T4) | | Sorting-4 (T4) | | Sorting-6 (T4) | |
|------------|--------------|---------|--------------|---------|--------------|---------|
| | Success Rate | Entropy | Success Rate | Entropy | Success Rate | Entropy |
| BC-MLP | 0.444±0.069 | 0.0±0 | 0.058±0.048 | 0.0±0 | 0.016±0.014 | 0.0±0 |
| VAE-State | 0.451±0.059 | 0.106±0.051 | 0.079±0.042 | 0.090±0.03 | 0.030±0.016 | 0.08±0.007 |
| BeT-MLP | 0.317±0.070 | 0.309±0.059 | 0.003±0.001 | 0.021±0.052 | 0.0±0.0 | 0.0±0 |
| IBC | 0.459±0.033 | 0.370±0.071 | 0.080±0.016 | 0.107±0.026 | 0.007±0.005 | 0.02±0 |
| DDPM-MLP | 0.460±0.310 | 0.120±0.015 | 0.007±0.025 | 0.0±0 | 0.0±0 | 0.0±0 |
| VAE-ACT† | 0.848±0.029 | 0.431±0.060 | 0.337±0.084 | 0.372±0.005 | 0.005±0.003 | 0.03±0 |
| BC-GPT‡ | 0.312±0.017 | 0.516±0.064 | 0.0±0.0 | 0.0±0.0 | 0.0±0.0 | 0.0±0.0 |
| BeT‡ | 0.355±0.026 | 0.559±0.204 | 0.200±0.072 | 0.57±0.027 | 0.102±0.027 | 0.24±0.007 |
| DDPM-GPT‡ | 0.316±0.068 | 0.685±0.034 | 0.715±0.054 | 0.441±0.042 | 0.120±0.018 | 0.102±0.006 |
| BESO‡ | 0.741±0.026 | 0.387±0.038 | 0.188±0.042 | 0.0±0 | 0.0±0 | 0.0±0 |
| DDPM-ACT†,‡ | 0.882±0.024 | 0.014±0.048 | 0.300±0.066 | 0.30±0.05 | 0.0±0.0 | 0.0±0.0 |

[Visual Data block: success rates and entropies for Aligning (T2), Sorting-2/4/6 (T4), and Stacking-1/2/3 (T5); the per-method values in this block are not recoverable from the source.]

**Historical Inputs and Prediction Horizons are Important.** A comparison of the results between BC-MLP and BC-GPT, VAE-State and VAE-ACT, BeT-MLP and BeT, and DDPM-MLP and DDPM-GPT reveals a notable trend: transformer-based methods consistently outperform their MLP-based counterparts. In particular, DDPM-GPT achieves an average 21% improvement in success rate across the Avoiding, Aligning, and Pushing tasks compared to DDPM-MLP. While DDPM-MLP exhibits higher entropy on the Aligning task, it lags behind on the other tasks.
When comparing VAE-State with VAE-ACT, VAE-ACT demonstrates a success rate improvement of over 30% on most tasks, indicating its effectiveness in capturing diverse behaviors. More significantly, transformers that incorporate historical inputs or prediction horizons demonstrate the ability to tackle challenging tasks like Sorting-4 and Stacking. However, the combination of historical information with extended prediction horizons (DDPM-ACT) does not appear to provide substantial benefits compared to DDPM-GPT, which does not predict future actions.

**Scaling to More Complex Tasks.** Current state-of-the-art methods excel in tasks such as Avoiding, Aligning, and Pushing; however, a notable performance gap emerges in the Sorting and Stacking tasks. In the case of Sorting, scaling with an increasing number of objects proves challenging for all methods. Specifically, on the Sorting-6 task, none of the existing methods achieves satisfactory performance. The complexity of the observation space and the task diversity increase significantly as we aggregate all box features into a single state vector. Consequently, there is a demand for models capable of learning compact and resilient state representations.

### 4.4 Benchmark using Image Observations

We present the results utilizing image observations in Table 3. Detailed information regarding the camera setup can be found in Appendix A. Through an analysis of the performance and diversity across all methods, we outline our key findings below.

**Comparison between State-Based and Image-Based Policies.** State representations have proven highly effective for tasks demanding precise control (i.e., Aligning), but they do not scale with the number of objects (i.e., Sorting). In such cases, state-based approaches may struggle to exploit invariances, which are essential for handling complex scenarios. Conversely, image representations exhibit a notable capacity to handle scenarios involving multiple objects. However, image-based policies perform much worse than state-based policies on the Aligning task, which requires precise control. The inherent difficulty of extracting fine-grained details and spatial relationships from images makes achieving precise manipulation challenging.

**Trade-off in Sequential Information Across Visual Tasks.** From the results of the state-based evaluations, it is evident that historical inputs and prediction horizons consistently enhance success rate and entropy on all tasks. Regarding the image-based results, DDPM-GPT shows less improvement over DDPM-MLP on Aligning and Sorting-4 and performs slightly worse on Sorting-6. This phenomenon is also observed in the comparisons between BC-GPT and BC-MLP, and between BeT and BeT-MLP. However, in the most demanding task, Stacking, transformer-based models consistently outperform MLP-based models. Notably, DDPM-ACT displays improvements of 11%, 40%, and 24% in stacking 1, 2, and 3 boxes, respectively, compared to DDPM-MLP.

4.5 Impact of History and Prediction Horizon

We conducted a comparison of various choices for the history and prediction horizons, as illustrated in Figure 1. Notably, when both the history and prediction horizons are greater than 1, DDPM-ACT exhibits improved performance. However, it is worth noting that, consistent with findings from prior work (Chi et al., 2023), extending the length of the observation history and the action sequence horizon leads to a decline in both success rate and entropy.
This observation underscores the importance of careful design when using a transformer encoder-decoder structure for history and prediction horizons. Additionally, we find that historical inputs significantly enhance the performance of GPT-based policies, and this improvement is consistent with increasing history length.

4.6 Learning with Less Data

The process of recording demonstrations, particularly when involving human demonstrators, is often tedious. To assess the models' ability to learn with less training data, we therefore generate four subsets comprising 10%, 25%, 50%, and 75% of the demonstrations for the Aligning task. The results are reported in Figure 2, with detailed results available in Table 6. MLP-based methods (e.g., VAE-State, BeT-MLP) experience a significant performance drop when trained with less data. They display up to a 54.4% drop in success rate and a 43.7% drop in entropy on the 25% dataset, and are nearly non-functional on the 10% dataset. In contrast, transformer-based methods (e.g., DDPM-GPT, BeT) exhibit a higher tolerance for small amounts of data, with up to a 33.3% success rate drop and a 20.3% entropy drop on the 25% dataset, and they maintain more than a 15% success rate on the 10% dataset. Additionally, we find that BESO exhibits a 43% success rate on the 10% dataset. From these findings, we conclude that the transformer architecture generalizes well with less training data and that diffusion-based methods seem to be able to regularize the transformer to make it less data-hungry.

5 Conclusion

Our introduction of Datasets with Diverse human Demonstrations for Imitation Learning (D3IL) addresses the critical need to evaluate a model's capability to learn multi-modal behavior. These environments incorporate human data, involve intricate sub-tasks, necessitate the manipulation of multiple objects, and require policies based on closed-loop sensory feedback. Collectively, these characteristics significantly enhance the diversity of behavior in D3IL, setting it apart from existing benchmarks that often lack one or more of these crucial elements. To measure this diversity, we introduce practical metrics that offer valuable insights into a model's ability to acquire and replicate diverse behaviors. Through a comprehensive evaluation of state-of-the-art methods on our proposed task suite, our research illuminates the effectiveness of these methods in learning diverse behavior. This contribution not only guides current efforts but also provides a valuable reference for the development of future imitation learning algorithms.

6 ACKNOWLEDGMENTS

This work was supported by funding from the pilot program Core Informatics of the Helmholtz Association (HGF). NS and GN were supported by the Carl Zeiss Foundation under the project JuBot (Jung Bleiben mit Robotern). Xiaogang Jia and Xinkai Jiang acknowledge the support from the China Scholarship Council (CSC). The authors acknowledge support by the state of Baden-Württemberg through bwHPC, as well as the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the German Federal Ministry of Education and Research.

REFERENCES

Homanga Bharadhwaj, Jay Vakil, Mohit Sharma, Abhinav Gupta, Shubham Tulsiani, and Vikash Kumar. Roboagent: Generalization and efficiency in robot manipulation via semantic augmentations and action chunking, 2023.
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.

Onur Celik, Dongzhuoran Zhou, Ge Li, Philipp Becker, and Gerhard Neumann. Specializing versatile skill libraries using local mixture of experts. In Conference on Robot Learning, pp. 1423–1433. PMLR, 2022.

Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023.

Pete Florence, Corey Lynch, Andy Zeng, Oscar A Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, and Jonathan Tompson. Implicit behavioral cloning. In Conference on Robot Learning, pp. 158–168. PMLR, 2022.

Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.

Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, et al. Benchmarks for deep off-policy evaluation. arXiv preprint arXiv:2103.16596, 2021.

Ran Gong, Jiangyong Huang, Yizhou Zhao, Haorun Geng, Xiaofeng Gao, Qingyang Wu, Wensi Ai, Ziheng Zhou, Demetri Terzopoulos, Song-Chun Zhu, et al. Arnold: A benchmark for language-grounded task learning with continuous states in realistic 3d scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.

Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18995–19012, 2022.

Jiayuan Gu, Fanbo Xiang, Xuanlin Li, Zhan Ling, Xiqiang Liu, Tongzhou Mu, Yihe Tang, Stone Tao, Xinyue Wei, Yunchao Yao, Xiaodi Yuan, Pengwei Xie, Zhiao Huang, Rui Chen, and Hao Su. Maniskill2: A unified benchmark for generalizable manipulation skills. In International Conference on Learning Representations, 2023.
LLM-driven Hateful Meme Detection via Cross-modal Memorizing and Self-rejection Training

Anonymous authors
Paper under double-blind review

Abstract

Hateful meme detection (HMD) is critical for determining whether online multimodal content carries harmful information, which plays a pivotal role in maintaining a harmonious internet ecosystem. HMD is predominantly viewed as a multi-modal task, where the harmful message in memes is expressed through the information conveyed by the combination of visual and text content (e.g., the contradictions between them) rather than by one modality alone. Thus, effective modeling and smooth integration of multimodal information are crucial for achieving promising HMD performance. Current research on HMD conventionally models visual and text data independently and subsequently aligns and merges these multi-modal features for HMD predictions. However, existing studies face challenges in identifying hateful information that derives from the complementarities or contradictions between image and text, where in most cases neither image nor text alone carries explicit hateful information. Moreover, these studies do not leverage the capabilities of large language models (LLMs), which have been demonstrated to be effective in cross-modal information processing. Therefore, in this paper, we propose a multimodal approach for HMD that follows the encoding-decoding paradigm, using an LLM and a memory module enhanced by self-rejection training. Particularly, the memory module learns appropriate relationships between image and text that lead to hateful memes, and the resulting information is fed into the LLM together with visual and text features to predict HMD labels. Self-rejection training performs discriminative learning on memory outputs and enhances the memory module to improve HMD. We evaluate our approach on English and Chinese benchmark datasets, where it outperforms strong baselines, demonstrating the effectiveness of all its components and of our model design. Note: This paper contains examples of hate speech.

1 Introduction

Multimodal memes are typically characterized as images infused with text that propagate from one individual to another; they have become a widespread form of expression on social media platforms (Kiela et al., 2020; Gomez et al., 2020), and a certain portion of them conveys hateful information that can cause negative emotions and further harm to Internet users. Considering that memes spread quickly and widely on the Internet, detecting hateful memes with artificial intelligence (AI) is of great importance for cyberspace maintenance. Therefore, advanced cross-modal understanding techniques are required to fulfill the need for timely and precise hateful meme detection (HMD), a task in which the importance of multimodal modeling is particularly pronounced. Figure 1 shows three comparative examples that emphasize the significance of joint visual and text understanding, where Figure 1(a) displays a hateful meme and Figures 1(b) and (c) present non-hateful ones, illustrating that different image and text combinations deliver opposite attitude tendencies. Hence, relying solely on modeling images or text proves insufficient for HMD, and a more robust approach necessitates enhanced unified modeling of both modalities. Existing approaches utilize advanced visual and text encoders (such as CLIP (Radford et al., 2021), Flamingo (Alayrac et al., 2022), FLAVA (Singh et al., 2022), and SLIP (Mu et al., 2022))
to extract multimodal features, and subsequently align or fuse them by vector concatenation, outer products, or attention mechanisms to improve HMD (Kiela et al., 2019; Li et al., 2019; Radford et al., 2021; Goyal et al., 2022; Nandakumar, 2022; Koutlis et al., 2023). These models successfully identify hateful memes where images or texts present explicit biases, but they are unable to effectively recognize hateful information derived from complementarities or contradictions between visual and textual content rather than from images or text alone. Although there are efforts to utilize additional resources or model ensembles to improve HMD (Muennighoff, 2020; Lippe et al., 2020; Sandulescu, 2020; Velioglu & Rose, 2020; Zhu, 2020; Cao et al., 2023), they mainly enhance the generalization ability through more training data or by taking advantage of different models, without touching the essential mechanism that leads to hateful information. In addition, existing approaches omit the chance to leverage large language models (LLMs), such as MiniGPT-4 (Zhu et al., 2023) and LLaVA (Liu et al., 2023a), which have proven effective in a broad range of cross-modal tasks. Therefore, HMD approaches are expected to be further advanced with rational and efficient solutions that model appropriate relationships between visual and text semantics.

In this paper, we propose a multimodal approach with an LLM to enhance HMD through self-rejection training. Our approach learns the relationship between visual and textual content that leads to hateful memes through a memory module, which is pipelined with another two components: a visual encoder capturing image features and an LLM predicting HMD labels. We further propose a self-rejection training procedure to optimize the memory module by rectifying correlation vectors from the memory against direct image-text matching results, so as to better capture essential task-specific information to improve HMD. Evaluations on benchmark datasets demonstrate that our approach outperforms strong baselines and existing approaches, emphasizing the superiority of our memory module and self-rejection training for HMD.¹

¹The code and model will be released in the final version of this paper.

2 THE APPROACH

Figure 2 illustrates the framework of our approach, where the memory-based HMD pipeline and the self-rejection training process are presented at the top and bottom of the figure, respectively. Overall, the pipeline follows the convention of existing studies and regards HMD as a multimodal classification task, which predicts a label $\hat{Y}$ based on the image $I$ and embedded text $X'$ in a given meme $(I, X')$. Moreover, the proposed self-rejection training enhances our approach by effectively aligning memories with crucial information (e.g., contradictions between visual and text content) that leads to hateful memes. The following text illustrates the pipeline and self-rejection training in detail.

2.1 THE HMD PIPELINE

The pipeline of our approach consists of three essential components: visual encoding, cross-modal memorizing, and LLM prompting. Specifically, the visual encoding process ($f_{ve}$) extracts salient features from the input image; the cross-modal memory module ($f_m$) encodes the correlation between visual and text features; the LLM prompting ($f_d$) utilizes the multimodal information to predict the final label $\hat{Y}$. Therefore, our approach is formulated by
$$\hat{Y} = f_d(f_{ve}(I), f_m(f_{ve}(I), X'), X, p)$$ (1)
where $p$ denotes the prompt for the LLM.
In the following text, we present each component in detail following the aforementioned processing sequence.

**Visual Encoding** Our approach to encoding visual signals follows the procedure of BLIP2 (Li et al., 2023), with three components: the vision Transformer $f_v$ (Dosovitskiy et al., 2021), the Q-Former $f_q$ (Li et al., 2023), and a linear projection layer. The three modules are sequentially interconnected to extract the visual feature $v$ from the input meme $I$ through
$$v = f_{\text{ve}}(I) = \text{Linear}(f_q(f_v(I)))$$ (2)
In our approach, the vision Transformer $f_v$ distills crucial visual features from the meme, the Q-Former $f_q$ translates these features into a textual semantic space, and finally, the linear projection layer transforms the resulting representation into latent vectors $v$, ensuring alignment with the dimensional space of the hidden states in the subsequent module.

**Cross-modal Memorizing** The memory module is designed to capture crucial information, i.e., the correlation between visual and text features that leads to hateful memes. In doing so, we propose a memory matrix represented by $N$ vectors (denoted by $[m_1, \cdots, m_N]$), where each memory vector can be interpreted as a potential aspect resulting in hateful information. Memory searching and memory sampling are the main steps in this module, with their details illustrated as follows.

Memory searching locates relevant memory vectors according to the encoded multimodal information and assigns appropriate weights to them. For the input multimodal information, in addition to the visual encoding, we obtain the textual representation by averaging the embeddings of all tokens in the input text through $t = \frac{1}{U} \sum_{u=1}^{U} e_u$, with each $e_u \in [e_1 \cdots e_U]$ ($U$ denotes the total number of tokens) being the embedding of its corresponding token. Then, we concatenate the visual and text features and obtain the multimodal feature $x_{vt} = v \oplus t$. Afterwards, we compute the weight $w_n$ that measures the semantic similarity between the $n$-th memory vector $m_n$ and $x_{vt}$ by
$$w_n = \frac{\exp(x_{vt} \cdot W_m \cdot m_n)}{\sum_{n'=1}^{N} \exp(x_{vt} \cdot W_m \cdot m_{n'})}$$ (3)
where $W_m$ is a trainable parameter matrix that aligns $m_n$ and $x_{vt}$. Finally, we rank all memory vectors in descending order based on their weights and select the top $N'$ vectors (denoted as $m_{n_1} \cdots m_{n_{N'}}$) as the relevant vectors for later processing.

Memory sampling further processes the memory vectors and outputs a correlation vector $x_m$ that carries the essential correlation information between visual and text features for later steps. In detail, we normalize the weights of the relevant vectors and randomly select one from $m_{n_1} \cdots m_{n_{N'}}$ based on their weights, where a higher weight leads to a better chance of being selected. Subsequently, we perform the sampling process $M$ times² and obtain a vector list $m_{n_1}, \ldots, m_{n_M}$, with repetition of the same vector allowed. We then average the list and obtain the output correlation vector $x_m$ by
$$x_m = \frac{1}{M} \sum_{m=1}^{M} m_{n_m}$$ (4)
where $x_m$ is used in self-rejection training for further enhancing the memory module and also serves as the output of the memory module for the later HMD prediction process. A minimal sketch of this module is given below.

²We perform the random selection to facilitate the self-rejection training process illustrated in Section 2.2.
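The following is a minimal PyTorch sketch of the search and sampling steps (Eqs. 3-4). The class interface, the dimensions, and the defaults (e.g., $N' = 20$ relevant vectors and $M = 4$ draws) are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CrossModalMemory(nn.Module):
    """Sketch of memory searching and sampling (Eqs. 3-4). The memory holds
    N trainable vectors; the fused feature x_vt = v (+) t is scored against
    all of them, the top N' vectors are kept, and x_m is the average of M
    weighted random draws (with repetition) from those N' vectors."""
    def __init__(self, num_mem=200, mem_dim=768, feat_dim=1536,
                 top_n=20, num_draws=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_mem, mem_dim))
        self.W_m = nn.Parameter(torch.randn(feat_dim, mem_dim) * 0.02)
        self.top_n, self.num_draws = top_n, num_draws

    def forward(self, v, t):
        x_vt = torch.cat([v, t], dim=-1)              # (B, feat_dim)
        logits = x_vt @ self.W_m @ self.memory.T      # Eq. (3), pre-softmax
        w = torch.softmax(logits, dim=-1)             # weights over N vectors
        top_w, top_idx = w.topk(self.top_n, dim=-1)   # relevant vectors
        probs = top_w / top_w.sum(dim=-1, keepdim=True)
        draws = torch.multinomial(probs, self.num_draws, replacement=True)
        sampled = self.memory[top_idx.gather(-1, draws)]  # (B, M, mem_dim)
        return sampled.mean(dim=-2)                   # Eq. (4): x_m
```

Note that the stochastic draws make the module's output vary across forward passes by design, which is exactly what the rejection sampling in Section 2.2 exploits.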
**LLM Prompting** Existing studies on LLMs have demonstrated the significant impact of prompting on model performance (Brown et al., 2020; Lester et al., 2021; Ouyang et al., 2022; Liu et al., 2022). For better prompting, we use the visual feature $v$ and the correlation vector $x_m$ as soft prompts to guide our LLM for HMD. Specifically, we feed $v$, $x_m$, as well as the original text $X$, into the LLM to determine the label $\hat{Y}$, i.e., hateful or non-hateful. In doing so, a prompt $p$ is required to instruct the LLM to process the input and predict the HMD label.³ Therefore, we feed $v$, $x_m$, $X$, and $p$ into our LLM (i.e., Vicuna (Chiang et al., 2023)) and obtain the hidden vector $h$ from its last layer by
$$h = LLM(v, x_m, X, p)$$ (5)
Afterwards, we compute the HMD scores from the vector $h$ by
$$s_h = e_h \cdot h, \quad s_{nh} = e_{nh} \cdot h$$ (6)
where $e_h$ and $e_{nh}$ denote trainable embeddings corresponding to the hateful and non-hateful labels and leading to their scores $s_h$ and $s_{nh}$, respectively. Finally, we compare $s_h$ and $s_{nh}$ and output the final prediction $\hat{Y}$ according to which one is higher; a sketch of this scoring step follows.

³An example prompt is "Is the meme hateful or non-hateful?"
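The sketch below illustrates Eqs. (5)-(6), assuming a HuggingFace-style decoder that accepts `inputs_embeds`; the class name, tensor layout, and interface are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HMDScoringHead(nn.Module):
    """Sketch of Eqs. (5)-(6): v and x_m act as soft prompts prepended to
    the embedded text and instruction, the LLM's last hidden state is read
    out, and two trainable label embeddings yield the hateful and
    non-hateful scores."""
    def __init__(self, llm, hidden_dim):
        super().__init__()
        self.llm = llm
        self.e_h = nn.Parameter(torch.randn(hidden_dim))   # "hateful"
        self.e_nh = nn.Parameter(torch.randn(hidden_dim))  # "non-hateful"

    def forward(self, v, x_m, text_emb, prompt_emb):
        # v: (B, n_v, D) visual tokens; x_m: (B, D) correlation vector
        seq = torch.cat([v, x_m.unsqueeze(1), text_emb, prompt_emb], dim=1)
        h = self.llm(inputs_embeds=seq).last_hidden_state[:, -1]  # Eq. (5)
        s_h, s_nh = h @ self.e_h, h @ self.e_nh                   # Eq. (6)
        return s_h, s_nh  # predict "hateful" wherever s_h > s_nh
```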
### 2.2 Self-Rejection Training

In this process, we further assess the correlation vectors $x_m$ to evaluate whether they contain crucial information (e.g., contradictions between image and text) that leads to hateful content, and we adjust the memory module accordingly so that it iteratively produces better outputs. The self-rejection training process consists of two steps, namely reward model training and rejection sampling, with their details elaborated in the following text.

**Reward Model Training** The reward model measures the effectiveness of the encoded correlation vector in representing visual and text features for detecting hateful memes. We train the reward model to distinguish correlation information that is relevant or irrelevant to HMD, so as to ensure that it assigns high scores to vectors helpful to the task. To this end, we treat HMD-related and HMD-irrelevant cross-modal instances as positive and negative samples, respectively. In doing so, we randomly select an instance, i.e., an image-text pair $(I_r, X_r)$, from the training data and treat it as a positive sample. Then we generate a caption $C_r$ for the image in this instance and combine it with the image to form a negative sample $(I_r, C_r)$. Later we apply the same visual encoding and memory module in our HMD pipeline to compute the correlation vectors for the positive and negative samples by
$$v_{m}^{pos} = f_m(f_{ve}(I_r), X_r), \quad v_{m}^{neg} = f_m(f_{ve}(I_r), C_r)$$ (7)
to obtain the positive and negative correlation vectors $v_{m}^{pos}$ and $v_{m}^{neg}$, respectively. Finally, we feed $v_{m}^{pos}$ and $v_{m}^{neg}$ to the reward model $f_r$, which is a multi-layer perceptron, and compute the scores (denoted as $s_{pos}$ and $s_{neg}$, respectively) for the vectors by
$$s_{pos} = \text{sigmoid}(f_r(v_{m}^{pos})), \quad s_{neg} = \text{sigmoid}(f_r(v_{m}^{neg}))$$ (8)
and compute the loss $L_r$ to optimize the reward model by
$$L_r = -\log s_{pos} - \log(1 - s_{neg})$$ (9)

| | HMC (Dev) | | Memeplate (Dev) | | Memeplate (Test) | |
|----------------|------|-------|------|------|------|------|
| | ACC | AUROC | ACC | F1 | ACC | F1 |
| Base (BLIP2) | 71.36±0.24 | 81.05±0.20 | 51.45±0.21 | 45.14±0.26 | 51.72±0.24 | 45.51±0.22 |
| +M | 72.06±0.22 | 81.86±0.25 | 52.81±0.21 | 46.08±0.24 | 52.87±0.19 | 46.23±0.23 |
| +SRT | 72.44±0.19 | 82.19±0.20 | 53.01±0.24 | 46.44±0.23 | 53.29±0.22 | 46.84±0.20 |
| +M+SRT | **72.70±0.20** | **82.88±0.21** | **53.47±0.18** | **46.92±0.22** | **53.63±0.21** | **47.13±0.20** |
| Base (LLM) | 76.24±0.30 | 84.46±0.22 | 53.65±0.24 | 47.84±0.28 | 55.42±0.21 | 49.03±0.19 |
| +M | 77.08±0.24 | 85.44±0.20 | 55.10±0.18 | 48.98±0.22 | 56.07±0.20 | 49.43±0.26 |
| +SRT | 77.46±0.18 | 85.69±0.21 | 55.31±0.20 | 49.34±0.24 | 56.39±0.19 | 49.77±0.22 |
| +M+SRT | **78.08±0.24** | **86.84±0.19** | **56.52±0.17** | **50.07±0.23** | **56.83±0.20** | **50.34±0.19** |

Table 1: The performance (i.e., the average and standard deviation of different evaluation metrics) of various models on the development and test sets of the HMC and Memeplate datasets. "Base (BLIP2)" stands for the BLIP2 models used for HMC and Memeplate; "Base (LLM)" stands for the MiniGPT-4 and Ziya-BLIP2-Visual models used for HMC and Memeplate, respectively. "M" and "SRT" are abbreviations of the memory module and self-rejection training, respectively. * marks the results where improvements are statistically significant at the $p \leq 0.05$ level over all baselines.

**Rejection Sampling** This process includes two steps, namely correlation vector scoring and rejection sampling fine-tuning, which are elaborated as follows. In correlation vector scoring, for a particular input meme $(I, X)$, we run sampling in our memory module $T$ times and obtain $T$ correlation vectors, denoted as $x_m^1, \ldots, x_m^T$. Then we feed all correlation vectors to the reward model $f_r$ and compute the score for each of them. In rejection sampling fine-tuning, we select the correlation vector with the highest score (denoted as $x^*_m$) and use it as the gold standard to assess whether the correlation vector from the memory module is good enough to carry essential task-specific information for HMD. Finally, we compute the loss
$$L_{rsft} = \|x^*_m - x_m\|$$ (10)
to update the memory module, with $\|\cdot\|$ denoting the norm of a vector and $x_m$ obtained from Eq. (4). A sketch of both steps is given below.
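The following sketch covers the reward-model loss (Eq. 9) and one rejection-sampling fine-tuning step (Eq. 10); the function signatures and batching are illustrative assumptions rather than the released implementation.

```python
import torch

def reward_loss(f_r, v_pos, v_neg):
    """Eq. (9): push scores of HMD-related correlation vectors up and
    scores of caption-based (HMD-irrelevant) ones down; f_r is the MLP."""
    s_pos = torch.sigmoid(f_r(v_pos))
    s_neg = torch.sigmoid(f_r(v_neg))
    return -(torch.log(s_pos) + torch.log(1.0 - s_neg)).mean()

def rejection_sampling_step(memory, f_r, v, t, T=4):
    """Eq. (10): draw T stochastic correlation vectors, keep the one the
    (frozen) reward model scores highest as x_m*, and pull a fresh draw
    x_m toward it. In training, this step alternates with the supervised
    cross-entropy updates (see Section 3.3)."""
    with torch.no_grad():
        cands = torch.stack([memory(v, t) for _ in range(T)])  # (T, B, D)
        scores = torch.sigmoid(f_r(cands)).squeeze(-1)         # (T, B)
        best = scores.argmax(dim=0)                            # (B,)
        x_star = cands[best, torch.arange(best.numel())]       # (B, D)
    x_m = memory(v, t)                          # stochastic draw, Eq. (4)
    loss = (x_star - x_m).norm(dim=-1).mean()   # Eq. (10)
    loss.backward()                             # gradients for the memory
    return loss.item()
```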
3 EXPERIMENT SETTINGS

3.1 DATASETS

We employ two datasets in our experiments, namely the HMC dataset (Kiela et al., 2020) and Memeplate (Li et al., 2022). HMC is an English dataset including 10,000 instances of memes and their corresponding text. Memeplate is a Chinese dataset for multimodal humor recognition, which contains 203 templates and 5,184 memes with manually annotated humor levels. We use this dataset to further comprehensively evaluate the capability of our approach, since humor recognition is also a challenging classification task that necessitates a deep understanding of both visual and text elements. For both datasets, we use their official training, development, and test data splits.⁴ Note that, since the labels of the HMC test set are not publicly available, we follow existing studies (Radford et al., 2021; Goyal et al., 2022; Singh et al., 2022; Cao et al., 2023; Koutlis et al., 2023) and evaluate all models on its development set.

⁴We report the statistics of the datasets in Appendix A.

3.2 BASELINES

In our experiments, we employ MiniGPT-4 as the backbone model (which is based on BLIP2 (Li et al., 2023)) for the English task; it is recognized as a prominent multimodal LLM with promising performance across numerous multimodal tasks. For Chinese, we use Ziya-BLIP2-Visual (Zhang et al., 2022), which employs the same architecture as MiniGPT-4. We also try settings with small language models, i.e., GPT-2 (Radford et al., 2019), following the same BLIP2 architecture. To compare with the proposed approach, we run experiments with the following three baseline models: (1) BLIP2, MiniGPT-4, and Ziya (Base), which are the original versions of BLIP2, MiniGPT-4, and Ziya-BLIP2-Visual; (2) the Base+M model, where the base models in (1) are enhanced by the proposed memory module (including visual information encoding, cross-modal memorizing, and LLM prompting) and the memory is optimized by the cross-entropy loss from comparing the model prediction with the gold standard of the task; (3) the Base+SRT model, where we directly use self-rejection sampling to enhance the base models by concatenating the visual and text features (i.e., $x_{vt}$) to form the correlation vector (i.e., $x_m$), randomly setting 33% of the values in $x_m$ to zero to facilitate self-rejection training.⁵

| Method | ACC | AUROC |
|-------------------------|------|-------|
| Muennighoff (2020) | - | 81.56 |
| Velioglu & Rose (2020) | 70.93 | 75.21 |
| Lippe et al. (2020) | - | 77.39 |
| Radford et al. (2021) | - | 77.30 |
| Goyal et al. (2022) | - | 73.40 |
| Nandakumar (2022) | - | 81.55 |
| Singh et al. (2022) | - | 76.70 |
| Cao et al. (2023) | 72.98 | 82.45 |
| Koutlis et al. (2023) | 73.60 | 80.10 |
| △ Liu et al. (2023a) | 76.20 | 84.57 |
| Ours | 78.08 | 86.84 |

Table 2: Comparison of our approach with existing studies on HMC. "△" marks our own runs of multi-modal systems with LLMs. We report the average performance of our approach only on the development set since the gold standard labels of the test set are not publicly available.

| Method | ACC | F1 |
|-------------------------|------|-------|
| ⋆RB + ResNet-50 | 51.08 | 45.82 |
| ⋆RB + XCiT | 49.32 | 46.18 |
| ⋆RB + BEiT | 52.30 | 45.33 |
| ⋆RB + Faster-RCNN | 50.54 | 43.31 |
| †Yang et al. (2022) | 52.57 | 46.21 |
| †△ Yang et al. (2023) | 55.43 | 48.80 |
| †△ Hu et al. (2023) | 55.08 | 48.97 |
| †△ University (2023) | 55.76 | 49.49 |
| Ours | 56.83 | 50.34 |

Table 3: Performance comparison of different models on the test set of the Memeplate dataset. Scores marked by "⋆" and "†" are from Li et al. (2022) and our own runs on this dataset, respectively. "RB" stands for the RoBERTa model; "△" indicates multimodal models using LLMs to predict labels.

3.3 IMPLEMENTATION DETAILS

The HMD pipeline of our approach for HMC and Memeplate is based upon BLIP2, MiniGPT-4, and Ziya-BLIP2-Visual, utilizing 12, 32, and 40 layers of multi-head attention with 1.5B, 7B, and 13B parameters, respectively. Specifically, the visual encoding and LLM prompting processes in our approach follow the same procedures as those applied in these foundation models.
The vision Transformer and Q-Former in visual encoding consist of 40 and 12 Transformer layers, respectively. In fine-tuning our approach, we alternate between the following two procedures every 100 steps: (1) updating the parameters of the components in visual encoding, the memory module, and the LLM using the cross-entropy loss from comparing the predicted labels with the gold standards, and (2) updating the reward model and the memory module through self-rejection training. For evaluation, we follow existing studies (Kiela et al., 2020; Li et al., 2022; Cao et al., 2023; Koutlis et al., 2023) and use accuracy and AUROC for HMC, and accuracy and F1 for Memeplate. We try a series of hyperparameter settings and select the one that yields the best performance on the development set in our final experiments,⁶ i.e., the number of memory vectors (i.e., $N$) is 200 for HMC and 150 for Memeplate; the relevant memory size (i.e., $N'$) and the sampling time (i.e., $T$) are set to 20 and 4, respectively; the learning rate is $1 \times 10^{-6}$ and the batch size is 32 for both datasets. We run the baselines and our approach five times with different random seeds and record the average and standard deviation of model performance.

4 RESULTS AND ANALYSIS

4.1 OVERALL PERFORMANCE

We run the baselines and our approach on the HMC and Memeplate datasets and report the average model performance with standard deviations in Table 1. We make the following observations. First, our approach consistently outperforms the baselines, which indicates the effectiveness of the proposed approach for HMD, given that the baseline models already achieve promising performance. Second, adding the memory module (i.e., "+M") or self-rejection training (i.e., "+SRT") leads to noticeable improvements over the "Base" model, which illustrates the effectiveness of the individual modules in capturing correlations between visual and text information to improve model performance. Third, compared with "+M", "+SRT" presents higher performance across different settings, indicating the superiority of discriminative learning on task-specific information. Fourth, our full model with both the memory module and self-rejection training outperforms all baselines, demonstrating the necessity of combining them to further enhance HMD.

We further compare our approach with existing studies and report the results for HMC and Memeplate in Table 2 and Table 3, respectively. It is observed that our approach outperforms previous studies on both datasets, especially the ones using powerful pre-trained multimodal models (Nandakumar, 2022; Singh et al., 2022; Koutlis et al., 2023).

⁵For self-rejection training with the Base+SRT baseline, we update the parameters of visual encoding and the token embeddings of the input text, so as to deal with the absence of the memory module.

⁶For HMC, we randomly select 10% of the training data and use it to tune hyper-parameters.

Figure 3: Curves of model performance on the development set of HMC and the test set of Memeplate with respect to different numbers of memory vectors used in the memory module.
| | HMC (Dev) | | Memeplate (Dev) | | Memeplate (Test) | |
|--------|------|-------|------|------|------|------|
| | ACC | AUROC | ACC | F1 | ACC | F1 |
| OP | 76.78±0.26 | 84.80±0.20 | 53.94±0.21 | 48.39±0.24 | 55.90±0.27 | 49.55±0.22 |
| Co-Att | 76.96±0.22 | 84.91±0.23 | 54.57±0.18 | 48.50±0.22 | 55.90±0.24 | 49.19±0.21 |

Table 4: Performance of different models with the memory module in our approach replaced by the outer product operation (OP) and the co-attention mechanism (Co-Att).

| | HMC (Dev) | | Memeplate (Dev) | | Memeplate (Test) | |
|------|------|-------|------|------|------|------|
| | ACC | AUROC | ACC | F1 | ACC | F1 |
| M | 77.70±0.21 | 88.57±0.20 | 56.31±0.14 | 49.64±0.20 | 56.60±0.22 | 49.83±0.19 |
| LoRA | 77.96±0.24 | 88.75±0.23 | 56.40±0.19 | 49.81±0.26 | 56.77±0.20 | 50.07±0.23 |

Table 5: Performance of our approach with different fine-tuning strategies. "M" stands for the setting where we only fine-tune the memory module in the pipeline and fix the parameters of the visual encoding and the LLM; "LoRA" refers to using LoRA to fine-tune the LLM, where the parameters of the visual encoding and the memory module are also updated simultaneously.

The reason behind this observation is that these multimodal models generally perform HMD in the same way as image captioning, which focuses on the content shared by image and text rather than on the correlations between them that lead to other (i.e., hateful) information. On the contrary, our approach correctly distinguishes such correlations through its particular model design and thus achieves better performance.

4.2 EFFECT OF THE MEMORY MODULE

The memory matrix in the proposed memory module represents the semantic space of the correlation between visual and text features for the specific task. Therefore, it is necessary to investigate the effect of the matrix, especially its number of vectors (i.e., $N$), on HMD performance. In doing so, we run experiments with different $N$ on HMC and Memeplate, where the curves of model performance with respect to $N$ are illustrated in Figure 3, with the following observations. First, when $N$ is relatively small, increasing the number of memory vectors leads to noticeable improvements in model performance, which is not surprising since a smaller $N$ corresponds to a restricted space for capturing essential correlation information. Second, as $N$ grows, the performance converges, demonstrating that once the memory vectors cover enough of the cross-modal correlation information that results in hateful content, adding more vectors has a limited effect on further benefiting HMD.

In addition, to better illustrate the effect of the memory module when it coordinates with self-rejection training, we run two additional approaches where the memory module in our approach is replaced by the outer product operation (OP) and co-attention (Co-Att) (Lu et al., 2016). We report the experimental results of the two models in Table 4 and observe that they achieve worse performance compared with the "+SRT" baseline as well as our full model shown in Table 1, which demonstrates the effectiveness of our design with memory and self-rejection training. This observation further confirms the superiority of modeling task-specific correlations for HMD, since OP and Co-Att are widely used to align and fuse multimodal features in tasks such as image captioning and have proven effective in modeling the semantics shared by the modalities, which is different from the correlation information between visual and text features in our task. In detail, for OP, we compute the outer product of the visual and text features, flatten the resulting matrix, and use the resulting vector as the correlation vector $x_m$; for Co-Att, we utilize co-attention to fuse the multimodal features and directly regard the output as the correlation vector. A sketch of the OP variant is given below.
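This is a minimal sketch of the OP variant. The projection `proj` (e.g., an `nn.Linear` mapping $d_v \cdot d_t$ to the memory dimension) is our assumed addition to keep the output shape compatible with $x_m$; the paper does not specify this detail.

```python
import torch

def outer_product_correlation(v, t, proj):
    """Sketch of the OP baseline from Table 4: the correlation vector is
    the flattened outer product of the visual and text features, then
    projected to the dimension of x_m (the projection is an assumption)."""
    op = torch.einsum('bi,bj->bij', v, t)  # (B, d_v, d_t) outer product
    return proj(op.flatten(start_dim=1))   # flatten and project to (B, d_m)
```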
For the two additional models, in detail, for OP, we compute the outer product of the visual and text features, flatten the resulting matrix, and use the resulted vector as the correlation vector $\mathbf{x}_m$; for Co-Att, we utilize co-attention to fuse multimodal features and directly regard the output as the correlation vector. | ID | Dataset | Prompt | |----|---------|--------| | 1 | HMC | The meme is | | | Memeplate | The humor level of the meme is | | 2 | HMC | You are asked to predict whether the meme is hateful or non-hateful based on the given visual and text features. The meme is | | | Memeplate | You are asked to predict the humor level (ranging from 1-3) of the meme based on the given visual and text features. The humor level of the meme is | | 3 | HMC | Is the meme hateful or non-hateful? | | | Memeplate | What is the humor level of the meme? | | 4 | HMC | You are asked to predict whether the meme is hateful or non-hateful based on the given visual and text features. Is the meme hateful or non-hateful? | | | Memeplate | You are asked to predict the humor level (ranging from 1-3) of the meme based on the given visual and text features. What is the humor level of the meme? | Table 6: Prompts used to investigate the robustness of our approach. | Prompt ID | HMC | Memeplate | |-----------|-----|-----------| | | Dev | Dev | Test | | | ACC | AUROC | ACC | F1 | ACC | F1 | | 1 | 78.00±0.23 | 86.74±0.20 | 56.48±0.19 | 50.00±0.21 | 56.78±0.20 | 50.27±0.23 | | 2 | 77.96±0.20 | 84.64±0.21 | 56.60±0.22 | 50.13±0.18 | 56.80±0.21 | 50.21±0.20 | | 3 | 78.08±0.24 | 86.84±0.19 | 56.52±0.17 | 50.07±0.23 | 56.83±0.20 | 50.34±0.19 | | 4 | 77.92±0.19 | 86.68±0.21 | 55.42±0.23 | 50.10±0.20 | 56.90±0.23 | 50.38±0.21 | Table 7: Performance of our approach on HMC and Memeplate with the prompts from Table 6. ### 4.3 Effect of Fine-tuning Strategy Fine-tuning strategies have great impact in model training. To investigate the influence of different strategies, we experiment with two settings: (1) we only update the parameters in the memory module and fix those in the visual encoding and the LLM; (2) we use LoRA (Hu et al., 2021) to fine-tune the LLM and fine-tune all parameters in the visual encoding and the memory module. The results are reported in Table 5 where observations drawn as follows. First, when only the memory parameters are fine-tuned, there is a slight drop in performance compared with full parameter fine-tuning (see Table 1), owing to the reason of potential information mismatch among updated and fixed modules. However, the fact of slightly dropping is a further confirmation of our model design by indicating the power of memory compared with the non-memory baseline. Second, the performance of LoRA fine-tuning was comparable to the full-parameter fine-tuning, which demonstrates the robustness and flexibility of our approach working with various effective fine-tuning techniques. ### 4.4 Effect of Different Prompts Existing studies demonstrated that different designs on prompting have significant influences on LLM performance (Schick & Schütze, 2021; Liu et al., 2023b; White et al., 2023). Therefore, we analyze model performance with various prompts and provide insights on the robustness and generalization capabilities of our approach. In doing so, we try different prompts illustrated in Table 6 where the prompts differ from each other from the following two perspectives: (1) with and without task description in the prompt (i.e., prompt 2 vs. prompt 1 and prompt 4 vs. 
### 4.4 Effect of Different Prompts

Existing studies have demonstrated that different prompt designs have a significant influence on LLM performance (Schick & Schütze, 2021; Liu et al., 2023b; White et al., 2023). Therefore, we analyze model performance with various prompts and provide insights into the robustness and generalization capabilities of our approach. In doing so, we try the different prompts illustrated in Table 6, which differ from each other in two respects: (1) with and without a task description in the prompt (i.e., prompt 2 vs. prompt 1 and prompt 4 vs. prompt 3), and (2) whether or not they follow a question format (i.e., prompt 3 vs. prompt 1 and prompt 4 vs. prompt 2). We report the performance of our approach with the different prompts in Table 7, where our approach works well with all prompts and produces stable HMD results, demonstrating the robustness of applying an LLM in our approach.

### 4.5 Case Study

In addition to the quantitative studies, we investigate three similar memes for qualitative analysis.⁸ The memes and the predictions from different models, as well as the gold standard, are illustrated in Figure 4, where correct and incorrect predictions are highlighted in green and red, respectively. Herein, memes (a) and (b) share the same text while memes (a) and (c) share the same visual content, so that (a) is the hateful one whereas (b) and (c) are not. By investigating the results, we observe that the three baselines struggle to consistently predict HMD correctly, whereas our full model is able to accurately identify all meme types. The reason is similar to that given for the OP and Co-Att replacements in §4.2: the hateful information in memes generally derives from the correlation (i.e., the contradiction relationship in this case) between visual and text features rather than from how well the image and text match. The baselines have limitations that prevent them from learning such correlation, either lacking a particular mechanism to do so or not being equipped with effective guidance. In contrast, the memory module and self-rejection training applied in our approach provide a comprehensive solution to learn, weight, and enhance such information so as to better identify hateful information in memes.

⁸ We present another case in Appendix B.

Figure 4: Three memes and the predictions of different models on them, with the gold standards also presented. Correct and incorrect labels are highlighted in green and red, respectively.

5 RELATED WORK

HMD is a crucial task for safeguarding the digital sphere from harmful content and is related to tasks such as meme emotion analysis, offensive meme detection, and harmful meme detection (Suryawanshi et al., 2020; Sharma et al., 2020; Pramanick et al., 2021a,b; Kocón et al., 2021; Sharma et al., 2022a,b; Hakimov et al., 2022). Although hateful memes are often conveyed through both images and texts, some early studies for HMD leverage unimodal approaches, where only one type of modality is used to detect them (Ren et al., 2015; He et al., 2016; Devlin et al., 2019; Kiela et al., 2021). Another stream of research introduces multimodal approaches that combine both image and text encoders for better results, where superior visual and text encoders (such as MMBT (Kiela et al., 2019), ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019), CLIP (Radford et al., 2021), Flamingo (Alayrac et al., 2022), FLAVA (Singh et al., 2022), and SLIP (Mu et al., 2022)) are used to extract features from images and text, respectively; the multimodal features are then further aligned or fused with a particular module or operation, such as vector concatenation or attention (Goyal et al., 2022; Nandakumar, 2022; Koutlis et al., 2023; Hee et al., 2023).
To further enhance HMD, model ensembling (Muennighoff, 2020; Lippe et al., 2020; Sandulescu, 2020), additional resources (e.g., extra training data and features) (Velioglu & Rose, 2020; Zhu, 2020), contrastive learning (Liang et al., 2022; Qu et al., 2023), and language model prompting (Cao et al., 2023) have been employed to improve the ability to capture multimodal features, yet limited attention has been paid to modeling the essential relationships between visual and text content that lead to hateful information. Compared with existing studies, our approach differs by treating HMD as a problem of modeling task-specific correlation information rather than straightforwardly fusing and matching visual and text features. In particular, the design of the memory module and self-rejection training provides an effective learning and optimization solution for such correlation information, showing its potential to be applied to a range of tasks with a different nature of describing images, such as captioning.

6 CONCLUSION

In this paper, we propose an LLM-driven approach for HMD with cross-modal memorizing and self-rejection training, which learns and enhances the task-specific correlation information between visual and text features that results in hateful memes. Experimental results on English and Chinese benchmark datasets confirm the validity of the proposed approach, which outperforms strong baselines and existing studies and achieves state-of-the-art performance. Analyses further show that the combination of memory and self-rejection training is superior at learning such correlation between modalities, thereby proving the effectiveness of our model design for HMD.

REFERENCES

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a Visual Language Model for Few-shot Learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language Models are Few-shot Learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, and Jing Jiang. Prompting for Multimodal Hateful Meme Classification. *arXiv preprint arXiv:2302.04156*, 2023.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%+ ChatGPT Quality, March 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, June 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. *ICLR*, pp. 1–21, 2021.

Raul Gomez, Jaume Gibert, Lluis Gomez, and Dimosthenis Karatzas. Exploring Hate Speech Detection in Multimodal Publications.
In *Proceedings of the IEEE/CVF winter conference on applications of computer vision*, pp. 1470–1478, 2020. Priya Goyal, Quentin Duval, Isaac Seessel, Mathilde Caron, Ishan Misra, Levent Sagun, Armand Joulin, and Piotr Bojanowski. Vision Models are More Robust and Fair when Pretrained on Uncurated Images without Supervision. *arXiv preprint arXiv:2202.08360*, 2022. Sherzod Hakimov, Gullal S Cheema, and Ralph Ewerth. TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. *arXiv preprint arXiv:2204.06299*, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Ming Shan Hee, Wen-Haw Chong, and Roy Ka-Wei Lee. Decoding the Underlying Meaning of Multimodal Hateful Memes. *arXiv preprint arXiv:2305.17678*, 2023. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank Adaptation of Large Language Models. *arXiv preprint arXiv:2106.09685*, 2021. Jinyi Hu, Yuan Yao, Chongyi Wang, Shan Wang, Yinxu Pan, Qianyu Chen, Tianyu Yu, Hanghao Wu, Yue Zhao, Haoye Zhang, Xu Han, Yankai Lin, Jiao Xue, Dahai Li, Zhiyuan Liu, and Maosong Sun. Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages. 2023. Douwe Kiela, Suvrat Bhoshan, Hamed Firooz, Ethan Perez, and Davide Testuggine. Supervised Multimodal Bitransformers for Classifying Images and Text. *arXiv preprint arXiv:1909.02950*, 2019. Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes. *Advances in neural information processing systems*, 33:2611–2624, 2020.
XUCAA0XnPC
Even if it does, how can the client know whether the server is malicious in its part of the model as well? For instance, it could shuffle or modify the outputs of its server nets such that only some of them are useful for the client model.
ENSEMBLER: COMBATING MODEL INVERSION ATTACKS USING MODEL ENSEMBLE DURING COLLABORATIVE INFERENCE

Anonymous authors Paper under double-blind review

ABSTRACT

Deep learning models have exhibited remarkable performance across various domains. Nevertheless, burgeoning model sizes compel edge devices to offload a significant portion of the inference process to the cloud. While this practice offers numerous advantages, it also raises critical concerns regarding user data privacy. In scenarios where the cloud server's trustworthiness is in question, a practical and adaptable method to safeguard data privacy becomes imperative. In this paper, we introduce Ensembler, an extensible framework designed to substantially increase the difficulty of conducting model inversion attacks for adversarial parties. Ensembler leverages model ensembling on the adversarial server and runs in parallel with existing approaches that introduce perturbations to sensitive data during collaborative inference. Our experiments demonstrate that, when combined with even basic Gaussian noise, Ensembler can effectively shield images from reconstruction attacks, achieving recognition levels that fall below human performance in some strict settings and significantly outperforming baseline methods lacking the Ensembler framework.

1 INTRODUCTION

In numerous critical domains, deep learning (DL) models have demonstrated exceptional performance compared to traditional methods, including image classification (Deng et al., 2009; Dosovitskiy et al., 2021), natural language processing (Brown et al., 2020), protein prediction (Jumper et al., 2021), and more. One noteworthy trend accompanying these impressive advances is the escalating size of the DL models employed for these tasks (Hu et al., 2021), with the famous GPT-3 model containing 175 billion parameters (Brown et al., 2020). As a result, when tasks necessitate the involvement of edge devices such as mobile phones, reducing the computational workload on these devices becomes imperative. A prevalent approach is to offload a substantial portion of the workload to a cloud server capable of executing extensive computations. This framework can be conceptualized as collaborative computing, where a client collaborates with a server offering computation-as-a-service (CaaS).

Recently, some attention in the research community has shifted to the privacy of the client's sensitive data in such a framework. While the client inherently trusts itself, the server may act as an adversarial entity seeking to compromise the user's privacy during the inference process. This risk becomes particularly pronounced when DL models are tasked with handling sensitive data, such as disease classification or facial authentication, which require access to medical or facial user information. In other scenarios, the client could be a small company that holds private models and uses the server solely for the purpose of providing its service; it likewise does not want the server to access the data of its customers, which sometimes contains sensitive information. With the prevalence of edge computing, there is an increasing need for researchers to develop machine learning frameworks that support secure, accurate, and efficient machine learning services; works in this area are often categorized under the term privacy-preserving machine learning (PPML).
There have been multiple works addressing this formidable challenge of safeguarding the client's sensitive information in collaborative inference scenarios, an important part of the entire PPML framework. For an extensive discussion of different algorithmic and architectural choices and their impacts on privacy protection, we refer readers to Section 5 and Table 2 of the comprehensive survey by Xu et al. (2021). In this paper, we simply group existing approaches into two categories: encryption-based algorithms, which guarantee privacy at the cost of an inference slowdown of up to thousands of times (Mishra et al., 2020; Knott et al., 2021; Tan et al., 2021; Reagen et al., 2021; Rathee et al., 2020; Lam et al., 2023; Watson et al., 2022), and perturbation-based algorithms, which operate on the intermediate layers of a DL architecture and introduce noise to thwart the adversary's ability to recover the client input (Mireshghallah et al., 2020; Osia et al., 2018; Lu et al., 2022; Sirichotedumrong & Kiya, 2021). Since perturbation-based algorithms operate directly on the intermediate outputs from the client, they incur minimal additional complexity during the inference process. However, as opposed to the guaranteed privacy provided by encryption-based algorithms, perturbation-based algorithms suffer from the possibility of privacy leakage, meaning that sensitive private information may still be recoverable by the adversarial server despite the introduced perturbations.

He et al. (2019) presented one of the first systematic studies of model inversion attacks (MIA) on collaborative inference (CI). Their research shows that a shadow network can effectively emulate the client's secret network, enabling the recovery of raw images, especially when the client retains only a single convolutional layer. While the client can preserve more privacy by keeping more layers, such a method is less practical in the real world due to the limited computational power of edge devices. Mireshghallah et al. (2020) proposed Shredder, which uses a noise injection layer before the client sends out its computed results, reducing the mutual information between client and server while maintaining good classification accuracy. Nevertheless, Lu et al. (2022) demonstrated that Shredder falls short in safeguarding facial images from recovery. In our own experiments with the noise injection layer proposed by Shredder, applied to a ResNet-18 architecture on CIFAR-10, we observed significant accuracy drops with combined multiplicative and additive noise; on the other hand, simple additive noise resulting in approximately a 5 percent drop in accuracy failed to protect images from recovery, as depicted in Figure 1. Lu et al. (2022) proposed to use a policy-based processor between the client and server to protect private information, but the figures in their work seem to indicate that the effectiveness of their policy should be attributed to removing from the original image some regions that contain sensitive data. While such an approach is effective in some cases, it falls short in scenarios where sensitive information is embedded throughout the image, such as in facial authentication tasks.

In this paper, we aim to make the following contributions to the research community. Firstly, we expand the systematic analysis of model split strategies between the client and server, focusing on more complex architectures commonly used in practice.
Second, we take a different path from approaches that propose various modifications to the data and introduce Ensembler, a secure collaborative inference framework designed to substantially increase the effort required to recover the client input. Ensembler is not only a stand-alone framework that significantly increases the adversarial server's reconstruction difficulty, but it can also be seamlessly integrated with existing, more complex algorithms to construct practical and secure inference architectures tailored to specific needs.

The remainder of this paper is organized as follows: Section 2 introduces the background of collaborative inference and related works, and formally defines the threat model. Section 3 offers a systematic analysis of the impact of different model split strategies on server-side reconstruction difficulty. Section 4 introduces Ensembler and details its design for secure collaborative inference. Section 5 presents the empirical experiments related to Ensembler and showcases its effectiveness in protecting the client's private data, and Section 6 concludes the paper.

2 BACKGROUND

2.1 COLLABORATIVE MACHINE LEARNING

The development of mobile graphics processing units (GPUs) has ushered in a new era where machine learning tasks are increasingly deployed with a portion of the computation handled by edge devices. Related areas include federated learning, where multiple edge devices jointly train a deep learning model (McMahan et al., 2017; Yu et al., 2023; Yaldiz et al., 2023); split learning, where a DL model is split into two or more parts that the client and server train jointly (Poirot et al., 2019); and collaborative inference, where a DL model is split with only a portion deployed on the server to provide the service (He et al., 2019; Osia et al., 2018). In this paper, we focus on the inference part and assume that the training phase of the DL model is secure. Though the training phase is sometimes also susceptible to adversarial attacks aimed at stealing sensitive information (Inan et al., 2021; Li et al., 2022; Zhang et al., 2021), private inference is more relevant in most practical scenarios.

### 2.2 Threat Model

In this paper, we consider a collaborative inference task between the client and the server, where the server acts as a semi-honest adversary that aims to steal the raw input from the client. Formally, we define the system as collaborative inference on a pre-trained DNN model, $M(x, \theta)$, where the client holds the first few and the last few layers (i.e., the "head" and "tail" of the neural network), denoted as $M_{c,h}(x, \theta_{c,h})$ and $M_{c,t}(x, \theta_{c,t})$. The remaining layers of the DNN are deployed on the server and denoted as $M_s(x, \theta_s)$. $\theta$ denotes the trained weights of $M$, where $\theta = \{\theta_{c,h}, \theta_s, \theta_{c,t}\}$. The complete collaborative pipeline thus makes a prediction for an incoming image $x$ via $M_{c,t}[M_s[M_{c,h}(x)]]$. During the process, the server has access to $\theta_s$ and the intermediate output $M_{c,h}(x)$. In addition, we assume that it has a good estimate of the DNN used for inference; that is, it has auxiliary information on the architecture of the entire DNN, as well as a dataset from the same distribution as the private training dataset used to train the DNN. However, it does not necessarily know the hyper-parameters or the engineering tricks used to train the model. Since the server is a computation-as-a-service (CaaS) provider, it is assumed to have reasonably large computational resources. While powerful in computing, the server is restricted from querying the client, so it cannot obtain a direct relationship between the raw input $x$ and the intermediate output $M_{c,h}(x)$. To reconstruct the raw input $x$ from the intermediate output $M_{c,h}(x)$, the server adopts a common model inversion attack (He et al., 2019; Lu et al., 2022; Dosovitskiy & Brox, 2016). It constructs a shadow network $\tilde{M}(x, \tilde{\theta}_{c,h}, \theta_s, \tilde{\theta}_{c,t}) : \{\tilde{M}_{c,h}, M_s, \tilde{M}_{c,t}\}$ such that $\tilde{M}$ simulates the behavior of $M$. After training $\tilde{M}$, the adversarial server obtains a representation $\tilde{M}_{c,h}$ such that $\tilde{M}_{c,h}(x) \sim M_{c,h}(x)$. As the next step, with the help of a decoder for $\tilde{M}_{c,h}$ that reconstructs the raw image from the intermediate representation, it is able to reconstruct the raw input from $M_{c,h}(x)$, as sketched below.
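The following toy sketch illustrates the inversion step of this attack. For brevity, it trains a decoder directly against a known head network on auxiliary data (standing in for the learned shadow head $\tilde{M}_{c,h}$); the sizes, the random data, and the single-layer networks are illustrative assumptions, not the paper's actual setup.

```python
import jax
import jax.numpy as jnp
import optax

key = jax.random.PRNGKey(0)

# Toy stand-in for the client head M_{c,h} that the attacker wants to invert.
W_head = jax.random.normal(key, (32, 16)) / jnp.sqrt(32.0)
head = lambda x: jax.nn.relu(x @ W_head)

# Attacker's decoder: trained so that decoder(head(x)) approximates x.
def decoder_loss(W_dec, xs):
    recon = head(xs) @ W_dec
    return jnp.mean((recon - xs) ** 2)

W_dec = jnp.zeros((16, 32))
opt = optax.adam(1e-2)
opt_state = opt.init(W_dec)
for _ in range(500):
    key, sub = jax.random.split(key)
    xs = jax.random.normal(sub, (128, 32))  # auxiliary data from the same distribution
    grads = jax.grad(decoder_loss)(W_dec, xs)
    updates, opt_state = opt.update(grads, opt_state)
    W_dec = optax.apply_updates(W_dec, updates)

# Reconstruct a "private" input from its intercepted intermediate output.
x_private = jax.random.normal(jax.random.PRNGKey(1), (1, 32))
x_hat = head(x_private) @ W_dec
```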
### 2.3 Assumptions of Other Related Works

In this section, we provide an overview of the attack models and assumptions adopted in other works related to collaborative inference (CI) under privacy-preserving machine learning (PPML). Since different works potentially use different collaboration strategies between the client and the server, we use the generic notation in which $M_c$ is held by the client and $M_s$ is held by the server. Generally, the attacks from the server fall into three categories:

• **Training Dataset Reconstruction Attacks**, which try to predict whether certain attributes, including but not limited to individual samples, distributions, or certain properties, are members of the private training set used to train $M(x, \theta)$. If successful, the privacy of the training dataset is compromised. We refer readers to the surveys by Hu et al. (2022) and Salem et al. (2023) for more details.

• **Model Inversion Attacks**, which try to recover a particular input during inference when its raw form is not shared by the client. For example, in an image classification task, the client may split $M$ such that it only shares locally computed latent features with the server. Upon a successful model inversion attack, however, the server is able to regenerate the raw image used for the classification task from those latent features. It is important to note that, in this paper, we adopt the same definition of model inversion attacks as He et al. (2019); in other works, the term also refers to attacks that reconstruct the private training dataset. We focus on reconstructing the private raw input for the rest of the paper.

• **Model Extraction Attacks**, which try to steal the parameters and even hyper-parameters of $M$. This type of attack compromises the intellectual property of the private model and is often employed as a sub-routine of model inversion attacks when the server lacks direct access to $M$'s parameters.

Different works also make different assumptions about the capabilities of the server. First, it is widely accepted that the server has sufficiently, yet reasonably, large computing power and resources, as its role is often to provide ML services. Regarding auxiliary information on $M$, the assumptions generally fall into three levels:

• **White Box** assumes that the server has full access to the architectural details of $M$, such as its structure and parameters (Liu et al., 2021).
Different definitions add different auxiliary information available to the server, such as the training dataset (Liu et al., 2021), corrupted raw inputs (Zhang et al., 2020), or a different dataset (Wang & Kurz, 2022). This setting is often associated with attacks that try to reconstruct the private training dataset (Wang & Kurz, 2022; Zhang et al., 2020; Haim et al., 2022).

• **Black Box** assumes that the server has no information about either $M$ or the training dataset. However, it is allowed to send unlimited queries to the client to obtain $M_c(x)$ (Xu et al., 2023; Kahla et al., 2022).

• **Query-Free** restricts the server from querying $M_c$. While this assumption greatly limits the reconstruction ability of the adversarial party, there are no limitations on the auxiliary information available to the server besides the actual weights of $M_c$. He et al. (2019) and Ding et al. (2023) have both shown that $M_c$ is still vulnerable to leaking private information about the raw input when the server has knowledge of the model architecture and the training dataset. Our work adopts this setting.

3 ANALYSIS ON SPLITTING BETWEEN CLIENT AND SERVER

Previous work by He et al. (2019) provided a systematic analysis of the difficulty and quality of recovery under the above-mentioned model inversion attack. Their work analyzed the effects on reconstruction quality of loosening the assumptions about the auxiliary information available to the server (DNN architecture and training dataset), as well as of choosing different split points ($h$) between the client and the server. However, their work was based on a simple 6-layer convolutional neural network (CNN), which is seldom used in today's services. In this section, we extend their analysis to more practical architectures, namely ResNet-18 and VGG-16.

One important finding from the studies of He et al. (2019) and Ding et al. (2023) is that increasing the depth ($h$) of $M_{c,h}$ leads to worse image reconstruction quality for the adversarial attacker in MIA. At the same time, part of the algorithm of Zhou et al. (2022) lets the client, instead of the server, compute the Softmax function of $M(x, \theta)$ at the last layer. The success of their algorithm raises the possibility of utilizing a second split point to enhance privacy protection. Under the threat model defined in Section 2.2, we provide visual evaluations of the quality of images reconstructed by MIA, as shown in Figures 2 and 3.

Figure 2: Effect of the first and second split points on VGG-16. The vertical axis is the first split point in terms of layers, and the horizontal axis is the second split point, counting layers backwards.

For the VGG-16 architecture, the first $h$ layers of $M$ belong to the client. For the ResNet-18 architecture with 4 blocks, $h$ represents the number of residual blocks computed by the client, with $h=1$ meaning the client computes only the first convolutional layer. As shown in the figures, our experiments align with the results of He et al. (2019) and Ding et al. (2023): the deeper the first split point, the worse the reconstructed image. However, the experiments do not support the idea of Zhou et al. (2022); the second split point does not increase the difficulty of reconstruction under MIA.
It is also noteworthy that while our experiments indicate that image reconstruction quality falls below human-level recognition after $h=6$ for VGG-16 and $h=2$ for ResNet-18, this should not be treated as a privacy guarantee. This is because we use a standard decoder for $\tilde{M}_{c,h}(x, \theta_{c,h})$, whereas more powerful generative decoders exist that could potentially do better at reconstructing images (Khosravy et al., 2022). At the same time, the reconstruction quality depends on the task: for example, Lu et al. (2022) are able to reconstruct high-quality facial images with larger $h$, and Ding et al. (2023) are more successful with vehicle reconstruction. We also provide a brief experiment on MIA for an NLP task in Appendix A.1.

4 Ensembler ARCHITECTURE

While it is possible to protect sensitive data by increasing the depth ($h$), as shown in the previous section, such depth is often impractical for edge devices due to the computational demands involved. In this section, we present Ensembler, a framework that augments the privacy of the intermediate information sent by the client without requiring extra computational effort from the client during inference. Ensembler is highly extensible, and it is compatible with existing works that apply noise and perturbations during both DNN training and inference. We go over the detailed architecture in Section 4.1 and the training stages of this new framework in Section 4.2.

4.1 ARCHITECTURE OVERVIEW

As illustrated in Fig. 4, Ensembler leverages model ensembling on the server to generate a regularized secret $M_{c,h}$ that is hard for the server to reconstruct. It consists of three parts: standard client layers, $N$ different server nets, and a selector. During the collaborative inference pipeline, the client computes $M_{c,h}(x)$ and transmits the intermediate output to the server. The server then feeds the intermediate output through each of the $M^i_s$ and reports the output of each $M^i_s$ to the client. The client then employs a selector on the feedback from the server, which activates the results of $P$ out of the $N$ nets and combines them. As a final step, it performs the computation of the last $t$ layers to classify the input. We introduce these components separately in this section.

4.1.1 CLIENT LAYERS

During collaborative inference, part of the DNN is run by the client. Under the proposed framework, the client is responsible for running the first $h$ layers $M_{c,h}$ and the last $t$ layers $M_{c,t}$. These layers are the same as the client part of a typical collaborative inference framework: $M_{c,h}$ takes the raw input (often an image) and outputs the intermediate result, whereas $M_{c,t}$ takes the output from the server as input and outputs the likelihood of each class.

Figure 4: Illustration of the proposed architecture, Ensembler. Different from traditional CI pipelines, it deploys N neural networks on the server and uses a selector to activate P of the N nets.

4.1.2 Server Nets

On the server side, the network consists of N copies of the DNN, with each $M_s^i$ corresponding to what the server would normally process in a typical collaborative inference pipeline. That is, each $M^i : \{M_{c,h}, M_s^i, M_{c,t}\}$ is a valid pipeline for the inference task. Upon receiving the input $M_{c,h}(x)$ from the client, the server feeds it into each $M_s^i$, producing N representations of the hidden features used for classification.

4.1.3 Selector

To increase the difficulty of the server reconstructing the model and recovering the raw input, a selector is applied before the last layers run by the client. The selector serves as a secret activation function, which activates P of the N nets according to Equation (1), where $S_i$ is the activation from the selector and $\odot$ is element-wise multiplication. For simplicity, we consider $S_i = 1/P$ if $M_s^i$ is selected by the client, and $S_i = 0$ otherwise. A minimal sketch of this operation follows.

$$\text{Selector}[M_s(x)] = \text{Concat}[S_1 \odot M_s^1(x), S_2 \odot M_s^2(x), ..., S_N \odot M_s^N(x)] \quad (1)$$
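As a concrete illustration, Equation (1) can be written as follows; the shapes and the binary mask are illustrative assumptions rather than the actual implementation.

```python
import jax.numpy as jnp

def selector(server_outputs, mask, P):
    """Equation (1): scale each server net's output by S_i and concatenate.

    server_outputs: list of N arrays, one per server net.
    mask: length-N binary vector held secretly by the client (sums to P).
    """
    scaled = [(m / P) * out for m, out in zip(mask, server_outputs)]
    return jnp.concatenate(scaled, axis=-1)

# Illustrative usage with N = 4 server nets, P = 2 activated
outs = [jnp.full((1, 8), float(i)) for i in range(4)]
mask = jnp.array([1.0, 0.0, 1.0, 0.0])
combined = selector(outs, mask, P=2)  # shape: (1, 32)
```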
4.2 Training Stage

As mentioned above, the design choices of Ensembler aim to produce a regularized $M_{c,h}$ such that a shadow network based on $M_s$ yields an incorrect estimate of $M_{c,h}$. To achieve this goal, the proposed architecture uses a two-stage training pipeline. In the first stage, it obtains N distinct models $M^i(x, \theta^i) : \{M_{c,h}^i, M_s^i, M_{c,t}^i\}$ such that a shadow network that accurately simulates $M_{c,h}^i$ cannot simulate $M_{c,h}^j$. In our approach, we simply introduce a Gaussian noise layer after the intermediate output $M_{c,h}^i(x)$. The objective function in this stage minimizes the cross-entropy loss in the form of Equation (2), where $N(0, \sigma^i)$ is a fixed Gaussian noise added to the intermediate output. The choice of $\sigma$ depends on the quality of training; given the inherent redundancy in the parameters of DNNs, adding moderate noise will not affect the classification accuracy. For example, adding noise of $N(0, 0.1)$ after the first layer of a ResNet-18 architecture on the CIFAR-10 image classification task results in less than 1% accuracy loss. We choose Gaussian noise for its simplicity of implementation, and we argue that any method that leads to distinctive $M_{c,h}$ is sufficient for this step. The step is nonetheless needed to ensure that each model has different parameter weights; otherwise, all N models would be identical, and the framework would fail its purpose of protecting privacy.

$$L_{\theta}^i = -\sum_j y_j \log M_{c,t}^i(M_s^i[M_{c,h}^i(x) + N(0, \sigma^i)])_j \quad (2)$$

After the first training stage, N different DNNs are obtained. The proposed framework then selects P of the N nets and re-trains an "ensemblized" network, Ensembler, which has been outlined in the previous section. During this training, the parameters of $M_s$ are frozen; this step ensures the performance of the model during inference. While the training process is like that of any typical neural network, it is noteworthy that we add a regularization term to the standard cross-entropy loss to force $M$ to learn joint $M_{c,h}$ and $M_{c,t}$ representations from all P server nets. The custom loss function, shown in Equation (3), adds a high penalty if the gradient descends only in the direction of some single server net $M^i_s$. In the equation, CS is the cosine similarity and $\lambda$ is a hyper-parameter controlling the regularization strength. Since this is an end-to-end training process, any perturbation-based algorithm can be seamlessly combined with the proposed framework in this step to provide further privacy protection. For our experiments, we choose simple Gaussian noise to stay consistent with the first step. A sketch of this loss is given below.

$$L_\theta = -\sum_{i=1}^{N} \sum_{j} y_j \log M_{c,t}(\text{Selector}[M^i_s[M_{c,h}(x) + N(0, \sigma)]])_j + \lambda \max_{i \in P} \text{CS}(M_{c,h}(x), M^i_{c,h}(x)) \quad (3)$$
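A minimal sketch of the second-stage loss in Equation (3). The function assumes the forward passes have already been computed; all names and shapes are illustrative assumptions rather than the actual implementation.

```python
import jax
import jax.numpy as jnp

def cosine_similarity(a, b):
    a, b = a.reshape(-1), b.reshape(-1)
    return jnp.dot(a, b) / (jnp.linalg.norm(a) * jnp.linalg.norm(b) + 1e-8)

def stage2_loss(z, logits, y, stage1_heads_z, lam):
    """z: joint head output M_{c,h}(x) (plus noise); logits: tail output over the
    selector-combined server features; stage1_heads_z: the M^i_{c,h}(x) outputs of
    the stage-1 heads; lam: regularization strength lambda."""
    ce = -jnp.sum(y * jax.nn.log_softmax(logits))
    # Penalize the joint head for collapsing onto any single stage-1 head.
    penalty = jnp.max(jnp.stack([cosine_similarity(z, zi) for zi in stage1_heads_z]))
    return ce + lam * penalty
```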
4.3 Intuition behind Ensembler

In this section, we discuss the intuition behind the proposed architecture. Since the attacker constructs shadow networks to simulate the behavior of the client's private networks, the exact purpose of the two-stage training algorithm is to ensure that the attacker is not able to learn the selector with its shadow network. Through the first stage of training, N different models with distinctive weights are obtained, all of which make comparable predictions on the dataset. An arbitrary ensemble of P out of the N networks forms a new network whose $M_{c,h}$ is distinct from that of a network under a different combination. That is, since the ensemble $M_s^i{+}M_s^j$ differs from $M_s^i{+}M_s^k$, the $M_{c,h}$ obtained from $M_s^i{+}M_s^j$ differs from the $M_{c,h}$ obtained from $M_s^i{+}M_s^k$, where $+$ denotes the ensembling of server nets. Thus, with N networks in the first stage of the algorithm, there are $2^N$ different possible $M_{c,h}$ that could be valid answers for the shadow network. When the attacker tries to train an adaptive attacker, the shadow network learns an arbitrary representation $\tilde{M}_{c,h}$ and an arbitrary $S$. Such a combination is a valid choice in terms of classification accuracy but is nonetheless incorrect compared to the actual $M_{c,h}$.

4.4 Time complexity of Ensembler

From the previous section, it is clear that the time complexity of the proposed framework is N times that of an individual network on a single-core GPU, with negligible extra communication cost between the client and the server. However, it is worth emphasizing that since the $M^i_s$ are independent of each other, the proposed framework is friendly to parallel execution and even multi-party (multi-server) inference. Under those settings, the theoretical factor of N is replaced by lower practical time costs, and the framework may even become uninvertible. On the other hand, since the server is not able to adaptively learn the client's representation, its only option is to exhaustively try all combinations, which takes $2^N$ times as long as reconstructing a single network. Here, we provide a semi-formal argument for the exponential complexity of reconstructing the best-quality image under Ensembler's protection.

**Lemma 1** Reconstructing the image from a single neural network $M^i_s$ is not viable. Any shadow network obtained through a single $M^i_s$ needs to first simulate the behavior of $M^i_{c,h}$. In this case, if there exists some $\tilde{M}^i_{c,h}$ that simulates $M_{c,h}$, the training loss of the second training phase (Equation (3)) is not optimized, due to the regularization term.

**Lemma 2** Reconstructing the image from an incorrect choice of $M_{activated} = [M^i_s, ..., M^j_s]$ is not viable. Since the $g_i \in N(0, \sigma)$ are independent of each other, the N different $M^i(x, \theta^i)$ obtained in the first training stage are also distinctive. Including an incorrect $M^i_s$ in the shadow network construction leads the model to regularize in an incorrect direction.

**Conclusion** The time complexity of reconstructing the best-quality input from N server nets is theoretically $2^N - 1$.
5 EXPERIMENTS AND EVALUATIONS

5.1 ARCHITECTURE DETAILS

In the experiments, we consider the strictest setting, with h=1 and t=1 on a ResNet-18 architecture, for three image classification tasks: CIFAR-10, CIFAR-100, and a subset of CelebA-HQ (Zhu et al., 2022). That is, the client holds only the first convolutional layer and the last fully-connected layer, which is also the minimum requirement for our framework. For CIFAR-10, the intermediate output's feature size is [64x16x16]; for CIFAR-100, we remove the MaxPooling layer, and the intermediate output's feature size is [64x32x32]; for CelebA-HQ, the intermediate output's feature size is [64x64x64]. We consider the ensembled network to contain 10 neural networks (N=10), each being a ResNet-18. The selector secretly selects {4, 3, 5} of the 10 nets (P={4, 3, 5}) for the three tasks, respectively. The adversarial server is aware of the architecture and the training dataset. It constructs a shadow network $\tilde{M}_{c,h}$ consisting of three convolutional layers with 64 channels each, the first simulating the unknown $M_{c,h}$ and the other two simulating the Gaussian noise added to the intermediate output. It also constructs $\tilde{M}_{c,t}$ with the same shape as $M_{c,t}$. The adaptive shadow network learns from all 10 server nets with an additional activation layer that is identical to the selector. For any noise added to the intermediate outputs during the training and inference stages, we consider a fixed Gaussian noise $g \sim N(0, 0.1)$.

5.2 EXPERIMENT SETUP

To evaluate the effectiveness of our approach, we employ three key metrics: Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR), and visual assessment. The first two metrics offer quantitative evaluations of the reconstruction quality of MIA, with higher SSIM and PSNR values indicating better reconstruction quality. As our proposed architecture operates in parallel with existing perturbation methods, we consider the following baseline approaches for comparison on CIFAR-10: no protection (NONE), adding small noise to a single network without retraining (Shredder; Mireshghallah et al., 2020), adding large noise and retraining a single network (Single), and adding a dropout layer to the single network or the ensembled network with only one round of training (DR-single and DR-ensemble). The dropout baselines are included to differentiate our architecture from architectures with dropout layers, as the selector component does look very similar to a dropout layer. For the other two datasets, we select some of the most important benchmarks for comparison. For CelebA-HQ, since the intermediate output's feature size is too large for the simple Gaussian filter to be visually effective, we add an untrained random $M_{c,h}$ (Random) to illustrate the maximum capacity of the Gaussian filter at the cost of accuracy. For the proposed architecture, we evaluate the performance of both reconstruction from a single neural network (N=1) and reconstruction using the entire network (Adaptive). For reconstruction of the ensembled nets using a single neural network, we report the best reconstruction result among the N nets. For Section 3, we implement the experiments on a server with four A6000 GPUs using Python and PyTorch. For Section 4, we use a mixture of this server and Google Colab, which provides one T4 GPU.
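For reference, the SSIM and PSNR metrics above can be computed with standard tooling; the snippet below uses scikit-image, which is an assumed choice for illustration (the paper does not specify its implementation).

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def reconstruction_quality(original, reconstructed):
    """original, reconstructed: float images in [0, 1] with shape (H, W, C)."""
    s = structural_similarity(original, reconstructed, channel_axis=-1, data_range=1.0)
    p = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
    return s, p

# Illustrative usage on random data
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)
print(reconstruction_quality(img, noisy))
```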
5.3 COMPARISON OF RESULTS

We provide the quantitative evaluations for CIFAR-10 in Table 1 and the visual assessments in Figure 5 in Appendix A.2.1. It can be seen that the proposed framework significantly increases the reconstruction difficulty for the adversarial party. Ensembler incurs a 2.13% drop in classification accuracy compared to the model without any protection, which is marginal compared to its advantage in protecting the privacy of the client's raw input. From the figure, it is clear that the reconstructed images are hardly recognizable by human-level interpretation. In addition, we provide the quantitative evaluations for CIFAR-100 and CelebA-HQ in Tables 2 and 3 and the visual assessments in Appendices A.2.2 and A.2.3. The proposed framework remains effective when the feature size increases. In particular, the framework safeguards the model's prediction ability while protecting the input images on par with the random head network. Although the visual assessments show that increasing the feature size leads to better visual recognition, we argue that this is inevitable with simple Gaussian noise: notably, the shadow network with the best PSNR is able to raise the reconstruction quality of even a totally mismatched random $M_{c,h}$ beyond the human-recognition level.

Table 1: Quantitative evaluations of the different defense mechanisms on CIFAR-10. The last three rows are the proposed framework. For SSIM and PSNR, lower values mean worse reconstruction quality.

| Name | Change in accuracy | SSIM | PSNR |
|--------------------|-------------------|--------|---------|
| NONE | 0.00% | 0.4363 | 12.2678 |
| Shredder | -5.68% | 0.5359 | 10.4033 |
| Single | 2.15% | 0.3921 | 7.5266 |
| DR-single | 2.70% | 0.3453 | 6.6674 |
| DR-ensemble (best SSIM) | 1.42% | 0.373 | 7.3493 |
| DR-ensemble (best PSNR) | 1.42% | 0.3232 | 7.9598 |
| Adaptive | -2.13% | 0.0555 | 5.981 |
| N=1 (best SSIM) | -2.13% | 0.2889 | 4.865 |
| N=1 (best PSNR) | -2.13% | 0.2221 | 5.5348 |

Table 2: Quantitative evaluations of the different defense mechanisms on CIFAR-100. The last two rows are the proposed framework. For SSIM and PSNR, lower values mean worse reconstruction quality.

| Name | Change in accuracy | SSIM | PSNR |
|--------------------|-------------------|--------|---------|
| Single | -0.97% | 0.4558 | 8.5225 |
| Adaptive | 0.31% | 0.0864 | 4.7715 |
| N=1 (best SSIM & best PSNR) | 0.31% | 0.2636 | 5.0741 |

Table 3: Quantitative evaluations of the different defense mechanisms on CelebA-HQ (Zhu et al., 2022). The last two rows are the proposed framework. For SSIM and PSNR, lower values mean worse reconstruction quality.

| Name | Change in accuracy | SSIM | PSNR |
|--------------------|-------------------|--------|---------|
| Single | -1.24% | 0.2650 | 14.3126 |
| Random (best SSIM & best PSNR) | -65.19% | 0.1387 | 12.8150 |
| Adaptive | 2.39% | 0.0897 | 13.3698 |
| N=1 (best SSIM & best PSNR) | 2.39% | 0.1791 | 12.0645 |

6 CONCLUSION

In this paper, we present two contributions to the research community of PPML and collaborative inference. First, we extend the discussion on choosing the split points between client and server under collaborative inference. Our experiments show that deeper split points yield lower-quality reconstructions, while the introduction of a second split point offers little to no improvement. Furthermore, we introduce a novel framework, Ensembler, designed to significantly increase the complexity of reconstruction for adversarial parties. Ensembler aligns seamlessly with existing methods that introduce diverse forms of noise to intermediate outputs, potentially yielding robust and adaptable architectures if combined with them.
Our experiments highlight the substantial deterioration in reconstruction quality for images safeguarded by Ensembler when compared to those without its protection. REFERENCES Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848. Shiwei Ding, Lan Zhang, Miao Pan, and Xiaoyong Yuan. Patrol: Privacy-oriented pruning for collaborative inference against model inversion attacks, 2023. Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4829–4837, 2016. doi: 10.1109/CVPR.2016.522. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Niv Haim, Gal Vardi, Gilad Yehudai, michal Irani, and Ohad Shamir. Reconstructing training data from trained neural networks. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=Sxk8Bse3RKO. Zecheng He, Tianwei Zhang, and Ruby B. Lee. Model inversion attacks against collaborative inference. In Proceedings of the 35th Annual Computer Security Applications Conference, ACSAC ’19, pp. 148–162, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450376280. doi: 10.1145/3359789.3359824. URL https://doi.org/10.1145/3359789.3359824. Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, and Xuyun Zhang. Membership inference attacks on machine learning: A survey. ACM Comput. Surv., 54(11s), sep 2022. ISSN 0360-0300. doi: 10.1145/3523273. URL https://doi.org/10.1145/3523273. Xia Hu, Lingyang Chu, Jian Pei, Weiqing Liu, and Jiang Bian. Model complexity of deep learning: A survey. Knowl. Inf. Syst., 63(10):2585–2619, oct 2021. ISSN 0219-1377. doi: 10.1007/s10115-021-01605-0. URL https://doi.org/10.1007/s10115-021-01605-0. Huseyin A. Inan, Osman Ramadan, Lukas Wutschitz, Daniel Jones, Victor Ruhle, James Withers, and Robert Sim. Privacy analysis in language models via training data leakage report. CoRR, abs/2101.05405, 2021. URL https://arxiv.org/abs/2101.05405. John M. 
Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Andy Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David A. Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with alphafold. Nature, 596:583 – 589, 2021. URL https://api.semanticscholar.org/CorpusID:235959867. M. Kahla, S. Chen, H. Just, and R. Jia. Label-only model inversion attacks via boundary repulsion. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15025–15033, Los Alamitos, CA, USA, jun 2022. IEEE Computer Society. doi: 10.1109/CVPR52688.2022.01462. URL https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01462.
PczQtTsTIX
At the end of the introduction, you say that your success with batch norm “contradicts” another paper [2] that did not find batch norm to work well. Why do you think you were able to achieve better results? Is it because you removed the target network, or is there another reason?
CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity

Aditya Bhatt*1,4 Daniel Palenicek*1,2 Boris Belousov1,4 Max Argus3 Artemij Amiranashvili3 Thomas Brox3 Jan Peters1,2,4,5

*Equal contribution 1Intelligent Autonomous Systems, TU Darmstadt 2Hessian.AI 3University of Freiburg 4German Research Center for AI (DFKI) 5Centre for Cognitive Science, TU Darmstadt

aditya.bhatt@dfki.de, daniel.palenicek@tu-darmstadt.de

Abstract

Sample efficiency is a crucial problem in deep reinforcement learning. Recent algorithms, such as REDQ and DroQ, found a way to improve the sample efficiency by increasing the update-to-data (UTD) ratio to 20 gradient update steps on the critic per environment sample. However, this comes at the expense of a greatly increased computational cost. To reduce this computational burden, we introduce CrossQ: a lightweight algorithm for continuous control tasks that makes careful use of Batch Normalization and removes target networks to surpass the current state-of-the-art in sample efficiency while maintaining a low UTD ratio of 1. Notably, CrossQ does not rely on advanced bias-reduction schemes used in current methods. CrossQ's contributions are threefold: (1) it matches or surpasses current state-of-the-art methods in terms of sample efficiency, (2) it substantially reduces the computational cost compared to REDQ and DroQ, (3) it is easy to implement, requiring just a few lines of code on top of SAC.

1 Introduction

Sample efficiency is a crucial concern when applying Deep Reinforcement Learning (Deep RL) methods on real physical systems. One of the first successful applications of Deep RL to the challenging problem of quadruped locomotion was achieved using Soft Actor-Critic (SAC, Haarnoja et al. (2018a)), allowing a robot dog to learn to walk within 2h of experience (Haarnoja et al., 2018b). Subsequently, it was noted that the critic in SAC may be underfitted, as only a single gradient update step on the network parameters is performed for each environment step. Therefore, Randomized Ensembled Double Q-Learning (REDQ, Chen et al. (2021)) was proposed, which increased this number of gradient steps, termed the update-to-data (UTD) ratio. In addition, Dropout Q-functions (DroQ, Hiraoka et al. (2021)) improved the computational efficiency of REDQ while maintaining the same sample efficiency by replacing its ensemble of critics with dropout. This enabled learning quadruped locomotion in a mere 20min (Smith et al., 2022). Thus, REDQ and DroQ represent the state-of-the-art in terms of sample efficiency in Deep RL for continuous control. Importantly, both REDQ and DroQ showed that naively increasing the UTD ratio of SAC does not perform well due to the critic networks' Q value estimation bias. Therefore, ensembling techniques were introduced for bias reduction (an explicit ensemble in REDQ and an implicit ensemble via dropout in DroQ), which allowed increasing the UTD to 20 critic updates per environment step. Higher UTD ratios improve sample efficiency at the price of increased computational cost, which manifests in higher wallclock time and energy consumption.

Figure 1: CrossQ training performance aggregated over environments. CrossQ is more sample efficient (top) while being significantly more computationally efficient (bottom) in terms of gradient steps, thanks to a low UTD = 1. Following Agarwal et al. (2021), we normalize performance by the maximum of REDQ in each environment.
It is, therefore, desirable to seek alternative methods that achieve the same or better sample efficiency at a lower computational cost, e.g., by using lower UTD ratios. It turns out that even UTD = 1 can perform surprisingly well if other algorithmic components are adjusted appropriately. In this paper, we introduce CrossQ, a lightweight algorithm that achieves superior performance by removing much of the algorithmic design complexity that was added over the years, culminating in the current state-of-the-art methods. First, it removes target networks, an ingredient widely believed to slow down training in exchange for stability (Mnih et al., 2015; Lillicrap et al., 2016; Kim et al., 2019; Fan et al., 2020). Second, we find that Batch Normalization variants (Ioffe & Szegedy (2015); Ioffe (2017)), when applied in a particular manner, effectively stabilize training and significantly improve sample efficiency. This contradicts others' observations that it hurts learning performance in Deep RL, e.g., Hiraoka et al. (2021). Third, CrossQ uses wider critic layers, motivated by prior research on the ease of optimization of wider networks (Ota et al., 2021). In addition to the first two improvements, wider networks enable even higher returns.

Contributions. (1) We present the CrossQ algorithm, which matches or surpasses the current state-of-the-art in sample efficiency for model-free off-policy RL on continuous control environments with state observations, while being multiple times more computationally efficient; (2) by removing target networks, we are able to successfully accelerate off-policy Deep RL with BatchNorm; (3) we provide empirical investigations of, and hypotheses for, CrossQ's success. CrossQ's changes mainly pertain to the deep network architecture of SAC; therefore, our study is chiefly empirical: through a series of ablations, we isolate and study the contributions of each part. We find that CrossQ matches or surpasses the state-of-the-art algorithms in sample efficiency while being up to 4× faster in terms of wallclock time, without requiring critic ensembles, target networks, or high UTD ratios. We provide the CrossQ source code at github.com/adityab/CrossQ.

2 BACKGROUND

2.1 Off-policy Reinforcement Learning and Soft Actor-Critic

We consider a discrete-time Markov Decision Process (MDP; Puterman (2014)), defined by the tuple \((S, A, P, R, \rho, \gamma)\) with state space \(S\), action space \(A\), transition probability \(s_{t+1} \sim P(\cdot|s_t, a_t)\), reward function \(r_t = R(s_t, a_t)\), initial state distribution \(s_0 \sim \rho\), and discount factor \(\gamma \in [0, 1)\). RL describes the problem of an agent learning an optimal policy \(\pi\) for a given MDP. At each time step \(t\), the agent receives a state \(s_t\) and interacts with the environment according to its policy \(\pi\). We focus on the Maximum Entropy RL setting (Ziebart et al., 2008), where the agent's objective is to find the optimal policy \(\pi^*\) that maximizes the expected cumulative reward while keeping the entropy \(H\) high:

\[
\pi^* = \arg\max_{\pi} \mathbb{E}_{s_0 \sim \rho} \left[ \sum_{t=0}^{\infty} \gamma^t \big(r_t + \alpha H(\pi(\cdot|s_t))\big) \right].
\]

The action-value function is defined by \(Q(s, a) = \mathbb{E}_{\pi, P} \left[ \sum_{t=0}^{\infty} \gamma^t (r_t - \alpha \log \pi(a_t|s_t)) \,\middle|\, s_0 = s, a_0 = a \right]\) and describes the expected return when taking action \(a\) in state \(s\). Soft Actor-Critic (SAC, Haarnoja et al. (2018a)) is a popular algorithm that solves the MaxEnt RL problem.
SAC parametrizes the Q function and policy as neural networks and trains two independent versions of the Q function, using the minimum of their estimates to compute the regression targets for Temporal Difference (TD) learning. This clipped double-Q trick, originally proposed by Fujimoto et al. (2018) in TD3, helps in reducing the potentially destabilizing overestimation bias inherent in approximate Q-learning (Hasselt, 2010).

2.2 High update-to-data Ratios, REDQ, and DroQ

Despite its popularity among practitioners and as a foundation for other more complex algorithms, SAC leaves much room for improvement in terms of sample efficiency. Notably, SAC performs exactly one gradient-based optimization step per environment interaction. SAC's UTD = 1 setting is analogous to simply training for fewer epochs in supervised learning. Therefore, in recent years, gains in sample efficiency within RL have been achieved through increasing the UTD ratio (Janner et al., 2019; Chen et al., 2021; Hiraoka et al., 2021; Nikishin et al., 2022). Different algorithms, however, vary substantially in their approaches to achieving high UTD ratios. Janner et al. (2019) use a model to generate synthetic data, which allows for more overall gradient steps. Nikishin et al. (2022) adopt a simpler approach: they increase the number of gradient steps while periodically resetting the policy and critic networks to fight premature convergence to local minima. We now briefly outline the two high-UTD methods to which we compare CrossQ.

```python
import jax
import jax.numpy as jnp

# `policy`, `Q`, `alpha`, and `gamma` are assumed to be defined in the surrounding scope.
def critic_loss(Q_params, policy_params, obs, acts, rews, next_obs):
    next_acts, next_logpi = policy.apply(policy_params, next_obs)
    # Concatenated forward pass
    all_q, new_Q_params = Q.apply(
        Q_params,
        jnp.concatenate([obs, next_obs]),
        jnp.concatenate([acts, next_acts]),
    )
    # Split all_q predictions and stop gradient on next_q
    q, next_q = jnp.split(all_q, 2)
    next_q = jnp.min(next_q, axis=0)  # min over double Q function
    next_q = jax.lax.stop_gradient(next_q - alpha * next_logpi)
    return jnp.mean((q - (rews + gamma * next_q))**2), new_Q_params
```

Figure 2: CrossQ critic loss in JAX. The CrossQ critic loss is easy to implement on top of an existing SAC implementation. One just adds the batch normalization layers into the critic network and removes the target network. As we are now left with only the critic network, one can simply concatenate observations and next observations, as well as actions and next actions, along the batch dimension, perform a joint forward pass, and split up the batches afterward. Combining two forward passes into one grants a small speed-up, as it requires only one CUDA call instead of two.

REDQ. Chen et al. (2021) find that merely raising SAC's UTD ratio hurts performance. They attribute this to the accumulation of the learned Q functions' estimation bias over multiple update steps (despite the clipped double-Q trick), which destabilizes learning. To remedy this bias more strongly, they increase the number of Q networks from two to an ensemble of 10. Their method, called REDQ, permits stable training at high UTD ratios of up to 20.

DroQ. Hiraoka et al. (2021) note that REDQ's ensemble size, along with its high UTD ratio, makes training computationally expensive. They instead propose using a smaller ensemble of Q functions equipped with Dropout (Srivastava et al., 2014), along with Layer Normalization (Ba et al., 2016) to stabilize training in response to the noise introduced by Dropout.
Called DroQ, their method is computationally cheaper than REDQ, yet still expensive due to its UTD ratio of 20.

3 THE CROSSQ ALGORITHM

In this paper, we challenge the current trend of high UTD ratios and demonstrate that we can achieve competitive sample efficiency at a much lower computational cost with a UTD = 1 method. CrossQ is our new state-of-the-art off-policy actor-critic algorithm. Based on SAC, it uses purely network-architectural engineering insights from deep learning to accelerate training. As a result, it crosses out much of the algorithmic design complexity that was added over the years and which led to the current state-of-the-art methods. In doing so, we present a much simpler yet more efficient algorithm. In the following paragraphs, we introduce the three design choices that constitute CrossQ.

3.1 DESIGN CHOICE 1: REMOVING TARGET NETWORKS

Mnih et al. (2015) originally introduced target networks to stabilize the training of value-based off-policy RL methods, and today, most algorithms require them (Lillicrap et al., 2016; Fujimoto et al., 2018; Haarnoja et al., 2018a). SAC updates the critics' target networks with Polyak averaging

\[ \theta^o \leftarrow (1 - \tau) \theta^o + \tau \theta, \tag{1} \]

where \( \theta^o \) are the target network parameters and \( \theta \) are those of the trained critic. Here, \( \tau \) is the target network smoothing coefficient; with \( \tau = 1 \) (equivalent to cutting out the target network), SAC training can diverge, leading to explosive growth in \( \theta \) and the \( Q \) predictions. Target networks stabilize training by explicitly delaying value function updates, arguably slowing down online learning (Plappert et al., 2018; Kim et al., 2019; Morales, 2020). Recently, Yang et al. (2021) found that critics with Random Fourier Features can be trained without target networks, suggesting that the choice of layer activations affects the stability of training. Our experiments in Section 4.4 uncover an even simpler possibility: using bounded activation functions or feature normalizers is sufficient to prevent critic divergence in the absence of target networks, whereas the common choice of \texttt{relu} without normalization diverges. While others have used normalizers in Deep RL before, we are the first to identify that they make target networks redundant. Our next design choice exploits this insight to obtain an even greater boost.

### 3.2 Design Choice 2: Using Batch Normalization

BatchNorm has not yet seen wide adoption in value-based off-policy RL methods, despite its success and widespread use in supervised learning (He et al., 2016; Santurkar et al., 2018); attempts at adopting it have fared poorly. Lillicrap et al. (2016) use BatchNorm layers on the state-only representation layers in the DDPG critic but find that it does not help significantly. Others use BatchNorm in decoupled feature extractors for Deep RL networks (Ota et al., 2020; 2021), but not in critic networks. Hiraoka et al. (2021) report that using BatchNorm in critics causes training to fail in DroQ. We find that BatchNorm, when used carefully and combined with the removal of target networks, performs surprisingly well, trains stably, and is, in fact, algorithmically simpler than current methods.
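For reference in the discussion that follows, the update in Equation 1 is a one-line tree map over parameter pytrees; with target networks in place, the same Polyak update is conventionally applied to the critic's BatchNorm running statistics as well. This is a minimal sketch, and the function and argument names are assumptions.

```python
import jax

def polyak_update(target_params, params, tau):
    # Equation 1: theta_target <- (1 - tau) * theta_target + tau * theta,
    # applied leaf-by-leaf over arbitrarily nested parameter pytrees.
    return jax.tree_util.tree_map(
        lambda t_old, t_new: (1.0 - tau) * t_old + tau * t_new,
        target_params, params)
```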
First, we explain why BatchNorm needs to be used carefully. Within the critic loss \( \big( Q_\theta(s, a) - (r + \gamma Q_{\theta^o}(s', a')) \big)^2 \), predictions are made for two differently distributed batches of state-action pairs: \((s, a)\) and \((s', a')\), where \(a' \sim \pi_\phi(\cdot|s')\) is sampled from the current policy, while \(a\) originates from old behavior policies in the replay buffer. Just like the target network weights, the target network's BatchNorm statistics are updated by Polyak averaging from the live network (Equation 1). The BatchNorm running statistics of the live network, which were estimated from batches of \((s, a)\) pairs, will clearly not have seen samples \((s', \pi_\phi(s'))\) and will further not match their statistics. In other words, the state-action inputs evaluated by the target network are out-of-distribution, given its mismatched BatchNorm running statistics. It is well known that the prediction quality of BatchNorm-equipped networks degrades in the face of such test-time distribution shifts (Pham et al., 2022; Lim et al., 2023).

Removing the target network provides an elegant solution. With the target network removed, we can concatenate both batches and feed them through the \(Q\) network in a single forward pass, as illustrated in Figure 3 and shown in code in Figure 2. This simple trick ensures that BatchNorm's normalization moments arise from the union of both batches, corresponding to a 50/50 mixture of their respective distributions. Such normalization layers do not perceive the \((s', \pi_\phi(s'))\) batch as being out-of-distribution. This small change to SAC allows the safe use of BatchNorm and greatly accelerates training. We are not the only ones to identify this way of using BatchNorm to tackle the distribution mismatch; other works in supervised learning, e.g., Test-Time Adaptation (Lim et al., 2023), EvalNorm (Singh & Shrivastava, 2019), and Four Things Everyone Should Know to Improve Batch Normalization (Summers & Dinneen, 2020), also use mixed moments to bridge this gap.

Figure 5: CrossQ sample efficiency. Compared to REDQ and DroQ (UTD = 20), CrossQ (UTD = 1) performs either comparably, better, or—for the more challenging Humanoid tasks—substantially better. These results directly transfer to TD3 as the base algorithm in CrossQ (TD3). We plot the interquartile mean (IQM) and 70% quantile interval of the episodic returns over 10 seeds.

In practice, CrossQ's actor and critic networks use Batch Renormalization (BRN, Ioffe (2017)), an improved version of the original BN (Ioffe & Szegedy, 2015) that is robust to long-term training instabilities originating from minibatch noise. After a warm-up period, BRN normalizes using the less noisy running statistics instead of the noisy minibatch estimates used by BN. In the rest of this paper, all statements about "BatchNorm" apply equally to both versions unless explicitly disambiguated as BN or BRN.

3.3 Design Choice 3: Wider Critic Networks

Following Ota et al. (2021), we find that wider critic network layers in CrossQ lead to even faster learning. As we show in our ablations in Section 4.4, most performance gains originate from the first two design choices; however, wider critic networks further boost performance, helping to match or outperform the sample efficiency of REDQ and DroQ. We want to stress again that CrossQ, a UTD = 1 method, uses no bias-reducing ensembles, high UTD ratios, or target networks. Despite this, it achieves its competitive sample efficiency at a fraction of the compute cost of REDQ and DroQ (see Figures 5 and 6).
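For concreteness, the following Flax sketch shows a CrossQ-style critic block combining Design Choices 2 and 3. It is an illustration under stated assumptions: the module name and depth are ours, and Flax's plain BatchNorm is used as a stand-in for BRN, so this is not the paper's exact architecture.

```python
import flax.linen as nn
import jax.numpy as jnp

class CrossQStyleCritic(nn.Module):
    width: int = 2048  # wider layers (Design Choice 3)

    @nn.compact
    def __call__(self, obs, act, train: bool):
        x = jnp.concatenate([obs, act], axis=-1)
        for _ in range(2):
            x = nn.Dense(self.width)(x)
            # Normalization in the critic (Design Choice 2). CrossQ uses
            # Batch Renormalization; plain BatchNorm is a stand-in here.
            x = nn.BatchNorm(use_running_average=not train,
                             momentum=0.99)(x)
            x = nn.relu(x)
        return nn.Dense(1)(x)
```

During training, `apply` must be called with `mutable=['batch_stats']` so the running statistics are updated alongside the returned Q values, which is why the loss in Figure 2 returns `new_Q_params`.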
Note that our proposed changes can just as well be combined with other off-policy TD-learning methods, such as TD3, as shown in our experiments in Section 4.1.

4 Experiments and Analysis

We conduct experiments to provide empirical evidence for CrossQ's performance and investigate:
1. Sample efficiency of CrossQ compared to REDQ and DroQ;
2. Computational efficiency in terms of wallclock time and performed gradient steps;
3. Effects of the proposed design choices on performance, via Q function bias evaluations.
In addition, we conduct further ablation studies of the above design choices. We evaluate across a wide range of continuous-control MuJoCo (Todorov et al., 2012) environments, with 10 random seeds each. Following Janner et al. (2019), Chen et al. (2021), and Hiraoka et al. (2021), we evaluate on the same four Hopper, Walker2d, Ant, and Humanoid tasks, as well as two additional tasks: HalfCheetah and the more challenging HumanoidStandup from Gymnasium (Towers et al., 2023). We adapted the JAX version of stable-baselines (Raffin et al., 2021) for our experiments.

Figure 6: **Computational efficiency.** CrossQ trains an order of magnitude faster, taking only 5% of the gradient steps and substantially saving on wallclock time. The dashed horizontal lines are visual aids to better compare the final performance after training for $5 \times 10^6$ environment steps. We plot IQM and 70% quantile interval over 10 seeds. Appendix A.3 provides a table of wallclock times.

### 4.1 Sample Efficiency of CrossQ

Figure 5 compares our proposed CrossQ algorithm with REDQ, DroQ, SAC, and TD3 in terms of their sample efficiency, i.e., average episode return at a given number of environment interactions. As a proof of concept, we also present CrossQ (TD3), a version of CrossQ that uses TD3 instead of SAC as the base algorithm. We perform periodic evaluations during training to obtain the episodic reward. From these, we report the mean and standard deviations over 10 random seeds. All subsequent experiments in this paper follow the same protocol. This experiment shows that CrossQ matches or outperforms the best baseline in all the presented environments except Ant, where REDQ performs better in the early training stage, but CrossQ eventually matches it. On Hopper, Walker, and HalfCheetah, the learning curves of CrossQ and REDQ overlap, with no significant difference. On the harder Humanoid and HumanoidStandup tasks, CrossQ and CrossQ (TD3) both substantially surpass all baselines.

### 4.2 Computational Efficiency of CrossQ

Figure 6 compares the computational efficiency of CrossQ to the baselines. This metric is where CrossQ makes the biggest leap forward. CrossQ requires $20\times$ fewer gradient steps than REDQ and DroQ, which results in roughly $4\times$ faster wallclock speeds (Table 2). The speedup is most pronounced on the more challenging Humanoid and HumanoidStandup tasks. In our view, this is a noteworthy feature. On the one hand, it opens the possibility of training agents in a truly online and data-efficient manner, such as in real-time robot learning. On the other hand, with large computing budgets, the computational efficiency stemming from CrossQ's low UTD = 1 can allow training even larger models for longer than is currently feasible.

### 4.3 Evaluating Q Function Estimation Bias

All methods we consider in this paper are based on SAC and, thus, include the clipped double-Q trick to reduce Q function overestimation bias (Fujimoto et al., 2018).
Chen et al. (2021) and Hiraoka et al. (2021) stress the importance of keeping this bias even lower to achieve their high performance, and they intentionally design REDQ and DroQ to additionally reduce bias with explicit and implicit ensembling, respectively. In contrast, CrossQ outperforms both baselines without any ensembling. Could CrossQ's high performance be attributed to implicitly reducing the bias as a side effect of our design choices? Using the same evaluation protocol as Chen et al. (2021), we compare the normalized Q function bias of all methods.

Figure 7: Q estimation bias does not reliably influence learning performance. Following the analysis of Chen et al. (2021), we plot the IQM and 70% quantile interval of the normalized Q function bias over 10 seeds. REDQ generally has the least bias. CrossQ matches or outperforms DroQ, REDQ, and SAC while showing more Q function bias in all environments. The full set of environments is shown in Fig. 17 in the Appendix.

We find that REDQ and DroQ indeed have lower bias than SAC and significantly lower bias than SAC with UTD = 20. The results for CrossQ are mixed: while its bias trend exhibits a lower mean and variance than SAC's, in some environments its bias is higher than DroQ's, and in others it is lower or comparable. REDQ achieves comparable or worse returns than CrossQ while maintaining the least bias. As CrossQ performs better despite having—perhaps paradoxically—generally higher Q estimation bias, we conclude that the relationship between performance and estimation bias is complex, and one does not seem to have clear implications for the other.

4.4 ABLATIONS

We conduct ablation studies to better understand the impact of the different design choices in CrossQ.

4.4.1 Disentangling the Effects of Target Networks and BatchNorm

CrossQ changes SAC in three ways; of these, two explicitly aim to accelerate optimization: the removal of target networks and the introduction of BatchNorm. Unfortunately, SAC without target networks diverges; therefore, to study the contribution of the first change, we need a way to compare SAC—divergence-free—with and without target networks. Fortunately, we find that such a way exists: according to our supplementary experiments in Appendix A.6, simply using bounded activation functions in the critic appears to prevent divergence. This is a purely empirical observation, and an in-depth study of the influence of activations and normalizers on the stability of Deep RL is beyond the scope of this paper. In this specific ablation, we use tanh activations instead of relu, solely as a tool to make the intended comparison possible.

Figure 8 shows the results of our experiment. The performance of SAC without target networks supports the common intuition that target networks indeed slow down learning to a small extent. We find that the combination of BatchNorm and target networks performs inconsistently, failing to learn anything in half of the environments. Lastly, the configuration of BatchNorm without target networks—the closest to CrossQ—achieves the best aggregate performance, with the boost being significantly bigger than that from removing target networks alone. In summary, even though removing target networks may slightly improve performance in some environments, it is the combination of removing target networks and adding BatchNorm that accelerates learning the most.

Figure 8: The effects of target networks and BatchNorm on sample efficiency.
All SAC variants in this experiment use critics with tanh activations, since these allow divergence-free training without target networks, enabling the comparison. This ablation uses the original BatchNorm (BN, Ioffe & Szegedy (2015)). Removing target networks (−TN) provides only small improvements over the SAC baseline with target nets. BatchNorm with target nets (+BN, green) is unstable. Using BatchNorm after removing target nets (−TN+BN)—the configuration most similar to CrossQ—performs best. We plot IQM return and 70% quantile intervals over 10 seeds.

4.4.2 Ablating the Different Design Choices and Hyperparameters

In this subsection, we examine the contributions of the different CrossQ design choices to show their importance. Figure 9 shows aggregated ablations of these components and various hyperparameters, while Figure 10 ablates the BatchNorm layer itself.

Hyperparameters. CrossQ uses the best hyperparameters obtained from a series of grid searches. Of these, only three differ from SAC's default values. First, we find that reducing the $\beta_1$ momentum of the Adam optimizer (Kingma & Ba, 2015) from 0.9 to 0.5, as well as using a policy delay of 3, have the smallest impact on performance. However, since fewer actor gradient steps reduce compute, this setting is favorable. Second, reducing the critic network's width to 256—the same small size as SAC—lowers performance, yet the resulting model still significantly outperforms SAC. This suggests that practitioners may be able to make use of a larger compute budget, i.e., train efficiently across a range of different network sizes, by scaling up layer widths according to the available hardware resources. Third, as expected, removing the BRN layers proves detrimental and results in the worst overall performance. A natural question is whether other normalization strategies in the critic, such as Layer Normalization (LayerNorm, Ba et al. (2016)), would give the same results. However, in our ablation, we find that replacing BatchNorm with LayerNorm degrades CrossQ's performance significantly, roughly to the level of the SAC baseline. Lastly, SAC does not benefit from simply widening its critic layers to 2048, and naively adding BRN to SAC while keeping the target networks proves detrimental. This finding is in line with our diagnosis that mismatched statistics are detrimental to training.

Figure 9: Ablations on CrossQ and SAC. Loss in IQM return in percent—relative to CrossQ—at 1M environment interactions. Aggregated over all environments and six seeds each, with 95% bootstrapped confidence intervals (Agarwal et al., 2021). Left shows CrossQ ablations; right shows the effects of adding parts on top of SAC. Figure 13 in the Appendix shows individual training curves.

Figure 10: **Comparing BatchNorm hyperparameters.** All variants have comparably strong and stable curves early in training. Omitting normalization in the actor (BRN critic only) does not significantly affect CrossQ. Using the original Batch Normalization (BN, with moving-average momentum 0.99) is prone to sudden performance collapses during longer training runs. Using BRN permits stabler training, which improves with higher momentums; CrossQ's default 0.99 (black) and higher show no collapses. We plot IQM return and 70% quantile intervals over five seeds.

**Batch Normalization Layers.** In Figure 10, we ablate the BatchNorm versions (BN (Ioffe & Szegedy, 2015) and BRN (Ioffe, 2017)) and their internal moving-average momentums.
Compared to CrossQ’s optimal combination—BRN with momentum 0.99—all variants have similar sample efficiency in the early stages of training (1M steps). When using BN, we sometimes observe sudden performance collapses later in training; we attribute these to BN’s unique approach of using noisy minibatch estimates of normalization moments. BRN’s improved approach of using the less noisy moving-averages makes these collapses less likely; further noise-reduction via higher momentums eliminates these collapses entirely. Additionally, we find that using BatchNorm only in the critic (instead of both the actor and the critic) is sufficient to drive the strong performance of CrossQ; however, including it in both networks performs slightly better. ## 5 Conclusion & Future Work We introduced CrossQ, a new off-policy RL algorithm that matches or exceeds the performance of REDQ and DroQ—the current state-of-the-art on continuous control environments with state observations—in terms of sample efficiency while being multiple times more computationally efficient. To the best of our knowledge, CrossQ is the first method to successfully use BatchNorm to greatly accelerate off-policy actor-critic RL. Through benchmarks and ablations, we confirmed that target networks do indeed slow down training and showed a way to remove them without sacrificing training stability. We also showed that BatchNorm has the same accelerating effect on training in Deep RL as it does in supervised deep learning. The combined effect of removing target networks and adding BatchNorm is what makes CrossQ so efficient. We investigated the relationship between the Q estimation bias and the learning performance of CrossQ, but did not identify a straightforward dependence. This indicates that the relationship between the Q estimation bias and the agent performance is more complex than previously thought. In future work, it would be interesting to analyze the Q estimation bias more extensively, similar to Li et al. (2022). Furthermore, a deeper theoretical analysis of the used BatchNorm approach in the context of RL would be valuable, akin to the works in supervised learning, e.g., Summers & Dinneen (2020). Although the wider critic networks do provide an additional performance boost, they increase the computation cost, which could potentially be reduced. Finally, while our work focuses on the standard continuous control benchmarking environments, a logical extension would be applying CrossQ to a real robot system and using visual observations in addition to the robot state. Techniques from image-based RL, such as state augmentation (Laskin et al., 2020; Yarats et al., 2021) and auxiliary losses (Schwarzer et al., 2021; He et al., 2022), also aim to learn efficiently from limited data. We believe some of these ideas could potentially be applied to CrossQ. ACKNOWLEDGMENTS We acknowledge the grant “Einrichtung eines Labors des Deutschen Forschungszentrum für Künstliche Intelligenz (DFKI) an der Technischen Universität Darmstadt” of the Hessisches Ministerium für Wissenschaft und Kunst. This research was also supported by the Research Clusters “The Adaptive Mind” and “Third Wave of AI”, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and the Arts, Hessian.AI and by the German Research Foundation (DFG): 417962828. REFERENCES Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Bellemare. Deep reinforcement learning at the edge of the statistical precipice. 
In Advances in neural information processing systems, 2021.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Xinyue Chen, Che Wang, Zijian Zhou, and Keith Ross. Randomized ensembled double Q-learning: Learning fast without a model. In International conference on learning representations, 2021.

Jianqing Fan, Zhaoran Wang, Yuchen Xie, and Zhuoran Yang. A theoretical analysis of deep Q-learning. In Learning for dynamics and control, 2020.

Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, 2018.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, 2018a.

Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018b.

Hado Hasselt. Double Q-learning. In Advances in neural information processing systems, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on computer vision and pattern recognition, 2016.

Tairan He, Yuge Zhang, Kan Ren, Minghuan Liu, Che Wang, Weinan Zhang, Yuqing Yang, and Dongsheng Li. Reinforcement learning with automated auxiliary loss search. In Advances in neural information processing systems, 2022.

Takuya Hiraoka, Takahisa Imagawa, Taisei Hashimoto, Takashi Onishi, and Yoshimasa Tsuruoka. Dropout Q-functions for doubly efficient reinforcement learning. In International conference on learning representations, 2021.

Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In Advances in neural information processing systems, 2017.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, 2015.

Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in neural information processing systems, 2019.

Seungchan Kim, Kavosh Asadi, Michael L. Littman, and George Dimitri Konidaris. DeepMellow: Removing the need for a target network in deep Q-learning. In International joint conference on artificial intelligence, 2019.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International conference on learning representations, 2015.
3y2TfP966N
The task in Sec 4.2.1 aims to regress the difference between representations onto the time distance. As stated in point 1, again, various pattern differences might be contained in the representations of arbitrary pairs of time series; therefore, it is very likely that they cannot be regressed onto a consistent time distance. The same concern applies to the other task.
T-Rep: Representation Learning for Time Series using Time-Embeddings

Archibald Fraikin (Let it Care, PariSanté Campus, Paris, France) archibald.fraikin@inria.fr
Adrien Bennetot (Let it Care, PariSanté Campus, Paris, France) adrien.bennetot@letitcare.com
Stéphanie Allassonière (Université Paris Cité, INRIA, Inserm, SU, Centre de Recherche des Cordeliers, Paris) stephanie.allassoniere@inria.fr

Abstract

Multivariate time series present challenges to standard machine learning techniques, as they are often unlabeled, high dimensional, noisy, and contain missing data. To address this, we propose T-Rep, a self-supervised method to learn time series representations at a timestep granularity. T-Rep learns vector embeddings of time alongside its feature extractor, to extract temporal features such as trend, periodicity, or distribution shifts from the signal. These time-embeddings are leveraged in pretext tasks, to incorporate smooth and fine-grained temporal dependencies in the representations, as well as reinforce robustness to missing data. We evaluate T-Rep on downstream classification, forecasting, and anomaly detection tasks. It is compared to existing self-supervised algorithms for time series, which it outperforms in all three tasks. We test T-Rep in missing data regimes, where it proves more resilient than its counterparts. Finally, we provide latent space visualisation experiments, highlighting the interpretability of the learned representations.

1 Introduction

Multivariate time series have become ubiquitous in domains such as medicine, climate science, and finance. Unfortunately, they are high-dimensional and complex objects for which little data is labeled (Yang & Wu, 2006), as labeling is an expensive and time-consuming process. Leveraging unlabeled data to build unsupervised representations of multivariate time series has thus become a challenge of great interest, as these embeddings can significantly improve performance in tasks like forecasting, classification, or anomaly detection (Deldari et al., 2021; Su et al., 2019). This has motivated the development of self-supervised learning (SSL) models for time series, first focusing on constructing instance-level representations for classification and clustering (Tonekaboni et al., 2021; Franceschi et al., 2019; Wu et al., 2018). More fine-grained representations were then developed to model time series at the timestep level (Yue et al., 2022), which is key in domains such as healthcare or sensor systems. With fine-grained embeddings, one can capture subtle changes, periodic patterns, and irregularities that are essential for anomaly detection (Keogh et al., 2006) as well as for understanding and forecasting disease progression. These representations can also be more resilient than raw data in the face of inter-sample variability or missing data (Yue et al., 2022), common issues in Human Activity Recognition (HAR) and medicine. A central issue when learning representations of time series is the incorporation of time in the latent space, especially for timestep-level embeddings. In SSL, the temporal structure is learned through the pretext tasks. In current state-of-the-art (SOTA) models, these tasks are contrastive (Tonekaboni et al., 2021; Yue et al., 2022; Banville et al., 2021), which poses important limitations (Zhang et al., 2023). In contrastive techniques, the learning signal is binary: positive pairs should be similar, while negative pairs should be very different (Chen et al., 2020).
This makes obtaining a continuous or fine-grained notion of time in the embeddings unfeasible, as these tasks only express whether two points should be similar, but not how close or similar they should be. Embedded trajectories are thus unlikely to accurately reflect the data's temporal structure. Further, temporal contrastive tasks are incompatible with finite-state systems, where the signal transitions between $S$ states through time, regularly (periodic signal) or irregularly. Such tasks define positive pairs by proximity in time, and negative pairs by points that are distant in time (Banville et al., 2021; Franceschi et al., 2019), which can incur sampling bias issues. Points of a negative pair might be far in time but close to a period apart (i.e., very similar), and points of a positive pair might be close but very different (think of a pulsatile signal, for example). This incoherent information hinders learning and may result in a poor embedding structure. Finite-state systems are extremely common in real-world scenarios such as sensor systems, medical monitoring, or weather systems, making the treatment of these cycles crucial.

To address the above issues, we propose T-Rep, a self-supervised method for learning fine-grained representations of (univariate and multivariate) time series. T-Rep improves the treatment of time in SSL through the use of time-embeddings, which are integrated in the feature-extracting encoder and leveraged in the pretext tasks, helping the model learn detailed time-related features. We define a time-embedding as a vector embedding of time, obtained as the output of a learned function $h_\psi$, which encodes temporal signal features such as trend, periodicity, or distribution shifts. Time-embeddings thus enhance our model's resilience to missing data, and improve its performance when faced with finite-state systems and non-stationarity. We evaluate T-Rep on a wide variety of datasets in classification, forecasting, and anomaly detection (see Section 5), notably on Sepsis (Reyna et al., 2020a; Goldberger et al., 2000), a real-world dataset containing multivariate time series from 40,336 patients in intensive care units (ICU), featuring noisy and missing data. Our major contributions are summarised as follows:

• To the best of our knowledge, we propose the first self-supervised framework for time series to leverage time-embeddings in its pretext tasks. This helps the model learn fine-grained temporal dependencies, giving the latent space a more coherent temporal structure than existing methods. The use of time-embeddings also encourages resilience to missing data, and produces more information-dense and interpretable embeddings.

• We compare T-Rep to SOTA self-supervised models for time series in classification, forecasting, and anomaly detection. It consistently outperforms all baselines whilst using a lower-dimensional latent space, and also shows stronger resilience to missing data than existing methods. Further, our latent space visualisation experiments show that the learned embeddings are highly interpretable.

2 RELATED WORK

Representation learning for time series. The first techniques used in the field were encoder-decoder based, trained to reconstruct the original time series. Such models include Autowarp (Abid & Zou, 2018), TimeNet (Malhotra et al., 2017), and LSTM-SAE (Sagheer & Kotb, 2019), which all feature an RNN-based architecture.
Models inspired by Variational Auto-Encoders (Kingma & Welling, 2013) have also been used, notably InterFusion (Li et al., 2021), SOM-VAE (Fortuin et al., 2018), and OmniAnomaly (Su et al., 2019), which combines a VAE with normalising flows. Encoder-only methods have been more popular recently, often based on contrastive approaches (Zhang et al., 2023). The Contrastive Predictive Coding (CPC) framework (Oord et al., 2018) tries to maximise the mutual information between future latent states and a context vector, using the InfoNCE loss. This approach has been adapted for anomaly detection (Deldari et al., 2021) and general representations (Eldele et al., 2021). TS-TCC (Eldele et al., 2021) augments CPC by applying weak and strong transformations to the raw signal. Augmentation-based contrastive methods from computer vision (Chen et al., 2020) have also been adapted to time series by changing the augmentations (Kiyasseh et al., 2021). Domain-specific transformations were proposed for wearable sensors (Cheng et al., 2020) and ECGs (Kiyasseh et al., 2021), such as noise injection, cropping, warping, and jittering (Pöppelbaum et al., 2022). The issue with these methods is that they make transformation-invariance assumptions which may not be satisfied by the signal (Zhang et al., 2023; Yue et al., 2022). TS2Vec (Yue et al., 2022) addresses this with contextual consistency.

Time-embeddings in time series representations have only been used in transformer-based architectures, which require a positional encoding module (Vaswani et al., 2017). While some use the original fixed sinusoidal positional encoding (Haresamudram et al., 2020; Zhang et al., 2022), Zerveas et al. (2021) and Tipirneni & Reddy (2022) chose to learn a time-embedding, using a linear layer and a fully-connected layer respectively. Using or learning more sophisticated time-embeddings is a largely unexplored avenue that seems promising for dealing with long-term trends and seasonality (periodicity) in sequential data (Zhang et al., 2023; Wen et al., 2022). The most elaborate time-embedding for time series is Time2Vec (Kazemi et al., 2019), which was developed for supervised learning tasks and has not yet been exploited in a self-supervised setting. In existing self-supervised models, the time-embedding is used by the encoder to provide positional information (Zerveas et al., 2021; Tipirneni & Reddy, 2022; Haresamudram et al., 2020; Zhang et al., 2022), but is never exploited in pretext tasks. The best-performing models use contrastive techniques to learn temporal dependencies, which only provide a binary signal (Zhang et al., 2023). T-Loss (Franceschi et al., 2019) follows the assumption that neighboring windows should be similar and builds a triplet loss around this idea. TNC (Tonekaboni et al., 2021) extends this framework, dividing the signal into stationary windows to construct its positive and negative pairs. In Banville et al. (2021), Yue et al. (2022), and Franceschi et al. (2019), positive and negative pairs are delimited by a threshold on the number of differing timesteps, which makes these methods unsuited to capturing periodic or irregularly recurring patterns in the data. Further, all these contrastive methods are quite coarse, making it hard to learn fine-grained temporal dependencies. To summarise, existing methods have made tremendous progress in extracting spatial features from time series, but temporal feature learning is still limited.
In particular, they are not suited to handling recurring patterns (periodic or irregular), and they struggle to learn fine-grained temporal dependencies because of the binary signal and sampling bias of contrastive tasks.

3 BACKGROUND

The aim of this work is to improve the treatment of time in representation learning for temporal data. These innovations are combined with state-of-the-art methods for spatial feature learning and model training: the contextual consistency and hierarchical loss frameworks (Yue et al., 2022).

3.1 Problem Definition

Given a dataset \( X = \{x_1, ..., x_N\} \in \mathbb{R}^{N \times T \times C} \) of \( N \) time series of length \( T \) with \( C \) channels, the objective of self-supervised learning is to learn a function \( f_\theta \) s.t. \( \forall i \in [1, N], z_i = f_\theta(x_i) \). Each \( z_i \in \mathbb{R}^{T \times F} \) is a representation of \( x_i \), of length \( T \) and with \( F \) channels, which should preserve as many features of the original data as possible. \( f_\theta(\cdot) \) is learned by designing artificial supervised signals, called pretext tasks, from the unlabeled data \( X \).

3.2 Contextual Consistency

The objective of contextual consistency is to learn context-invariant representations of time series. The idea is to sample two overlapping segments \( x^{(1)} \) and \( x^{(2)} \) of the time series \( x \), to which random timestamp masking is applied, thus creating two different contexts (random choice of window and masked timesteps) for the overlapping window. Representations in the overlapping timesteps are then encouraged to be similar, leading to context-invariant representations. Two contrastive tasks were introduced by Yue et al. (2022) alongside the contextual consistency framework to extract spatial and temporal features.

Instance-wise contrasting encourages representations of the same time series under different contexts to be encoded similarly, and those of different instances to be dissimilar (Yue et al., 2022). Let \( B \) be the batch size, \( i \) the time series index, and \( t \) a timestep. \( z_{i,t} \) and \( z'_{i,t} \) denote the corresponding representation vectors under two different contexts. The loss function is given by:

\[ L_{inst}^{(i,t)} = - \log \frac{\exp(z_{i,t} \cdot z'_{i,t})}{\sum_{j=1}^{B} \left( \exp(z_{i,t} \cdot z'_{j,t}) + \mathbb{1}_{i \neq j} \exp(z_{i,t} \cdot z_{j,t}) \right)}. \tag{1} \]

Temporal contrasting encourages representations of time series under different contexts to be encoded similarly when their respective timesteps match, and far apart when the timesteps differ (Yue et al., 2022). The loss function is given in Eq. 2, where \( \Omega \) is the set of timesteps in the overlap between the two subseries:

\[ L_{temp}^{(i,t)} = -\log \frac{\exp(z_{i,t} \cdot z'_{i,t})}{\sum_{t' \in \Omega} \left( \exp(z_{i,t} \cdot z'_{i,t'}) + \mathbb{1}_{t \neq t'} \exp(z_{i,t} \cdot z_{i,t'}) \right)}. \tag{2} \]

### 3.3 Hierarchical Loss

The hierarchical loss framework applies and sums the model's loss function at different scales, starting from a per-timestep representation and applying max-pooling operations to reduce the time dimension between scales (Yue et al., 2022). This gives users control over the granularity of the representation used for downstream tasks, without sacrificing performance. It also makes the model more robust to missing data, as it uses long-range information in the surrounding representations to reconstruct missing timesteps (Yue et al., 2022).
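To make the hierarchy concrete, the following sketch applies an arbitrary per-scale loss while repeatedly max-pooling the time dimension. It is a minimal illustration of the framework, not the reference implementation; function and argument names are assumptions.

```python
import jax
import jax.numpy as jnp

def hierarchical_loss(z1, z2, per_scale_loss):
    # z1, z2: (B, T, F) representations of the same window under two
    # contexts; per_scale_loss maps two such arrays to a scalar.
    total, n_scales = 0.0, 0
    while z1.shape[1] > 1:
        total, n_scales = total + per_scale_loss(z1, z2), n_scales + 1
        # Halve the time dimension with max-pooling (window 2, stride 2).
        z1 = jax.lax.reduce_window(z1, -jnp.inf, jax.lax.max,
                                   (1, 2, 1), (1, 2, 1), 'VALID')
        z2 = jax.lax.reduce_window(z2, -jnp.inf, jax.lax.max,
                                   (1, 2, 1), (1, 2, 1), 'VALID')
    # Final, instance-level scale with the time dimension fully pooled.
    return (total + per_scale_loss(z1, z2)) / (n_scales + 1)
```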
## 4 Method

### 4.1 Encoder Architecture

We present below our convolutional encoder, which contains three modules. The overall model structure is illustrated in Figure 1.

Figure 1: Overall structure of the T-Rep model.

**Linear Projection Layer** The first layer projects individual points $x_{i,t} \in \mathbb{R}^C$ to vectors $u_{i,t} \in \mathbb{R}^F$ with a fixed number of channels $F$. Random timestamp masking is applied to each $u_i$ independently after the linear projection (only during training), as part of the contextual consistency framework (Yue et al., 2022).

**Time-Embedding Module** The time-embedding module $h_\psi$ is responsible for learning time-related features $\tau_t$ (trend, periodicity, distribution shifts, etc.) directly from the time series sample indices $t$. The time-embedding function is not fixed like a transformer's positional encoding module; it is learned jointly with the rest of the encoder. This makes the time-embeddings flexible: they adapt to the data at hand. The choice of architecture for the time-embedding module can impact performance in downstream tasks, and is discussed in Appendix A.4. For general applications, we recommend using Time2Vec (Kazemi et al., 2019), which captures trend and periodicity. To the best of our knowledge, T-Rep is the first model to combine a time-embedding module and a convolutional encoder in self-supervised learning for time series. The time-embedding module must return vectors which define a probability distribution (positive components that sum to 1). This is due to the use of statistical divergence measures in a pretext task, detailed in Section 4.2.1. We find experimentally that the optimal way to satisfy this constraint is to apply a sigmoid activation to the final layer of the module and then divide each element by the vector sum:

\[ (\tau_t)_k = \frac{\sigma(h_\psi(t))_k}{\sum_{j=1}^{K} \sigma(h_\psi(t))_j}, \]

where \( \tau_t \) contains \( K \) elements, \( \sigma(\cdot) \) is the sigmoid function, and \( h_\psi \) is the time-embedding module parameterised by \( \psi \). Time-embeddings \( \tau_t \) are concatenated with the vectors \( u_{i,t} \) after the linear projection, and the vectors \([u_{i,t} \; \tau_t]^T\) are fed to the encoder \( f_\theta \).

**Temporal Convolution Network (TCN) Encoder** The main body of the encoder, \( f_\theta(\cdot) \), is structured as a sequence of residual blocks, each containing two layers of 1D dilated convolutions interleaved with GeLU activations. The convolution dilation parameter increases with the network depth, to first focus on local features and then on longer-term dependencies: \( d = 2^i \), where \( i \) is the block index. Dilated convolutions have proven to be very effective in both supervised and unsupervised learning for time series (Zhou et al., 2021; Tonekaboni et al., 2021; Bai et al., 2018).
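As an illustration of the time-embedding module above, here is a minimal Time2Vec-style embedding with the simplex normalization of the equation just given. It is a sketch under stated assumptions: parameter names, initializers, and the exact Time2Vec variant are ours.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class Time2VecEmbedding(nn.Module):
    k: int  # time-embedding dimension K

    @nn.compact
    def __call__(self, t):  # t: (T,) vector of timestep indices
        t = t[:, None].astype(jnp.float32)
        w = self.param('w', nn.initializers.normal(), (1, self.k))
        b = self.param('b', nn.initializers.zeros, (self.k,))
        raw = t @ w + b
        # First component stays linear (trend); the rest are periodic.
        tau = raw.at[:, 1:].set(jnp.sin(raw[:, 1:]))
        # Sigmoid, then renormalize so each tau_t sums to one.
        tau = jax.nn.sigmoid(tau)
        return tau / jnp.sum(tau, axis=-1, keepdims=True)
```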
### 4.2 Pretext Tasks

We present below two novel SSL pretext tasks which leverage time-embeddings and are designed to complement each other. The first, 'Time-embedding Divergence Prediction', describes how the information gained through time-embeddings should structure the latent space and be included in the time series representations. The second, 'Time-embedding-conditioned Forecasting', focuses on what information the time-embeddings and representations should contain.

#### 4.2.1 Time-embedding Divergence Prediction

The first pretext task aims to integrate the notion of time into the latent space structure. It consists in predicting a divergence measure between two time-embeddings \( \tau_t \) and \( \tau_{t'} \), given the representations at the corresponding timesteps. The purpose of this task is for distances in the latent space to correlate with temporal distances, resulting in smoother latent trajectories than with contrastive learning. Let us define this regression task formally. Take a batch \( X \in \mathbb{R}^{B \times T \times C} \), from which we sample \( x_{i,t} \) and \( x_{j,t'} \), \( \forall i,j \in [1,B] \) and \( t,t' \in [1,T] \) s.t. \( t \neq t' \). The task input is the difference \( z_{i,t} - z'_{j,t'} \), where \( z_{i,t} \) is the representation of \( x_{i,t} \) under the context \( c \) and \( z'_{j,t'} \) is the representation of \( x_{j,t'} \) under the context \( c' \). Taking representations under different contexts further encourages context-invariant representations, as detailed in Section 3.2. The regression target is \( y = D(\tau_t, \tau_{t'}) \), where \( \tau_t \) and \( \tau_{t'} \) are the respective time-embeddings of \( t \) and \( t' \), and \( D \) is a measure of statistical divergence, used to quantify the discrepancy between the time-embedding distributions. We use the Jensen-Shannon divergence (JSD), a smoothed and symmetric version of the KL divergence (Lin, 1991). The task loss is:

\[ L_{div} = \frac{1}{M} \sum_{(i,j,t,t') \in \Omega} \left( G_1(z_{i,t} - z'_{j,t'}) - JSD(\tau_t \,\|\, \tau_{t'}) \right)^2, \]

where \( \Omega \) is the set (of size \( M \)) of time/instance indices for the randomly sampled pairs of representations, and \( G_1 \) is the regression task head. Using divergences allows us to capture how two distributions differ, whereas a simple norm could only capture by how much two vectors differ. This nuance is important: suppose the time-embedding is a 3-dimensional vector that has learned a hierarchical representation of time (equivalent to seconds, minutes, hours). A difference of 1.0 on all time scales (01:01:01) represents a very different situation to a difference of 3.0 hours and no difference in minutes and seconds (03:00:00), but this could not be captured by a simple vector norm.
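A small sketch of this loss under stated assumptions (the pairing logic, the base of the logarithm, and all names are ours; `g1` stands for the regression head \(G_1\)):

```python
import jax.numpy as jnp

def jsd(p, q, eps=1e-8):
    # Jensen-Shannon divergence between two time-embeddings, treated as
    # discrete distributions (positive entries summing to 1, Section 4.1).
    m = 0.5 * (p + q)
    kl = lambda a, b: jnp.sum(a * (jnp.log(a + eps) - jnp.log(b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def divergence_prediction_loss(g1, z_pairs, tau_pairs):
    # Regress G1(z - z') onto JSD(tau || tau') over the sampled pairs.
    targets = jnp.stack([jsd(t, tp) for t, tp in tau_pairs])
    preds = jnp.stack([g1(z - zp).squeeze() for z, zp in z_pairs])
    return jnp.mean((preds - targets) ** 2)
```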
#### 4.2.2 Time-embedding-conditioned Forecasting

Our second pretext task aims to incorporate predictive information in the time-embedding vectors, as well as context-awareness in the representations, to encourage robustness to missing data. The task takes in the representation of a time series at a specific timestep and tries to predict the representation vector of a nearby point, conditioned on the target's time-embedding. The input used is the concatenation \([z_{i,t}, \tau_{t+\Delta}]^T\) of the representation \(z_{i,t} \in \mathbb{R}^F\) at time \(t\) and the time-embedding of the target \(\tau_{t+\Delta} \in \mathbb{R}^K\). \(\Delta_{max}\) is a hyperparameter that fixes the range in which the prediction target can be sampled. The target is the encoded representation \(z_{i,t+\Delta}\) at a uniformly sampled timestep \(t + \Delta\), \(\Delta \sim \mathcal{U}[-\Delta_{max}, \Delta_{max}]\). The input is forwarded through the task head \(G_2 : \mathbb{R}^{F+K} \mapsto \mathbb{R}^F\), a 2-layer MLP with ReLU activations. The loss is a simple MSE, given by:

\[ L_{pred} = \frac{1}{MT} \sum_{i \in \Omega_M} \sum_{t \in \Omega_T} \left( G_2\left( [z_{i,t}^{(c_1)}, \tau_{t+\Delta_i}]^T \right) - z_{i,t+\Delta_i}^{(c_2)} \right)^2, \]

where \(\Delta_i \sim \mathcal{U}[-\Delta_{max}, \Delta_{max}]\), and \(\Omega_M\) and \(\Omega_T\) are the sets of randomly sampled instances and timesteps for each batch, whose respective cardinalities are controlled by hyperparameters \(M\) and \(T\). The contexts \(c_1\) and \(c_2\) are chosen randomly from \(\{c, c'\}\), so they may be identical or different, further encouraging contextual consistency.

Conditioning this prediction task on the time-embedding of the target forces the model to extract as much information about the signal as possible from its position in time. This results in more information-dense time-embeddings, which can be leveraged when working with individual trajectories for forecasting and anomaly detection. In practice, we choose a short prediction range \(\Delta_{max} \leq 20\), as the focus is not to build representations tailored to forecasting but rather 'context-aware' representations. This context-awareness is enforced by making predictions backwards as well as forwards, encouraging representations to contain information about their surroundings and making them robust to missing timesteps. Longer prediction horizons would push representations to contain more predictive features than spatial features, biasing the model away from use-cases around classification, clustering, and other 'comparative' or instance-level downstream tasks.

A key objective of this pretext task is to build resilience to missing data. This is achieved by (1) learning information-dense time-embeddings, which remain available even when data is missing, and (2) learning context-aware representations, which can predict missing timesteps in their close vicinity.
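As with the previous task, a minimal sketch of this loss for a single instance follows; the sampling of timesteps and offsets, the boundary handling, and all names are simplifying assumptions (`g2` stands for the task head \(G_2\)):

```python
import jax
import jax.numpy as jnp

def te_conditioned_forecast_loss(g2, z_c1, z_c2, tau, key,
                                 n_samples=32, delta_max=20):
    # z_c1, z_c2: (T, F) representations of one instance under two
    # contexts; tau: (T, K) time-embeddings for the same timesteps.
    T = z_c1.shape[0]
    k1, k2 = jax.random.split(key)
    t = jax.random.randint(k1, (n_samples,), delta_max, T - delta_max)
    delta = jax.random.randint(k2, (n_samples,), -delta_max, delta_max + 1)
    # Predict z_{t+delta} from [z_t, tau_{t+delta}], backwards or forwards.
    inputs = jnp.concatenate([z_c1[t], tau[t + delta]], axis=-1)
    preds = g2(inputs)  # task head G2: R^{F+K} -> R^F
    return jnp.mean((preds - z_c2[t + delta]) ** 2)
```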
## 5 EXPERIMENTS

This section presents the experiments conducted to evaluate T-Rep's learned representations. Because of the variety of downstream tasks, we perform no hyperparameter tuning and use the same hyperparameters across tasks. Further, the same architectures and hyperparameters are used across all evaluated models where possible, to ensure a fair comparison. Experimental details and guidelines for reproduction are included in Appendix A.2 and A.3. The code written to produce these experiments has been made publicly available.\(^1\)

5.1 TIME SERIES ANOMALY DETECTION

We perform two experiments: point-based anomaly detection on the Yahoo dataset (Nikolay Laptev, 2015), and segment-based anomaly detection on the 2019 PhysioNet Challenge's Sepsis dataset (Reyna et al., 2020a; Goldberger et al., 2000). We include both types of task because segment-based anomaly detection helps avoid the bias associated with point-adjusted anomaly detection (Kim et al., 2022). Yahoo contains 367 synthetic and real univariate time series, featuring outlier and change-point anomalies. Sepsis is a real-world dataset containing multivariate time series from 40,336 patients in intensive care units, featuring noisy and missing data. The task consists in detecting sepsis, a medical anomaly present in just 2.2% of patients. On both datasets, we compare T-Rep to the self-supervised model TS2Vec (Yue et al., 2022), as well as a baseline that follows the same anomaly detection protocol as TS2Vec and T-Rep on each dataset but uses the raw data. We follow a streaming anomaly detection procedure (Ren et al., 2019) on both datasets. Details on the procedures can be found in Appendix A.2.3, which also details the pre-processing applied to Sepsis.

| | Yahoo (F1) | Sepsis (F1) |
|------------------|------------|-------------|
| Baseline | 0.110 | 0.241 |
| TS2Vec | 0.733 | 0.619 |
| **T-Rep (Ours)** | **0.757** | **0.666** |

Table 1: Time series anomaly detection F1 scores, on the Yahoo dataset for point-based anomalies and the Sepsis dataset for segment-based anomalies. Anomalies include outliers as well as change points. TS2Vec results are reproduced using the official source code (Zhihan Yue, 2021).

\(^1\)https://github.com/Let-it-Care/T-Rep

Table 1 shows the evaluation results (more detailed results are presented in Appendix A.9). T-Rep achieves the strongest performance on both datasets, with an F1 score of 75.7% on Yahoo and 66.6% on Sepsis, a 2.4-point and 4.7-point improvement, respectively, over the previous SOTA TS2Vec (Yue et al., 2022). T-Rep's performance can be attributed to its detailed understanding of temporal features, which helps it better detect out-of-distribution or anomalous behaviour. It achieves this performance with a latent space dimension of 200 on Yahoo, smaller than the 320 dimensions used by TS2Vec (Yue et al., 2022), further showing that T-Rep learns more information-dense representations than its predecessors.

### 5.2 TIME SERIES CLASSIFICATION

The classification procedure is similar to that introduced by Franceschi et al. (2019): a representation \( z \) is produced by the self-supervised model, and an SVM classifier with RBF kernel is then trained to classify the representations (see Appendix A.2 and A.3 for procedure and reproduction details). We compare T-Rep to SOTA self-supervised models for time series: TS2Vec (Yue et al., 2022), T-Loss (Franceschi et al., 2019), TS-TCC (Eldele et al., 2021), TNC (Tonekaboni et al., 2021), Minirocket (Dempster et al., 2021), and a DTW-based classifier (Müller, 2007). These models are evaluated on the UEA classification archive's 30 multivariate time series datasets, coming from diverse domains such as medicine, sensor systems, speech, and activity recognition (Dau et al., 2019). Evaluation results are summarised in Table 2, and full results are shown in Appendix A.7 along with more details on the chosen evaluation metrics.

Table 2 shows that T-Rep has an accuracy +2.1% higher than TS2Vec and +4.8% higher than Minirocket on average. In terms of average accuracy, T-Rep outperforms all competitors except Minirocket, which has a 0.001 lead; TS2Vec's average accuracy is very close to T-Rep's, only 0.007 lower. T-Rep has a positive 'Avg. Difference' to Minirocket despite having a lower 'Avg. Acc.' because Minirocket often performs slightly better than T-Rep, but when T-Rep is more accurate, it is so by a larger margin. It is important to note that Minirocket was developed specifically for time series classification (and is the SOTA as of this date), while T-Rep is a general representation learning model, highlighting its strengths across applications. The latent space dimensionality is set to 320 for all baselines, except Minirocket, which uses 9996 dimensions (as per the official code). T-Rep uses only 128 dimensions, but leverages the temporal dimension of its representations (see Appendix A.2.4 for details).
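The classification protocol described at the start of this subsection is straightforward to sketch; `encode` is an assumed helper mapping raw series to timestep-level representations, and max-pooling over time (as in Yue et al. (2022)) yields instance-level vectors:

```python
import numpy as np
from sklearn.svm import SVC

def classification_score(encode, x_train, y_train, x_test, y_test):
    # (N, T, C) series -> (N, T, F) representations -> (N, F) via max-pool.
    z_train = np.asarray(encode(x_train)).max(axis=1)
    z_test = np.asarray(encode(x_test)).max(axis=1)
    clf = SVC(kernel="rbf").fit(z_train, y_train)
    return clf.score(z_test, y_test)
```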
### 5.3 TIME SERIES FORECASTING

We perform a multivariate forecasting task on the four public ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2), which contain electricity and power-load data as well as oil temperature (Zhou et al., 2021). Forecasting is performed over multiple horizons, using a procedure described in Appendix A.2.2. We evaluate T-Rep against TS2Vec (Yue et al., 2022), a SOTA self-supervised model for time series, but also Informer (Zhou et al., 2021), the SOTA in time series forecasting, as well as TCN (Bai et al., 2018), a supervised model with the same backbone architecture as T-Rep, and a linear model trained on raw data. Aggregate results, averaged over all datasets and prediction horizons, are presented in Table 3 (full results are in Appendix A.8).

| Method | Avg. Acc. | Avg. Difference (%) |
|--------------|-----------|---------------------|
| T-Loss | 0.657 | 33.6 |
| TS2Vec | 0.699 | 2.1 |
| TNC | 0.671 | 12.3 |
| TS-TCC | 0.667 | 10.0 |
| DTW | 0.650 | 43.6 |
| Minirocket | **0.707** | 4.8 |
| T-Rep (Ours) | 0.706 | – |

Table 2: Multivariate time series classification results on the UEA archive. DTW, TNC, and TS-TCC results are taken directly from Yue et al. (2022), while TS2Vec and Minirocket results are reproduced using the official code. 'Avg. Difference' measures the relative difference in accuracy brought by T-Rep compared to a given model.

Table 3 shows that T-Rep achieves the best average scores in terms of both MSE and MAE, with a 24.2% decrease in MSE relative to the supervised SOTA Informer (Zhou et al., 2021) and a slight improvement of 1.80% over the self-supervised SOTA TS2Vec (Yue et al., 2022). It also achieves a better average rank, ranking first more often than any other model. Furthermore, the linear baseline is the second model that most frequently ranks first, beating TS2Vec on this metric. However, this model does not perform well on all datasets and therefore ranks third in terms of MAE and MSE. Interestingly, most existing self-supervised methods for time series use high-dimensional latent spaces ($F = 320$ dimensions per timestep) (Franceschi et al., 2019; Tonekaboni et al., 2021), which is thus the setting used to produce the baseline results in Table 3. This can be an issue for downstream applications, which might face the curse of dimensionality (Verleysen & François, 2005). T-Rep, however, outperforms all baselines with a latent space that is almost three times smaller, using only $F = 128$ dimensions. T-Rep's superior performance can be attributed to its comprehensive treatment of time, capturing trend, seasonality, or distribution shifts more easily with its time-embedding module.

| | T-Rep (Ours) | TS2Vec | Informer | TCN | Linear Baseline |
|-----------|---------------|---------------|---------------|---------------|-----------------|
| | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE |
| Avg. Rank | 1.90 / 1.85 | 2.40 / 2.45 | 3.30 / 3.55 | 4.35 / 4.10 | 3.05 / 3.05 |
| Ranks 1st | 8 / 8 | 2 / 1 | 3 / 3 | 1 / 1 | 6 / 7 |
| Avg. | 0.986 / 0.702 | 1.004 / 0.712 | 1.300 / 0.820 | 2.012 / 1.205 | 1.017 / 0.727 |

Table 3: Multivariate time series forecasting results on the ETT datasets, over 5 different prediction horizons (each cell reports MSE / MAE). The presented results are averaged over all datasets and prediction horizons. The 'Ranks 1st' metric counts the number of times a model ranks first amongst its competitors. Results for all models are based on our own reproductions, using the official code for each model (see Appendix A.2).
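For the multi-horizon protocol, a simple representation-based probe can be sketched as follows, assuming (as in Yue et al. (2022)) that a linear model regresses from the representation at time $t$ to the next $H$ observations; the function name and regularization are assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

def forecast_probe(z, x, horizon):
    # z: (T, F) timestep-level representations; x: (T, C) raw series.
    inputs = z[:-horizon]                                   # (T-H, F)
    targets = np.stack([x[t + 1 : t + 1 + horizon].ravel()
                        for t in range(len(x) - horizon)])  # (T-H, H*C)
    return Ridge(alpha=1.0).fit(inputs, targets)
```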
5.4 ROBUSTNESS TO MISSING DATA

T-Rep was developed with medical and HAR applications in mind, which require strong resilience to missing data; we evaluate this in two experiments. In the first, qualitative experiment, we visualise representations of incomplete time series using the DodgerLoopGame dataset from the UCR archive (Dau et al., 2019), which features a continuous segment of 25 missing timesteps (see top row of Figure 2a). We then visualise a heatmap of T-Rep's and TS2Vec's time series representations (bottom row), showing the 15 dimensions with the highest variance. T-Loss (Franceschi et al., 2019) is not included, as it does not produce timestep-level representations. For TS2Vec, the representations of the missing timesteps stand out clearly (they are brighter, reflecting higher values), illustrating that the model struggles to interpolate and instead produces out-of-distribution representations. T-Rep, on the other hand, produces much more plausible representations for these missing timesteps, with smoother transitions in and out of the area with missing points, as well as realistic interpolations matching the data distribution of the surrounding area.

Figure 2: Illustration of T-Rep's robustness to missing data on UCR archive datasets. (a) shows heatmap representations of T-Rep and TS2Vec when faced with missing data, and (b) shows accuracy against the percentage of missing data in a classification task for T-Rep, TS2Vec (Yue et al., 2022), and T-Loss (Franceschi et al., 2019). Error bars denote the standard deviation over 6 train-test runs.

Second, we perform a more quantitative experiment, examining classification accuracy for different amounts of missing data on the ArticularyWordRecognition dataset of the UCR archive (Dau et al., 2019). We compare T-Rep's performance to TS2Vec (Yue et al., 2022) and T-Loss, a self-supervised representation learning model for time series specifically designed for downstream classification and clustering tasks (Franceschi et al., 2019). The results are unequivocal: T-Rep is extremely resilient to missing data, starting at 98% accuracy with the complete dataset, dropping by only 1.3% when faced with 75% missing data, and still reaching 86.5% with only 10% of the data available (green curve of Figure 2b). The performance of TS2Vec is also very strong (orange curve), following a similar trend to T-Rep with 2% less accuracy on average and a more pronounced dip in performance when 90% of the data is missing, dropping to 82.7%. T-Loss, on the other hand, is much more sensitive to any missing data: its performance decreases exponentially, reaching 2.6% when presented with 90% missing data (red curve).

### 5.5 Ablation Study

To empirically validate T-Rep's components and pretext tasks, we conduct ablation studies on forecasting and anomaly detection tasks, using the ETT datasets for forecasting and the PhysioNet Challenge's Sepsis dataset for anomaly detection. Unless specified otherwise, Time2Vec is the chosen time-embedding method.

- **w/o TE-conditioned forecasting** assigns a weight of 0.0 to the 'time-embedding-conditioned forecasting' task, redistributing weights evenly.
- **w/o TE divergence prediction** behaves similarly, but for the 'time-embedding divergence prediction' task.
- **w/o New pretext tasks** retains only the time-embedding module and the two TS2Vec pretext tasks, isolating the impact on performance of different time-embedding architectures.
| | Forecasting | Anomaly Detection |
|----------------------|-------------|-------------------|
| T-Rep | 0.986 | 0.665 |
| Pretext tasks | | |
| w/o TE-conditioned forecasting | 1.022 (+3.7%) | 0.392 (-41%) |
| w/o TE divergence prediction | 1.003 (+1.7%) | 0.634 (-4.7%) |
| w/o New pretext tasks | 0.999 (+1.3%) | 0.513 (-22.8%) |
| Architecture | | |
| w/ MLP TE module | 1.008 (+2.2%) | 0.443 (-33.3%) |
| w/ Radial Basis Features TE module | 1.007 (+2.1%) | 0.401 (-39.7%) |
| w/o TE module (=TS2Vec) | 1.004 (+1.8%) | 0.610 (-8.2%) |

Table 4: Ablation study results on the ETT forecasting datasets (measured in MSE) and the Sepsis anomaly detection dataset (measured in F1 score). Percentage changes are calculated as the relative difference between a modified model’s performance and T-Rep’s.

Results in Table 4 confirm that the proposed pretext tasks and the addition of a time-embedding module to the encoder contribute to T-Rep’s performance: removing any of these decreases the scores in both tasks. The results also illustrate the interdependency of the two new pretext tasks: in forecasting, keeping only one of them yields worse results than removing both. The study also justifies our preferred choice of time-embedding, since Time2Vec (Kazemi et al., 2019) outperforms the other two architectures in both tasks.

### 6 Conclusion

We present T-Rep, a self-supervised method for learning representations of time series at a timestep granularity. T-Rep learns vector embeddings of time alongside its encoder, to extract temporal features such as trend, periodicity, and distribution shifts. This, alongside pretext tasks which leverage the time-embeddings, allows our model to learn detailed temporal dependencies and capture any periodic or irregularly recurring patterns in the data. We evaluate T-Rep on classification, forecasting, and anomaly detection tasks, where it outperforms existing methods. This highlights the ability of time-embeddings to capture temporal dependencies within time series. Further, we demonstrate T-Rep’s efficiency in missing-data regimes, and provide visualisation experiments of the learned embedding space to highlight the interpretability of our method.

REFERENCES

Abubakar Abid and James Zou. Autowarp: learning a warping distance from unlabeled time series using sequence autoencoders. *arXiv preprint arXiv:1810.10107*, 2018.

Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. *arXiv preprint arXiv:1803.01271*, 2018.

Hubert Banville, Omar Chehab, Aapo Hyvärinen, Denis-Alexander Engemann, and Alexandre Gramfort. Uncovering the structure of clinical EEG signals with self-supervised learning. *Journal of Neural Engineering*, 18(4):046020, 2021.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, pp. 1597–1607. PMLR, 2020.

Joseph Y Cheng, Hanlin Goh, Kaan Dogrusoz, Oncel Tuzel, and Erdrin Azemi. Subject-aware contrastive learning for biosignals. *arXiv preprint arXiv:2007.04871*, 2020.
Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, and Eamonn Keogh. The UCR time series archive. *IEEE/CAA Journal of Automatica Sinica*, 6(6):1293–1305, 2019.

Shohreh Deldari, Daniel V Smith, Hao Xue, and Flora D Salim. Time series change point detection with self-supervised contrastive predictive coding. In *Proceedings of the Web Conference 2021*, pp. 3124–3135, 2021.

Angus Dempster, Daniel F Schmidt, and Geoffrey I Webb. MiniRocket: A very fast (almost) deterministic transform for time series classification. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, pp. 248–257, New York, 2021. ACM.

David A Dickey and Wayne A Fuller. Distribution of the estimators for autoregressive time series with a unit root. *Journal of the American Statistical Association*, 74(366a):427–431, 1979.

Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. *arXiv preprint arXiv:2106.14112*, 2021.

Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, and Gunnar Rätsch. SOM-VAE: Interpretable discrete representation learning on time series. *arXiv preprint arXiv:1806.02199*, 2018.

Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. *Advances in Neural Information Processing Systems*, 32, 2019.

A L Goldberger, L A Amaral, L Glass, J M Hausdorff, P C Ivanov, R G Mark, J E Mietus, G B Moody, C K Peng, and H E Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. *Circulation*, 101(23):E215–20, June 2000.

Harish Haresamudram, Apoorva Beedu, Varun Agrawal, Patrick L Grady, Irfan Essa, Judy Hoffman, and Thomas Plötz. Masked reconstruction based self-supervision for human activity recognition. In *Proceedings of the 2020 ACM International Symposium on Wearable Computers*, pp. 45–49, 2020.

Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, and Marcus Brubaker. Time2Vec: Learning a vector representation of time. *arXiv preprint arXiv:1907.05321*, 2019.

Eamonn Keogh, Jessica Lin, Ada Waichee Fu, and Helga Van Herle. Finding unusual medical time-series subsequences: Algorithms and applications. *IEEE Transactions on Information Technology in Biomedicine*, 10(3):429–439, 2006.
BENO: Boundary-Embedded Neural Operators for Elliptic PDEs

Haixin Wang\textsuperscript{1,*}, Jiaxin Li\textsuperscript{2,*}, Anubhav Dwivedi\textsuperscript{3}, Kentaro Hara\textsuperscript{3}, Tailin Wu\textsuperscript{2,†}

\textsuperscript{1}National Engineering Research Center for Software Engineering, Peking University, \textsuperscript{2}Department of Engineering, Westlake University, \textsuperscript{3}Department of Astronautics and Aeronautics, Stanford University

wang.hx@stu.pku.edu.cn, lijiaxin@westlake.edu.cn, \{anubhavd,kenhara\}@stanford.edu, wutailin@westlake.edu.cn

Abstract

Elliptic partial differential equations (PDEs) are a major class of time-independent PDEs that play a key role in many scientific and engineering domains such as fluid dynamics, plasma physics, and solid mechanics. Recently, neural operators have emerged as a promising technique to solve elliptic PDEs more efficiently by directly mapping the input to solutions. However, existing networks typically cannot handle complex geometries and inhomogeneous boundary values present in the real world. Here we introduce Boundary-Embedded Neural Operators (BENO), a novel neural operator architecture that embeds the complex geometries and inhomogeneous boundary values into the solving of elliptic PDEs. Inspired by the classical Green’s function, BENO consists of two branches of Graph Neural Networks (GNNs) for the interior source term and the boundary values, respectively. Furthermore, a Transformer encoder maps the global boundary geometry into a latent vector which influences each message passing layer of the GNNs. We test our model extensively on elliptic PDEs with various boundary conditions. We show that all existing baseline methods fail to learn the solution operator. In contrast, our model, endowed with a boundary-embedded architecture, outperforms state-of-the-art neural operators and strong baselines by an average of 60.96%. Our source code can be found at https://github.com/AI4Science-WestlakeU/beno.git

1 Introduction

Partial differential equations (PDEs), which include elliptic, parabolic, and hyperbolic types, play a fundamental role in diverse fields across science and engineering. For all types of PDEs, but especially for elliptic PDEs, the treatment of boundary conditions plays an important role in the solutions. In particular, the Laplace and Poisson equations constitute prime examples of linear elliptic PDEs, which are used in a wide range of disciplines, including solid mechanics (Rivière, 2008), plasma physics (Chen, 2016), and fluid dynamics (Hirsch, 2007). Recently, neural operators have emerged as a promising tool for solving elliptic PDEs by directly mapping inputs to solutions (Li et al., 2020b,c,a; Lötzsch et al., 2022). Their lower computational cost makes neural operators attractive compared with classical approaches like finite element methods (FEM) (Quarteroni & Valli, 2008) and finite difference methods (FDM) (Dimov et al., 2015). However, existing neural operators have not adequately considered the influence of boundary conditions on solving elliptic PDEs. A distinctive feature of elliptic PDEs is their sensitivity to boundary conditions, which can heavily influence the behavior of solutions. In fact, boundary conditions pose two major challenges for neural operators: inhomogeneous boundary values and complex boundary geometry.
First, inhomogeneous boundary conditions can cause severe fluctuations in the solution, and have a distinctive influence on the solution compared to the interior source terms. For example, as shown in Fig. 1, the inhomogeneous boundary values cause high-frequency fluctuations in the solution, especially near the boundary, which makes the solution extremely hard to learn. Second, since elliptic PDEs are boundary value problems whose solutions describe the steady state of the system, any variation in the boundary geometry and values influences the interior solution globally (Hirsch, 2007). These challenges must be properly addressed to develop a neural operator suitable for more general and realistic settings.

Figure 1: Examples of different geometries for the elliptic PDEs: (a) forcing terms and (b) solutions. The nodes in the red-orange color map represent the complex, inhomogeneous boundary values: the redder the area, the higher the boundary value; the more orange the area, the lower the boundary value.

In this paper, we propose Boundary-Embedded Neural Operators (BENO), a novel neural operator architecture to address the above two key challenges. Inspired by the classical Green’s function, BENO consists of two Graph Neural Networks (GNNs) that model the boundary influence and the interior source terms, respectively, addressing the first challenge. Moreover, to model the global influence of the boundary on the solution, we employ a Transformer (Vaswani et al., 2017) to encode the full boundary information into a latent vector and feed it to each message passing layer of the GNNs. This captures how the global geometry and values of the boundary influence the pairwise interaction between interior points, addressing the second challenge. As a whole, BENO provides a simple architecture for solving elliptic PDEs with complex boundary conditions, incorporating physics intuition into its boundary-embedded architecture. In Table 1, we provide a comparison between BENO and prior deep learning methods for elliptic PDE solving.

| Methods | 1. PDE-agnostic prediction on new initial condition | 2. Train/Test space grid independence | 3. Evaluation at unobserved spatial locations | 4. Free-form spatial domain for boundary shape | 5. Inhomogeneous boundary condition value |
|---------|---------------------------------------------------|--------------------------------------|---------------------------------------------|-----------------------------------------------|------------------------------------------|
| GKN | ✔ | ✔ | ✔ | ✗ | ✗ |
| FNO | ✗ | ✗ | ✗ | ✔ | ✗ |
| GNN-PDE | ✔ | ✔ | ✗ | ✔ | ✗ |
| MP-PDE | ✗ | ✗ | ✗ | ✔ | ✗ |
| BENO (ours) | ✔ | ✔ | ✔ | ✔ | ✔ |

To fully evaluate our model on inhomogeneous boundary value problems, we construct a novel dataset encompassing various boundary shapes, different boundary values, different types of boundary conditions, and varying resolutions. The experimental results demonstrate that our approach not only outperforms the existing state-of-the-art methods by an average of 60.96% in solving elliptic PDE problems but also exhibits excellent generalization capabilities in other scenarios. In contrast, all existing baselines fail to learn solution operators for the above challenging elliptic PDEs.

2 Problem Setup

In this work, we consider the solution of elliptic PDEs in a compact domain subject to inhomogeneous boundary conditions along the domain boundary.
Let $u$ be a sufficiently differentiable function defined over an open domain $\Omega \subset \mathbb{R}^d$, discretized into $N$ interior grid nodes. Specifically, we consider the Poisson equation with Dirichlet (and, in Appendix K, Neumann) boundary conditions in a $d$-dimensional domain, and we take $d = 2$ in the following experiments:

$$\nabla^2 u ([x_1, x_2, \ldots, x_d]) = f ([x_1, x_2, \ldots, x_d]), \quad \forall ([x_1, x_2, \ldots, x_d]) \in \Omega,$$
$$u ([x_1, x_2, \ldots, x_d]) = g ([x_1, x_2, \ldots, x_d]), \quad \forall ([x_1, x_2, \ldots, x_d]) \in \partial \Omega,$$
(1)

where \( f \) and \( g \) are sufficiently smooth functions defined on the domain \( \Omega = \{(x_{1,i}, x_{2,i}, \ldots, x_{d,i})\}_{i=1}^{N} \) and the boundary \( \partial \Omega \), respectively. Eq. (1) is utilized in a range of applications in science and engineering to describe the equilibrium state driven by \( f \) in the presence of time-independent boundary constraints specified by \( g \). A distinctive feature of elliptic PDEs is their sensitivity to the boundary values \( g \) and shape \( \partial \Omega \), which can heavily influence the behavior of their solutions. Appropriate boundary conditions must often be carefully prescribed to ensure well-posedness of elliptic boundary value problems.

3 Method

In this section, we detail our method BENO. We first motivate our method using the Green’s function, a classical approach to solving elliptic boundary value problems, in Section 3.1. We then introduce our graph construction method in Section 3.2. Inspired by the Green’s function, we introduce BENO’s architecture in Section 3.3.

3.1 Motivation

How to facilitate boundary-interior interaction? To design the boundary-embedded message passing neural network, we draw inspiration from the classical Green’s function method (Stakgold & Holst, 2011). Take the Poisson equation with Dirichlet boundary conditions as an example. Suppose the Green’s function is \( G : \Omega \times \Omega \rightarrow \mathbb{R} \), the solution of the corresponding equation

\[
\nabla^2 G = \delta(x - x_0)\delta(y - y_0), \qquad G|_{\partial \Omega} = 0.
\]
(2)

Based on Eq. (2) and the detailed derivation of the Green’s function formula in Appendix A, we can write the solution in the following form:

\[
u(x, y) = \int_{\Omega} G(x, y, x_0, y_0)f(x_0, y_0)d\sigma_0 - \int_{\partial \Omega} g(x_0, y_0)\frac{\partial G(x, y, x_0, y_0)}{\partial n_0}dl_0
\]
(3)

Motivated by the two terms in Eq. (3), our objective is to approach boundary embedding by extending the Green’s function. Following the mainstream line of work that uses GNNs as surrogate models (Pfaff et al., 2020; Eliasof et al., 2021; Lötzsch et al., 2022), we exploit the graph network simulator (Sanchez-Gonzalez et al., 2020) as the backbone to mimic the Green’s function, and add the boundary embedding to the node update in the message passing. Moreover, to decouple the learning of the boundary and the interior, we adopt a dual-branch network structure, where one branch sets the boundary value \( g \) to 0 to learn only the structural information of the interior nodes, and the other branch sets the source term \( f \) of the interior nodes to 0 to learn only the structural information of the boundary.
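The linearity underlying this decoupling (formalized as Eq. (4) below) can be checked numerically: the solution for data $(f, g)$ is the sum of the solutions for $(f, 0)$ and $(0, g)$. The following minimal 5-point finite-difference sketch verifies this; it is purely illustrative and is not the FVM solver used to generate the datasets (Appendix B).

```python
import numpy as np

def solve_poisson(f, g_full, n):
    """5-point finite-difference solve of lap(u) = f on the n x n interior
    nodes of the unit square; Dirichlet data is read off the boundary
    entries of g_full, an (n+2) x (n+2) array."""
    h = 1.0 / (n + 1)
    A = np.zeros((n * n, n * n))
    b = h * h * f.astype(float).ravel()
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = 1.0
                else:  # neighbour lies on the boundary: move g to the RHS
                    b[k] -= g_full[ii + 1, jj + 1]
    return np.linalg.solve(A, b).reshape(n, n)

# Superposition check: u(f, g) == u(f, 0) + u(0, g).
rng = np.random.default_rng(0)
n = 16
f = rng.standard_normal((n, n))
g = np.zeros((n + 2, n + 2))
g[0, :], g[-1, :] = rng.standard_normal(n + 2), rng.standard_normal(n + 2)
u_fg = solve_poisson(f, g, n)
u_sum = solve_poisson(f, np.zeros_like(g), n) + solve_poisson(np.zeros((n, n)), g, n)
assert np.allclose(u_fg, u_sum)  # the two branches add up
```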
The Poisson problem can then be disentangled into two parts, writing $u = u_1 + u_2$ with

\[
\nabla^2 u_1 = f \ \text{in} \ \Omega, \quad u_1 = 0 \ \text{on} \ \partial\Omega; \qquad \nabla^2 u_2 = 0 \ \text{in} \ \Omega, \quad u_2 = g \ \text{on} \ \partial\Omega.
\]
(4)

Therefore, our BENO uses a dual-branch design and builds two different types of edges on the same graph separately. Branch 1 considers the effects of the interior nodes, and Branch 2 focuses solely on propagating the relationship between the boundary values and the interior nodes through the graph. Finally, we aggregate the two branches to obtain a more accurate solution under complex boundary conditions.

How to embed the boundary? Since boundary conditions are crucially important for solving PDEs, how to embed the boundary information into the neural network is key to our design. In a pilot study, we found that directly concatenating the interior node information with the boundary information fails to solve elliptic PDEs and tends to cause severe over-fitting. Therefore, we propose to embed the boundary so as to represent its global information for further fusion. In recent years, the Transformer (Vaswani et al., 2017) has been widely adopted due to its global receptive field. By leveraging its attention mechanism, the Transformer can effectively capture long-range dependencies and interactions within the boundary nodes. This is particularly advantageous when dealing with complex boundary conditions (i.e., irregular shapes and inhomogeneous boundary values), as it allows for the modeling of complex relationships between boundary points and the interior solution.

3.2 Graph Construction

Figure 2: Visualization of the graph construction on train/test samples from the 5 different corner elliptic datasets. The interior nodes are in black and the boundary nodes in purple.

Before designing our method, an important step is to construct the graph \( G = (V, E) \) with the finite discrete interior nodes as the node set \( V \) on the PDE’s solution domain \( \Omega \). In traditional solution methods such as FEM, the solution domain is first constructed by triangulating the mesh graph (Bern & Eppstein, 1995; Ho-Le, 1988), followed by the subsequent solving process. Therefore, the first step is to apply Delaunay triangulation (Lee & Schachter, 1980) to construct a mesh graph with edge set \( E_{mesh} \), in which each cell consists of three edges. Then we construct the edge set \( E_{kn} \) by selecting the \( K \)-nearest nodes for each individual node, where \( K \) is the number of neighboring nodes that we deem closely connected based on the Euclidean distance \( D_{ij} \) between nodes \( i \) and \( j \). The final edge set is \( E = E_{mesh} \cup E_{kn} \). Examples of graph construction are shown in Fig. 2, and a minimal construction sketch follows.
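The edge-set construction can be sketched with SciPy as follows; the node layout and the value of $K$ are illustrative, and BENO's actual preprocessing may differ in details:

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def build_edges(points, k=4):
    """Edge set E = E_mesh (Delaunay triangulation) U E_kn (K-nearest nodes)."""
    edges = set()
    # E_mesh: every pair of vertices sharing a triangle is connected.
    for tri in Delaunay(points).simplices:
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = sorted((int(tri[a]), int(tri[b])))
                edges.add((i, j))
    # E_kn: connect each node to its k nearest neighbours (Euclidean distance).
    _, nbrs = cKDTree(points).query(points, k=k + 1)  # first hit is the node itself
    for i, row in enumerate(nbrs):
        for j in row[1:]:
            edges.add(tuple(sorted((i, int(j)))))
    return np.array(sorted(edges))  # (|E|, 2), undirected

pts = np.random.default_rng(0).random((64, 2))  # stand-in interior node coordinates
print(build_edges(pts, k=4).shape)
```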
3.3 Overall Architecture

In this section, we introduce the detailed architecture of our proposed BENO, as shown in Figure 3. The overall neural operator is divided into two branches, with each branch receiving different graph information and boundary data; the operator architecture, however, remains the same, consisting of an encoder, a boundary-embedded message passing neural network, and a decoder. We therefore describe only this common architecture.

3.3.1 Encoder & Decoder

**Encoder.** The encoder computes node and edge embeddings. For each node \( i \), the node encoder \( e^v \) maps the node coordinates \( p_i = (x_i, y_i) \), the forcing term \( f_i \), and the distances to the boundary \( dx_i, dy_i \) to a node embedding vector \( v_i = e^v([x_i, y_i, f_i, dx_i, dy_i]) \in \mathbb{R}^D \) in a high-dimensional space. The same mapping is applied to the edge attributes with an edge encoder \( e^e \), yielding edge embedding vectors \( e_{ij} \). For both the node and edge encoders \( e \), we use a two-layer Multi-Layer Perceptron (MLP) (Murtagh, 1991) with Sigmoid Linear Unit (SiLU) activation (Elfwing et al., 2018).

**Decoder.** We use a two-layer MLP to map the features to solutions. Given our dual-branch architecture, we add the outputs of the two decoders to obtain the final predicted solution \( \hat{u} \).

3.3.2 Boundary-Embedded Message Passing Neural Network (BE-MPNN)

To address the inherent differences in physical properties between boundary and interior nodes, we opt not to directly merge these distinct sources of information into a single network representation. Instead, we first employ the Transformer to specifically embed the boundary nodes; the resulting boundary information is then incorporated into the graph message passing processor. We explain these two components separately.

**Embedding Boundary with Transformer.** With the boundary node coordinates \( p^B = (x^B, y^B) \), the boundary value \( g \), and the distance to the geometric center of the solution domain \( dc \) as input features, we first apply a position embedding to include relative position relationships, giving the initial representation \( H^B_0 \); a Transformer encoder with \( L \) layers then embeds the boundary information into \( H^B \). The resulting boundary features, denoted as \( B \), are obtained by applying global average pooling (Lin et al., 2013) to the encoder outputs \( H^B \). Each self-attention layer applies multi-head self-attention and a feed-forward neural network to its input. The output of the \( i \)-th self-attention layer is denoted as \( H^B_i \). The self-attention mechanism calculates the attention weights \( A_i \) as follows:

\[
A_i = \text{Softmax}\left(\frac{Q_i H^B_i (K_i H^B_i)^T}{\sqrt{d_k}}\right)
\]
(5)

Figure 3: Overall architecture of our proposed BENO. The pink branch corresponds to the first term in Eq. (3), and the green branch corresponds to the second term. As the backbone of the boundary embedding, the Transformer provides boundary information as a supplement for the BE-MPNN, thereby enabling better prediction under complex boundary geometry and inhomogeneous boundary values.

where $Q_i$, $K_i$, and $V_i$ are learnable projection matrices, and $d_k$ is the dimension of the key vectors. The attention output is computed with a residual connection as:

$$H_{i+1}^B = \text{LayerNorm}\left(A_i V_i H_i^B + H_i^B\right)$$
(6)

where LayerNorm denotes layer normalization, which helps to mitigate the problem of internal covariate shift. After passing through the $L$ self-attention layers, the output $H^B$ is subject to global average pooling to obtain the boundary features: $B = \text{AvgPool}(H^B)$.

**Boundary-Embedded Message Passing Processor.** The processor computes $T$ steps of message passing, with intermediate graph representations $G_1, \ldots, G_T$ and boundary representations $B_1, \ldots, B_T$. The message $m_{ij}^t$ at step $t$ in our processor is formed by:

$$m_{ij}^t = \text{MLPs}(v_i^t, v_j^t, e_{ij}^t, p_i - p_j)$$
(7)

where $m_{ij}^t$ represents the message sent from node $j$ to node $i$, and $p_i - p_j$ is the relative position, which enhances equivariance by respecting the symmetry of the PDEs.
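To make the processor concrete, the following minimal PyTorch sketch implements the message of Eq. (7) together with the boundary-conditioned node update and the edge update that Eqs. (8)–(9) below formalize. The widths and exact MLP shapes are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

D = 64  # hidden width (illustrative)

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, D), nn.SiLU(), nn.Linear(D, d_out))

class BEMPNNStep(nn.Module):
    """One boundary-embedded message-passing step (sketch of Eqs. (7)-(9))."""
    def __init__(self):
        super().__init__()
        self.msg_mlp = mlp(3 * D + 2, D)   # [v_i, v_j, e_ij, p_i - p_j]
        self.node_mlp = mlp(3 * D, D)      # [v_i, B, aggregated messages]
        self.edge_mlp = mlp(2 * D, D)      # [e_ij, m_ij]

    def forward(self, v, e, pos, edge_index, B):
        # v: (N, D) nodes; e: (E, D) edges; pos: (N, 2) coordinates;
        # edge_index: (2, E) with messages sent j -> i; B: (D,) pooled boundary.
        src, dst = edge_index  # j, i
        m = self.msg_mlp(torch.cat([v[dst], v[src], e, pos[dst] - pos[src]], -1))
        agg = torch.zeros_like(v).index_add_(0, dst, m)  # sum over neighbours N(i)
        v = self.node_mlp(torch.cat([v, B.expand(len(v), -1), agg], -1))
        e = self.edge_mlp(torch.cat([e, m], -1))
        return v, e

step = BEMPNNStep()
v, e = torch.randn(10, D), torch.randn(30, D)
v, e = step(v, e, torch.rand(10, 2), torch.randint(0, 10, (2, 30)), torch.randn(D))
```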
Then we update the node feature $v_i^t$ and the edge feature $e_{ij}^t$ as follows:

$$v_i^{t+1} = \text{MLPs}\left(v_i^t, B^t, \sum_{j \in N(i)} m_{ij}^t\right),$$
(8)

$$e_{ij}^{t+1} = \text{MLPs}\left(e_{ij}^t, m_{ij}^t\right)$$
(9)

Here, the boundary information $B^t$ is embedded into the node update, and $N(i)$ denotes the set of neighbors of node $i$.

**Learning objective.** Given the ground-truth solution $u$ and the predicted solution $\hat{u}$, we minimize the mean squared error (MSE) of the predicted solution on $\Omega$.

4 EXPERIMENTS

We aim to answer the following questions: (1) Compared with existing baselines, can BENO learn the solution operator for elliptic PDEs with complex geometry and inhomogeneous boundary values? (2) Can BENO generalize to out-of-distribution boundary geometries and boundary values, and different grid resolutions? (3) Are all components of BENO essential for its performance? We first introduce the experiment setup in Sec. 4.1, then answer the above three questions in the following three sections.

4.1 EXPERIMENT SETUP

**Datasets.** For the elliptic PDE simulations, we construct five different datasets with inhomogeneous boundary values, covering 4/3/2/1-corner squares and squares without corners. Each dataset consists of 1000 samples with randomly initialized boundary shapes and values, with 900 samples used for training and validation and 100 samples for testing. Each sample covers a grid of $32 \times 32$ nodes and 128 boundary nodes. To further assess model performance, higher-resolution versions of each data sample, such as $64 \times 64$, are also provided. Details on data generation are provided in Appendix C.

Table 2: Performances of our proposed BENO and the compared baselines, which are trained on 900 4-Corners samples and tested on 5 datasets under the relative L2 norm and MAE separately. The unit of the MAE metric is $1 \times 10^{-3}$. Bold fonts indicate the best performance.

| Test set | 4-Corners | 3-Corners | 2-Corners | 1-Corner | No-Corner |
|----------|-----------|-----------|-----------|----------|-----------|
| Metric | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE |
| GKN | 1.1146±0.3936 | 3.6497±1.1874 | 1.0692±0.2034 | 3.7059±0.9543 | 1.0673±0.1393 | 3.6822±0.9819 | 1.1063±0.1905 | 3.4898±0.9469 | 1.0728±0.2074 | 3.9551±0.9791 |
| FNO | 1.0947±0.3265 | 2.2707±0.3361 | 1.0742±0.3418 | 2.1657±0.3976 | 1.0672±0.3736 | 2.2617±0.2449 | 1.0921±0.2935 | 2.3922±0.3526 | 1.0762±0.4420 | 2.2281±0.4192 |
| GNN-PDE | 1.0026±0.0093 | 3.1410±0.8751 | 1.0009±0.0101 | 3.2812±0.8839 | 1.0015±0.0099 | 3.3557±0.8521 | 1.0002±0.0153 | 3.1421±0.8685 | 1.0011±0.0152 | 3.7561±0.10274 |
| MP-PDE | 1.0007±0.0677 | 3.1018±0.8431 | 1.0003±0.0841 | 3.2464±0.8049 | 0.9919±0.0699 | 3.2763±0.8632 | 0.9829±0.07199 | 3.0163±0.8272 | 0.9882±0.0683 | 3.6522±0.8961 |
| BENO (ours) | 0.3523±0.1245 | 0.9650±0.3131 | 0.4308±0.1994 | 1.2206±0.4978 | 0.4910±0.1888 | 1.4388±0.5227 | 0.5416±0.2133 | 1.4529±0.4626 | 0.5542±0.1952 | 1.7481±0.5394 |

**Baselines.** We adopt the two most mainstream families of neural PDE solvers as baselines: graph-based methods, including GKN (Li et al., 2020b), GNN-PDE (Lötzsch et al., 2022), and MP-PDE (Brandstetter et al., 2022); and operator-based methods, including FNO (Li et al., 2020a). For fair comparison and adaptation to the irregular boundary shapes in our datasets, all baselines are re-implemented with the same input as ours, including all interior and boundary node features.
Please refer to Appendix E for re-implementation details.

**Implementation Details.** All experiments are implemented with PyTorch (Paszke et al., 2019) and PyTorch-Geometric (Fey & Lenssen, 2019) and run on 2 × NVIDIA A100 GPUs (80G). Following Brandstetter et al. (2022), we also apply a graph message passing neural network as our backbone for all the datasets. We use the Adam (Kingma & Ba, 2014) optimizer with a weight decay of $5 \times 10^{-4}$ and a learning rate of $5 \times 10^{-5}$, obtained from a grid search, for all experiments. The relative L2 error measures the difference between the predicted and the ground-truth values, normalized by the magnitude of the ground truth. MAE measures the average absolute difference between the predicted values and the ground-truth values. Both metrics are sketched just below; please refer to Appendix D for more implementation details.
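A minimal sketch of the two metrics on a single sample (the averaging over test samples is an assumption about the reporting convention):

```python
import numpy as np

def relative_l2(u_hat, u):
    """Prediction error normalized by the magnitude of the ground truth."""
    return np.linalg.norm(u_hat - u) / np.linalg.norm(u)

def mae(u_hat, u):
    """Average absolute difference between prediction and ground truth."""
    return np.abs(u_hat - u).mean()

u, u_hat = np.random.rand(32 * 32), np.random.rand(32 * 32)
print(relative_l2(u_hat, u), mae(u_hat, u))  # averaged over test samples in practice
```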
4.2 Main Experimental Results

We first test whether our BENO has a strong capability to solve elliptic PDEs with varying shapes. Tables 2 and 3 summarize the results for the shape generalization task (more in Appendix H). From the results, we see that recent neural PDE solving methods (e.g., MP-PDE) fail overall to solve elliptic PDEs with inhomogeneous boundary values, let alone generalize to datasets with different boundary shapes. This indicates that existing neural solvers are insufficient for this type of boundary value problem. In contrast, from Table 2, we see that our proposed BENO, trained only on the 4-Corners dataset, consistently achieves a significant improvement and strong generalization capability over the previous methods by a large margin. More precisely, the improvements of BENO over the best baseline are 55.17%, 52.18%, 52.43%, 47.38%, and 52.94% in terms of the relative L2 norm when testing on the 4/3/2/1/No-Corner datasets, respectively. We attribute the remarkable performance to two factors: (i) BENO comprehensively leverages the boundary information and fuses it with the interior graph messages for solving; (ii) BENO integrates a dual-branch architecture to fully learn the boundary and interior in a decoupled way, and thus improves generalized solving performance. Similarly, from Table 3, we see that among the mixed-corner training results, BENO always achieves the best performance among the compared baselines when varying the test sets, which validates the consistent superiority of our BENO with respect to different boundary shapes.

Table 3: Performances of our proposed BENO and the compared baselines, which are trained on 900 mixed samples (180 samples each from the 5 datasets) and tested on 5 datasets under the relative L2 error and MAE separately. The unit of the MAE metric is $1 \times 10^{-3}$.

| Test set | 4-Corners | 3-Corners | 2-Corners | 1-Corner | No-Corner |
|----------|-----------|-----------|-----------|----------|-----------|
| Metric | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE |
| GKN | 1.0588±0.1713 | 3.5051±0.9401 | 1.0651±0.1562 | 3.7061±0.8563 | 1.0386±0.1271 | 3.6043±0.9392 | 1.0734±0.1621 | 3.4048±0.9519 | 1.0423±0.2102 | 3.901±0.9287 |
| FNO | 1.0834±0.0462 | 4.6401±0.5327 | 1.0937±0.0625 | 4.6092±0.6713 | 1.0672±0.0376 | 4.5267±0.5581 | 1.0735±0.0528 | 4.5027±0.5371 | 1.0713±0.0489 | 4.5783±0.5565 |
| GNN-PDE | 1.0009±0.0036 | 3.1311±0.8664 | 1.0003±0.0039 | 3.2781±0.8858 | 1.0005±0.0038 | 3.3518±0.8520 | 0.9999±0.0042 | 3.1422±0.8609 | 1.0002±0.0041 | 3.7528±1.0284 |
| MP-PDE | 1.0063±0.0735 | 3.1238±0.8502 | 1.0045±0.0923 | 3.2537±0.7867 | 0.9957±0.0772 | 3.2864±0.8607 | 0.9822±0.0802 | 3.0177±0.8363 | 0.9912±0.0781 | 3.6658±0.8949 |
| BENO (ours) | 0.4487±0.1750 | 1.2150±0.4213 | 0.4783±0.1938 | 1.3509±0.5432 | 0.4737±0.1979 | 1.3516±0.5374 | 0.5168±0.1793 | 1.3728±0.5148 | 0.4665±0.2001 | 1.4213±0.5262 |

Table 4: Performances of our BENO and the compared baselines, which are trained on 900 4-Corners samples and tested with zero-boundary-value samples. The unit of the MAE metric is $1 \times 10^{-3}$.

| Test set | 4-Corners | 3-Corners | 2-Corners | 1-Corner | No-Corner |
|----------|-----------|-----------|-----------|----------|-----------|
| Metric | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE |
| GNN-PDE | 0.7092±0.0584 | 0.1259±0.0755 | 0.7390±0.0483 | 0.2351±0.1013 | 0.7491±0.0485 | 0.3290±0.1371 | 0.7593±0.05269 | 0.4750±0.1582 | 0.7801±0.0371 | 0.6808±0.1692 |
| MP-PDE | 0.2598±0.1098 | 0.0459±0.0359 | 0.3148±0.0814 | 0.1066±0.0618 | 0.3729±0.0819 | 0.1778±0.0969 | 0.4634±0.0649 | 0.3049±0.1182 | 0.5458±0.0491 | 0.4924±0.1310 |
| BENO (ours) | 0.0908±0.07381 | 0.0142±0.0131 | 0.1031±0.0728 | 0.0288±0.0189 | 0.1652±0.1324 | 0.0583±0.0362 | 0.1783±0.1508 | 0.0862±0.0456 | 0.2441±0.1665 | 0.1622±0.0798 |

Additionally, we plot the visualizations of the best baseline and our proposed BENO trained on the 4-Corners dataset in Figure 4. It can be clearly observed that the predicted solution of BENO is close to the ground truth, while MP-PDE fails to learn any features of the solution. We observe similar behaviors for all other baselines.

Figure 4: Visualization of two samples’ predictions and prediction errors from the 4-Corners dataset. We render the solution $u$ of the baseline MP-PDE, our BENO, and the ground truth on $\Omega$.

### 4.3 Generalization Study

#### 4.3.1 Results on Different Boundary Values

To investigate the generalization ability with respect to boundary values, we again train the models on the 4-Corners dataset with inhomogeneous boundary values but use a test set with zero boundary values, which makes the boundary inhomogeneities totally different. Table 4 compares against the best baselines and summarizes the results. From the results, we see that BENO has a significant advantage, successfully reducing the L2 norm to around 0.1. In addition, our method outperforms the best baseline by approximately 60.96% in terms of performance improvement. This not only demonstrates BENO’s strong generalization ability regarding boundary values but also provides solid experimental evidence for the successful application of our elliptic PDE solver.
Table 5: Performances of our BENO and the compared baselines, which are trained on 900 4-Corners $32 \times 32$ samples and tested with $64 \times 64$ samples. The unit of the MAE metric is $1 \times 10^{-3}$.

| Test set | 4-Corners (64×64) | 3-Corners (64×64) | 2-Corners (64×64) | 1-Corner (64×64) | No-Corner (64×64) |
|----------|------------------|------------------|------------------|-----------------|-----------------|
| Metric | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE |
| MP-PDE | 0.6335±0.1009 | 0.0596±0.0418 | 0.7457±0.0738 | 0.1138±0.0533 | 0.7926±0.0505 | 0.1565±0.0596 | 0.8336±0.04467 | 0.2445±0.0915 | 0.8749±0.0298 | 0.3991±0.1045 |
| BENO (ours) | 0.4596±0.1094 | 0.0440±0.0349 | 0.5483±0.0987 | 0.0860±0.0466 | 0.6020±0.0842 | 0.1214±0.0537 | 0.6684±0.0794 | 0.1995±0.0851 | 0.7497±0.0653 | 0.3424±0.1000 |

4.3.2 Different Grid Resolutions

Data-driven PDE solvers often face limitations in terms of the scale of the training data, making the ability to generalize to higher resolutions a crucial metric. Table 5 provides a summary of our performance in the resolution generalization experiments. The model was trained on the 4-Corners homogeneous boundary value dataset at $32 \times 32$ resolution and tested with $64 \times 64$ samples not seen in training. The results demonstrate a significant advantage of our method over MP-PDE, with an improvement of approximately 22.46%. We attribute this advantage in generalization to two main factors. First, it stems from the inherent capability of GNNs to process input graphs of various sizes. Second, it is due to our incorporation of relative positions as part of the network’s edge features. Consequently, our approach can be deployed at different resolutions using the same setup.

4.4 Ablation Study

To investigate the effectiveness of the inner components of BENO, we study four variants of BENO. Table 6 reports the ablation results, with all variants trained on the 4-Corners dataset.

Table 6: Ablation study of our BENO. The unit of the MAE metric is $1 \times 10^{-3}$.

| Test set | 4-Corners | 3-Corners | 2-Corners | 1-Corner | No-Corner |
|----------|-----------|-----------|-----------|----------|-----------|
| Metric | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE | L2 | MAE |
| BENO w. M | 1.0130±0.0858 | 3.1436±2.8667 | 1.0159±0.0975 | 3.3041±0.7906 | 0.9999±0.0792 | 3.3007±0.8504 | 1.0026±0.0840 | 3.0842±0.8202 | 0.9979±0.0858 | 3.6832±0.8970 |
| BENO w/o. D | 0.4058±0.1374 | 1.1175±0.3660 | 0.4850±0.2230 | 1.3810±0.6068 | 0.5273±0.1750 | 1.5439±0.4774 | 0.5795±0.1981 | 1.5683±0.4670 | 0.5835±0.2232 | 1.8382±0.5771 |
| BENO w. E | 0.4113±0.1236 | 1.2020±0.4048 | 0.4624±0.2102 | 1.3569±0.5453 | 0.5347±0.1985 | 1.5990±0.5604 | 0.5891±0.2129 | 1.6222±0.2016 | 0.5843±0.2016 | 1.8790±0.5952 |
| BENO w. G | 0.9037±0.1104 | 2.6795±0.5332 | 0.8807±0.1298 | 2.6992±0.6118 | 0.8928±0.1208 | 2.8235±0.5892 | 0.8849±0.1462 | 2.561±0.5085 | 0.8721±0.1569 | 2.9851±0.5591 |
| BENO (ours) | 0.3523±0.1245 | 0.9650±0.3131 | 0.4308±0.1994 | 1.2206±0.4978 | 0.4910±0.1888 | 1.4388±0.5227 | 0.5416±0.2133 | 1.4529±0.4626 | 0.5542±0.1952 | 1.7481±0.5394 |

Firstly, BENO w. M replaces the BE-MPNN with a vanilla message passing neural network (Gilmer et al., 2017) and merely keeps the interior node features. Secondly, BENO w/o. D
removes the dual-branch structure of BENO and uses a single Encoder-BE-MPNN-Decoder pipeline. Thirdly, BENO w. E additionally feeds the Transformer output into the edge message passing. Finally, BENO w. G replaces the Transformer architecture with a vanilla graph convolutional network (Kipf & Welling, 2016). From the results we draw the following conclusions. Firstly, BENO w. M performs significantly worse than ours, which indicates the importance of fusing interior and boundary information in BENO. Secondly, comparing the results of BENO w/o. D with ours, we conclude that decoupled learning of the interior and the boundary is effective. Thirdly, comparing the results of BENO w. E and ours, we find that boundary information helps only in node-level message passing; in other words, directly injecting the global boundary information into the edges is not suitable. Finally, comparing the results of BENO w. G with ours validates that the Transformer design for boundary embedding is crucial.

5 RELATED WORK

5.1 CLASSIC ELLIPTIC PDE SOLVERS

The classical numerical solution of elliptic PDEs approximates the domain $\Omega$ and its boundary $\partial \Omega$ in Eq. (1) using a finite number of non-overlapping partitions. The solution to Eq. (1) is then approximated over these partitions. A variety of strategies are available for computing this discrete solution. Popular approaches include the finite volume method (FVM) (Hirsch, 2007), the finite element method (FEM) (Hughes, 2012), and the finite difference method (FDM) (LeVeque, 2007). In the present work we utilize the FVM to generate the dataset, as it can easily accommodate complex boundary shapes. This approach partitions the domain into cells, and the boundary is specified using cell interfaces. After numerically approximating the operator $\nabla^2$ over these cells, the numerical solution is obtained at the centers of the cells constituting our domain. Further details are provided in Appendix B.

5.2 GNN FOR PDE SOLVER

GNNs were initially applied in physics-based simulations of solids and fluids represented by particles (Sanchez-Gonzalez et al., 2018). Recently, an important advancement, MeshGraphNets (Pfaff et al., 2020), emerged to learn mesh-based simulations. Subsequently, several variations have been proposed, including techniques for accelerating finer-level simulations by utilizing GNNs (Belbute-Peres et al., 2020; Yang & Hong, 2022), combining GNNs with Physics-Informed Neural Networks (PINNs) (Gao et al., 2022), solving inverse problems with GNNs and autodecoder-style priors (Zhao et al., 2022), and handling temporal distribution shift (Luo et al., 2023). However, research focusing on boundary issues is limited. T-FEN (Lienen & Günnemann, 2022), FEONet (Lee et al., 2023), VQGraph (Yang et al., 2024), and GNN-PDE (Lötzsch et al., 2022) are pioneering efforts in this regard, encompassing complex domains and various boundary shapes. Nevertheless, the boundary values in these works are still set to zero, which does not account for the presence of inhomogeneous boundary values. This discrepancy is precisely the problem that we aim to address.

5.3 NEURAL OPERATOR AS PDE SOLVER

Neural operators map initial/boundary conditions to solutions through supervised learning in a mesh-invariant manner. Prominent examples of neural operators include the Fourier neural operator (FNO) (Li et al., 2020a), the graph neural operator (Li et al., 2020b), and DeepONet (Lu et al., 2019).
Neural operators exhibit invariance to discretization, making them highly suitable for solving PDEs. Moreover, neural operators enable the learning of operator mappings between infinite-dimensional function spaces. Subsequently, further variations have been proposed, including techniques for solving PDEs on arbitrary geometries with both computational efficiency and flexibility (Li et al., 2022), enabling deeper stacks of Fourier layers by independently applying transformations (Tran et al., 2021), utilizing Fourier layers as a replacement for spatial self-attention (Guibas et al., 2021), facilitating boundary condition satisfaction in neural operators by implementing structural modifications to the operator kernel (Saad et al., 2022), and incorporating symmetries in the physical domain using group theory (Helwig et al., 2023). Gupta et al. (2021; 2022) and Xiao et al. (2023) continue to improve the design of the operator by introducing novel methods for numerical computation.

6 CONCLUSION

In this work, we have proposed Boundary-Embedded Neural Operators (BENO), a neural operator architecture to address the challenges posed by inhomogeneous boundary conditions with complex boundary geometry in solving elliptic PDEs. Our approach incorporates physics intuition through a boundary-embedded architecture, consisting of GNNs and a Transformer, to model the influence of boundary conditions on the solution. By constructing a diverse dataset with various boundary shapes, values, and resolutions, we have demonstrated the effectiveness of our approach in outperforming existing state-of-the-art methods by an average of 60.96% in solving elliptic PDE problems. Furthermore, our method exhibits strong generalization capabilities across different scenarios. The development of BENO opens up new possibilities for efficiently and accurately solving elliptic PDEs with complex boundary conditions, making them more useful to various scientific and engineering fields.

ACKNOWLEDGEMENT

We gratefully acknowledge the support of the Westlake University Research Center for Industries of the Future, and the Westlake University Center for High-performance Computing.

REFERENCES

Filipe De Avila Belbute-Peres, Thomas Economon, and Zico Kolter. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. In *International Conference on Machine Learning*, pp. 2402–2411. PMLR, 2020.

Marshall Bern and David Eppstein. Mesh generation and optimal triangulation. In *Computing in Euclidean geometry*, pp. 47–123. World Scientific, 1995.

Johannes Brandstetter, Daniel Worrall, and Max Welling. Message passing neural PDE solvers. *arXiv preprint arXiv:2202.03376*, 2022.

Francis F Chen. *Introduction to Plasma Physics and Controlled Fusion (3rd Ed.)*. Springer, 2016.

Ivan Dimov, István Faragó, and Lubin Vulkov. *Finite difference methods, theory and applications*. Springer, 2015.

Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. *Neural Networks*, 107:3–11, 2018.

Moshe Eliasof, Eldad Haber, and Eran Treister. PDE-GCN: Novel architectures for graph neural networks motivated by partial differential equations. *Advances in Neural Information Processing Systems*, 34:3836–3849, 2021.

Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric, 2019.

Han Gao, Matthew J Zahr, and Jian-Xun Wang.
Physics-informed graph neural Galerkin networks: A unified framework for solving PDE-governed forward and inverse problems. *Computer Methods in Applied Mechanics and Engineering*, 390:114502, 2022.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *International Conference on Machine Learning*, pp. 1263–1272. PMLR, 2017.

John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: Efficient token mixers for transformers. *arXiv preprint arXiv:2111.13587*, 2021.

Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. *Advances in Neural Information Processing Systems*, 34:24048–24062, 2021.

Gaurav Gupta, Xiongye Xiao, Radu Balan, and Paul Bogdan. Non-linear operator approximations for initial value problems. In *International Conference on Learning Representations*, 2022.

Jacob Helwig, Xuan Zhang, Cong Fu, Jerry Kurtin, Stephan Wojtowytsch, and Shuiwang Ji. Group equivariant Fourier neural operators for partial differential equations. *arXiv preprint arXiv:2306.05697*, 2023.

C. Hirsch. *Numerical computation of internal and external flows: The fundamentals of computational fluid dynamics*. Elsevier, 2007.

K Ho-Le. Finite element mesh generation methods: a review and classification. *Computer-Aided Design*, 20(1):27–38, 1988.

T. J. R. Hughes. *The finite element method: linear static and dynamic finite element analysis*. Courier Corporation, 2012.
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations Tianyu Guo¹ Wei Hu² Song Mei¹ Huan Wang³ Caiming Xiong³ Silvio Savarese³ Yu Bai³ ¹UC Berkeley ²University of Michigan ³Salesforce AI Research tianyu_guo@berkeley.edu Abstract While large language models based on the transformer architecture have demonstrated remarkable in-context learning (ICL) capabilities, understandings of such capabilities are still in an early stage, where existing theory and mechanistic understanding focus mostly on simple scenarios such as learning simple function classes. This paper takes initial steps on understanding ICL in more complex scenarios, by studying learning with representations. Concretely, we construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function (which we instantiate as multi-layer MLPs), composed with a linear function that differs in each instance. By construction, the optimal ICL algorithm first transforms the inputs by the representation function, and then performs linear ICL on top of the transformed dataset. We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size. Empirically, we find trained transformers consistently achieve near-optimal ICL performance in this setting, and exhibit the desired dissection where lower layers transform the dataset and upper layers perform linear ICL. Through extensive probing and a new pasting experiment, we further reveal several mechanisms within the trained transformers, such as concrete copying behaviors on both the inputs and the representations, linear ICL capability of the upper layers alone, and a post-ICL representation selection mechanism in a harder mixture setting. These observed mechanisms align well with our theory and may shed light on how transformers perform ICL in more realistic scenarios. 1 Introduction Large language models based on the transformer architecture have demonstrated remarkable in-context learning (ICL) capabilities (Brown et al., 2020), where they can solve newly encountered tasks when prompted with only a few training examples, without any parameter update to the model. Recent state-of-the-art models further achieve impressive performance in context on sophisticated real-world tasks (OpenAI, 2023; Bubeck et al., 2023; Touvron et al., 2023). Such remarkable capabilities call for better understandings, which recent work tackles from various angles (Xie et al., 2021; Chan et al., 2022; Razeghi et al., 2022; Min et al., 2022; Olsson et al., 2022; Wei et al., 2023). A recent surge of work investigates ICL in a theoretically amenable setting where the context consists of real-valued (input, label) pairs generated from a certain function class. They find that transformers can learn many function classes in context, such as linear functions, shallow neural networks, and decision trees (Garg et al., 2022; Akyürek et al., 2022; Li et al., 2023a), and further studies provide theoretical justification on how transformers can implement and learn various learning algorithms in-context such as ridge regression (Akyürek et al., 2022), gradient descent (von Oswald et al., 2022; Dai et al., 2022; Zhang et al., 2023a; Ahn et al., 2023), algorithm selection (Bai et al., 2023), and Bayes model averaging (Zhang et al., 2023b), to name a few. 
Despite the progress, an insufficiency of this line of work is that the settings and results may not actually resemble ICL in real-world scenarios. For example, ICL on linear function classes is well understood in theory with efficient transformer constructions (Bai et al., 2023), and transformers indeed learn them well empirically (Garg et al., 2022); however, such linear functions of the raw input may fail to capture real-world scenarios where prior knowledge can often aid learning. This paper takes initial steps towards addressing this by studying ICL in the setting of learning with representations, a more complex and perhaps more realistic setting than existing ones. We construct synthetic ICL tasks where labels depend on inputs through a fixed representation function composed with a varying linear function. We instantiate the representation as shallow neural networks (MLPs), and consider both a supervised learning setting (with input-label pairs) and a dynamical systems setting (with inputs only) for the in-context data. Our contributions can be summarized as follows.

- Theoretically, we construct transformers that implement in-context ridge regression on the representations (which includes the Bayes-optimal algorithm) for both learning settings (Section 4). Our transformer constructions admit mild sizes, and can predict at every token using a decoder architecture, (non-trivially) generalizing existing efficient constructions that predict at the last token only using an encoder architecture.
- Empirically, using $L$-layer MLPs as representations, we find that trained small transformers consistently achieve near-optimal ICL risk in both learning settings (Section 5 & Figure 1b).
- Using linear probing techniques, we identify evidence for various mechanisms in the trained transformers. Our high-level finding is that the lower layers transform the data by the representation and prepare it into a certain format, and the upper layers perform linear ICL on top of the transformed data (Figure 1c), often with a clear dissection between these two modules, consistent with our theory. See Figure 1a for a pictorial illustration.
- We further observe several lower-level behaviors using linear probes that align well with our (and existing) theoretical constructions, such as copying (of both the inputs and the representations) where the tokens being copied are precisely identifiable (Section 5.2), and a post-ICL representation selection mechanism in a harder setting (Section 5.1.1 & Appendix E).
- We perform a new pasting experiment and find that the upper layers within the trained transformer can perform nearly-optimal linear ICL in (near-)isolation (Section 5.1), which provides stronger evidence that the upper module alone can be a strong linear ICL learner.

2 RELATED WORK

In-context learning The in-context learning (ICL) capabilities of pretrained transformers have gained significant attention since first demonstrated with GPT-3 (Brown et al., 2020). Subsequent empirical studies have investigated the capabilities and limitations of ICL in large language models (Liu et al., 2021; Min et al., 2021a;b; Lu et al., 2021; Zhao et al., 2021; Rubin et al., 2021; Razeghi et al., 2022; Elhage et al., 2021; Kirsch et al., 2022; Wei et al., 2023).
A line of recent work investigates why and how pretrained transformers perform ICL from a theoretical perspective (Garg et al., 2022; Li et al., 2023a; von Oswald et al., 2022; Akyürek et al., 2022; Xie et al., 2021; Bai et al., 2023; Zhang et al., 2023a;b; Ahn et al., 2023; Raventós et al., 2023). In particular, Xie et al. (2021) proposed a Bayesian inference framework explaining ICL. Garg et al. (2022) showed transformers could be trained from scratch for ICL of simple function classes. Other studies found transformers can implement ICL through in-context gradient descent (von Oswald et al., 2022; Akyürek et al., 2022) and in-context algorithm selection (Bai et al., 2023). Zhang et al. (2023a) studied the training dynamics of a single attention layer on linear ICL tasks. Li et al. (2023b) used the ICL framework to explain chain-of-thought reasoning (Wei et al., 2022). Our work builds on and extends the work of Garg et al. (2022); Akyürek et al. (2022); von Oswald et al. (2022); Bai et al. (2023): we study the more challenging setting of ICL with a representation function, and also provide new efficient ICL constructions for predicting at every token using a decoder transformer, as opposed to predicting only at the last token as in most of these works.

**In-weights learning versus in-context learning** Recent work has investigated when transformers learn a fixed input-label mapping versus when they perform ICL (Chan et al., 2022; Wei et al., 2023; Bietti et al., 2023). Chan et al. (2022) refer to learning a fixed input-label mapping from the pre-training data as “in-weights learning” (IWL), in contrast with ICL. Our problem setting assumes the pre-training data admits a fixed representation function, which should be learned by IWL. From this perspective, unlike these existing works where IWL and ICL are typically treated as competing mechanisms, we study a model in which IWL (computing the fixed representation by transformer weights) and ICL (learning the changing linear function in context) occur simultaneously.

**Mechanistic understanding and probing techniques** A line of work focuses on developing techniques for understanding the mechanisms of neural networks, in particular transformers (Alain & Bengio, 2016; Geiger et al., 2021; Meng et al., 2022; von Oswald et al., 2022; Akyürek et al., 2022; Wang et al., 2022; Räuker et al., 2023). We adopt the linear probing technique of Alain & Bengio (2016) in a token-wise fashion for interpreting the ICL mechanisms of transformers. Beyond probing, more convincing mechanistic interpretations may require advanced approaches such as causal intervention (Geiger et al., 2021; Vig et al., 2020; Wang et al., 2022); our pasting experiment has a similar interventional flavor in that we feed input sequences (ICL instances) from another distribution directly (through a trainable embedding layer) to the upper module of a transformer.

### 3 Preliminaries

**Transformers** We consider sequence-to-sequence functions applied to $N$ input vectors $\{h_i\}_{i=1}^N \subset \mathbb{R}^{D_{\text{hid}}}$ in $D_{\text{hid}}$ dimensions, which we write compactly as an input matrix $H = [h_1, \ldots, h_N] \in \mathbb{R}^{D_{\text{hid}} \times N}$, where each $h_i$ is a column of $H$ (also a *token*). We use a standard $L$-layer decoder-only (autoregressive) transformer, which consists of $L$ consecutive blocks, each with a masked self-attention layer (henceforth “attention layer”) followed by an MLP layer.
Each attention layer computes

$$\text{Attn}_\theta(H) := H + \sum_{m=1}^M (V_m H) \, \overline{\sigma}\big(\text{MSK} \odot ((Q_m H)^T (K_m H))\big) \in \mathbb{R}^{D_{\text{hid}} \times N},$$

where $\theta = \{(Q_m, K_m, V_m)\}_{m \in [M]}$ with $Q_m, K_m, V_m \in \mathbb{R}^{D_{\text{hid}} \times D_{\text{hid}}}$ are the (query, key, value) matrices, $M$ is the number of heads, $\text{MSK} \in \mathbb{R}^{N \times N}$ is the decoder mask matrix with $\text{MSK}_{ij} = 1\{i \leq j\}$, and $\overline{\sigma}$ is the activation function, typically chosen as the (column-wise) softmax: $[\overline{\sigma}(A)]_{j} = \text{softmax}(a_j) \in \mathbb{R}^N$ for $A = [a_1, \ldots, a_N] \in \mathbb{R}^{N \times N}$. Each MLP layer computes

$$\text{MLP}_{W_1, W_2}(H) := H + W_2 \sigma(W_1 H),$$

where $W_1, W_2 \in \mathbb{R}^{D_{\text{hid}} \times D_{\text{hid}}}$ are the weight matrices, and $\sigma(t) = \max\{t, 0\}$ is the ReLU activation. We use $\text{TF}$ to denote a transformer, and typically use $\tilde{H} = \text{TF}(H)$ to denote its output on $H$.

**In-context learning** We consider in-context learning (ICL) on regression problems, where each ICL instance is specified by a dataset $D = \{(x_i, y_i)\}_{i \in [N]} \overset{\text{iid}}{\sim} P$, with $(x_i, y_i) \in \mathbb{R}^d \times \mathbb{R}$, and the model is required to accurately predict $y_i$ given all past observations $D_{i-1} := \{(x_j, y_j)\}_{j \leq i-1}$ and the test input $x_i$. Each instance $D = D^{(j)}$ is drawn from a different data distribution $P = P^{(j)}$. Accurate prediction requires learning $P$ in-context from the past observations $D_{i-1}$ (i.e. the context); merely memorizing any fixed $P^{(j)}$ is not enough. This is a main challenge of in-context learning.

We consider using transformers to do ICL, where we feed a sequence of length $2N$ into the transformer TF using the following input format:

$$H = [h_1, \ldots, h_{2N}] = \begin{bmatrix} x_1 & 0 & \cdots & x_N & 0 \\ 0 & y_1 & \cdots & 0 & y_N \\ p^x_1 & p^y_1 & \cdots & p^x_N & p^y_N \end{bmatrix} \in \mathbb{R}^{D_{\text{hid}} \times 2N},$$
(1)

where $p^x_i, p^y_i \in \mathbb{R}^{D_{\text{hid}} - d - 1}$ are fixed positional encoding vectors consisting of zero paddings, followed by non-zero entries containing information about the position index $i$ and an indicator of being an $x$-token (1 in $p^x_i$, and 0 in $p^y_i$); see (12) for our concrete choice. We refer to each odd token $h_{2i-1}$ as an $x$-token (also the $x_i$-token), and each even token $h_{2i}$ as a $y$-token (also the $y_i$-token). After obtaining the transformer output $\tilde{H} = \text{TF}(H)$, for every index $i \in [N]$, we extract the prediction $\hat{y}_i$ from the output token at the position of $x_i$: $\hat{y}_i := (\tilde{h}_{2i-1})_{d+1}$. Feeding input (1) into the transformer simultaneously computes $\hat{y}_i \leftarrow \text{TF}(x_1, y_1, \ldots, x_{i-1}, y_{i-1}, x_i)$ for all $i \in [N]$. Denote the parameters of transformers as $\theta$. In addition to the above setting, we also consider a dynamical system setting with $D = \{x_i\}_{i \in [N]}$, where the transformer predicts $\hat{x}_i$ from the preceding inputs $x_{<i}$. See Section 4.2 for details.
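A minimal sketch of assembling the input matrix $H$ in format (1); the concrete positional encoding of (12) is not reproduced in this excerpt, so the index/indicator entries below are illustrative stand-ins:

```python
import numpy as np

def build_icl_input(xs, ys, D_hid):
    """Pack (x_1, y_1, ..., x_N, y_N) into H of shape (D_hid, 2N), format (1)."""
    N, d = xs.shape
    assert D_hid >= d + 3
    H = np.zeros((D_hid, 2 * N))
    for i in range(N):
        H[:d, 2 * i] = xs[i]           # x-token carries the input x_i
        H[d, 2 * i + 1] = ys[i]        # y-token carries the label y_i
        # stand-in for p_i^x / p_i^y: zero padding, then (index, x-indicator)
        H[-2, 2 * i] = H[-2, 2 * i + 1] = i + 1
        H[-1, 2 * i] = 1.0             # 1 on x-tokens, 0 on y-tokens
    return H

xs, ys = np.random.randn(8, 4), np.random.randn(8)
H = build_icl_input(xs, ys, D_hid=10)  # predictions would be read off the x-token columns
```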
## 4 IN-CONTEXT LEARNING WITH REPRESENTATIONS

### 4.1 Supervised learning with representation

We begin by considering ICL on regression problems with representation, where labels depend on the input through linear functions of a fixed representation function. Formally, let $\Phi^* : \mathbb{R}^d \to \mathbb{R}^D$ be a fixed representation function. We generate each in-context data distribution $P = P_w$ by sampling a linear function $w \sim N(0, \tau^2 I_D)$ from a Gaussian prior, and then generate the ICL instance $D = \{(x_i, y_i)\}_{i \in [N]} \sim P_w$ by a linear model on $\Phi^*$ with coefficient $w$ and noise level $\sigma > 0$:
$$y_i = \langle w, \Phi^*(x_i) \rangle + \sigma z_i, \quad x_i \overset{\text{iid}}{\sim} P_x, \quad z_i \overset{\text{iid}}{\sim} N(0, 1), \quad i \in [N]. \tag{2}$$
Note that all $D$'s share the same representation $\Phi^*$, but each admits a unique linear function $w$. The representation function $\Phi^*$ can in principle be chosen arbitrarily. As a canonical and flexible choice for both our theory and experiments, we choose $\Phi^*$ to be a standard $L$-layer MLP:
$$\Phi^*(x) = \sigma^*(B^*_L \sigma^*(B^*_{L-1} \cdots \sigma^*(B^*_1 x) \cdots)), \quad B^*_1 \in \mathbb{R}^{D \times d}, \; (B^*_\ell)_{\ell=2}^L \subset \mathbb{R}^{D \times D}, \tag{3}$$
where $D$ is the hidden and output dimension, and $\sigma^*$ is the activation function (applied entry-wise), which we choose to be the leaky ReLU $\sigma^*(t) = \sigma_\rho(t) := \max\{t, \rho t\}$ with slope $\rho \in (0, 1)$.

**Theory** As $\Phi^*$ is fixed and $w$ is changing in model (2), by construction, a good ICL algorithm should compute the representations $\{\Phi^*(x_i)\}_i$ and perform linear ICL on the transformed dataset $\{\Phi^*(x_i), y_i\}_i$ to learn $w$. We consider the following class of $\Phi^*$-ridge estimators:
$$\hat{w}^{\Phi^*, \lambda}_i := \arg\min_{w \in \mathbb{R}^D} \frac{1}{2(i - 1)} \sum_{j=1}^{i-1} (\langle w, \Phi^*(x_j) \rangle - y_j)^2 + \frac{\lambda}{2} \|w\|_2^2, \quad (\Phi^*\text{-Ridge})$$
and we understand $\hat{w}^{\Phi^*, \lambda}_1 := 0$. In words, $\hat{w}^{\Phi^*, \lambda}_i$ performs ridge regression on the transformed dataset $\{\Phi^*(x_j), y_j\}_{j \leq i-1}$, for all $i \in [N]$. By standard calculations, the Bayes-optimal predictor for $y_i$ given $(D_{i-1}, x_i)$, i.e. the predictor $\hat{y}_i = \hat{y}_i(D_{i-1}, x_i)$ that minimizes the posterior square loss $\mathbb{E}[\frac{1}{2}(\hat{y}_i - y_i)^2 \mid D_{i-1}, x_i]$, is exactly the ridge predictor $\hat{y}^{\Phi^*, \lambda}_i := \langle \hat{w}^{\Phi^*, \lambda}_i, \Phi^*(x_i) \rangle$ at $\lambda = \sigma^2/\tau^2$.
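As a reference point for the later experiments, the ($\Phi^*$-Ridge) estimator admits a direct NumPy transcription. This is a sketch of ours under the assumption that the features $\Phi^*(x_i)$ have been precomputed; the helper name is ours.

```python
import numpy as np

def ridge_predictions(feats, ys, lam):
    """(Phi*-Ridge) predictions at every token.

    feats: (N, D) rows Phi*(x_i); ys: (N,) labels; lam: ridge penalty
    (the Bayes-optimal choice is lam = sigma^2 / tau^2).
    preds[i] uses only the i past pairs, i.e. it predicts the (i+1)-th label.
    """
    N, D = feats.shape
    preds = np.zeros(N)                    # hat{w}_1 := 0 gives preds[0] = 0
    for i in range(1, N):
        F, y = feats[:i], ys[:i]
        # minimizer of (1 / (2 i)) * ||F w - y||^2 + (lam / 2) * ||w||^2
        w_hat = np.linalg.solve(F.T @ F + i * lam * np.eye(D), F.T @ y)
        preds[i] = feats[i] @ w_hat
    return preds
```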
We show that there exists a transformer that can approximately implement ($\Phi^*$-Ridge) in-context at every token $i \in [N]$. The proof can be found in Appendix B.

**Theorem 1** (Transformer can implement $\Phi^*$-Ridge). For any representation function $\Phi^*$ of form (3), any $\lambda > 0$, $B_\Phi, B_w, B_y > 0$, $\varepsilon < B_\Phi B_w/2$, letting $\kappa := 1 + B_\Phi^2/\lambda$, there exists a transformer TF with $L + O(\kappa \log(B_\Phi B_w/\varepsilon))$ layers, $5$ heads, and $D_{\text{hid}} = 2D + d + 10$ such that the following holds. For any dataset $D$ such that $\|\Phi^*(x_i)\|_2 \leq B_\Phi$ and $|y_i| \leq B_y$, and the corresponding input $H \in \mathbb{R}^{D_{\text{hid}} \times 2N}$ of format (1), we have:

(a) The first $(L + 2)$ layers of TF transform $x_i$ into the representation $\Phi^*(x_i)$ at each $x$-token, and copy it into the succeeding $y$-token:
$$\text{TF}^{(1:L+2)}(H) = \begin{bmatrix} \Phi^*(x_1) & \Phi^*(x_1) & \cdots & \Phi^*(x_N) & \Phi^*(x_N) \\ 0 & y_1 & \cdots & 0 & y_N \\ \tilde{p}_1^x & \tilde{p}_1^y & \cdots & \tilde{p}_N^x & \tilde{p}_N^y \end{bmatrix}, \tag{4}$$
where $\tilde{p}_i^x, \tilde{p}_i^y$ only differ from $p_i^x, p_i^y$ in the dimension of the zero paddings.

(b) For every index $i \in [N]$, the transformer output $\tilde{H} = \text{TF}(H)$ contains a prediction $\hat{y}_i := [\tilde{h}_{2i-1}]_{D+1}$ that is close to the ($\Phi^*$-Ridge) predictor: $|\hat{y}_i - \langle \Phi^*(x_i), \hat{w}_i^{\Phi^*, \lambda} \rangle| \leq \varepsilon$.

Note that there is no information leakage, as the "prefix" property of decoder transformers, $\tilde{h}_{2i-1} = [\text{TF}(H_{1:(2i-1)})]_{2i-1}$, ensures that $\tilde{h}_{2i-1}$ (and thus $\hat{y}_i$) only depends on $(D_{i-1}, x_i)$.

The transformer construction in Theorem 1 consists of two "modules": The lower layers compute the representations and prepare the transformed dataset $\{\Phi^*(x_i), y_i\}$ in the form of (4). In particular, each $\Phi^*(x_i)$ appears both in the $i$-th $x$-token and is also copied into the succeeding $y$-token. The upper layers perform linear ICL (ridge regression) on top of the transformed dataset. We will test whether such mechanisms align with trained transformers in reality in our experiments (Section 5.1).

**Proof techniques** The proof of Theorem 1 builds upon (1) implementing the MLP $\Phi^*$ by transformers (Lemma B.3), and (2) an efficient construction of in-context ridge regression (Theorem B.5), which to our knowledge is the first efficient construction for predicting at every token using decoder transformers. The latter requires several new construction techniques, such as a copying layer (Lemma B.1) and an efficient implementation of $N$ parallel in-context gradient descent algorithms at all tokens simultaneously using a decoder transformer (Proposition B.4). These extend the related constructions of von Oswald et al. (2022); Bai et al. (2023), who only consider predicting at the last token using an encoder transformer, and could be of independent interest.

In addition, the bounds on the number of layers, heads, and $D_{\text{hid}}$ in Theorem 1 imply a sample complexity guarantee for (pre-)training: A transformer with $\varepsilon$-excess risk (on the same ICL instance distribution) over the one constructed in Theorem 1 can be found within $\tilde{O}((L + \kappa)^2(D + d)^2\varepsilon^{-2})$ training instances, by the generalization analysis of Bai et al. (2023, Theorem 20). We remark that the constructions in Theorems 1 & 2 choose $\overline{\sigma}$ as the normalized ReLU instead of the softmax, following Bai et al. (2023) and in resonance with recent empirical studies (Wortsman et al., 2023).

### 4.2 Dynamical system with representation

As a variant of model (2), we additionally consider a (nonlinear) dynamical system setting with data $D = (x_1, \ldots, x_N)$, where each $x_{i+1}$ depends on the $k$ preceding inputs $[x_{i-k+1}; \ldots; x_i]$ for some $k \geq 1$ through a linear function on top of a fixed representation function $\Phi^*$. Compared to the supervised learning setting in Section 4.1, this setting better resembles some aspects of natural language, where the next token in general depends on several preceding tokens. Formally, let $k \geq 1$ denote the number of input tokens that the next token depends on, and let $\Phi^* : \mathbb{R}^{kd} \to \mathbb{R}^D$ denote a representation function.
Each ICL instance $D = \{x_i\}_{i \in [N]}$ is generated as follows: First sample $P = P_W$, where $W \in \mathbb{R}^{D \times d}$ is sampled from a Gaussian prior: $W_{ij} \overset{\text{iid}}{\sim} N(0, \tau^2)$. Then sample the initial input $x_1 \sim P_x$ and let
$$x_{i+1} = W^\top \Phi^*([x_{i-k+1}; \ldots; x_i]) + \sigma z_i, \quad z_i \overset{\text{iid}}{\sim} N(0, I_d), \quad i \in [N - 1], \tag{5}$$
where we understand $x_j := 0_d$ for $j \leq 0$. We choose $\Phi^*$ to be the same $L$-layer MLP as in (3), except that the first weight matrix has size $B_1^* \in \mathbb{R}^{D \times kd}$, to be consistent with the dimension of the augmented input $\bar{x}_i := [x_{i-k+1}; \ldots; x_i]$. We remark that (5) substantially generalizes the setting of Li et al. (2023a), which only considers linear dynamical systems (equivalent to $\Phi^* \equiv \text{id}$), a task arguably much easier for transformers to learn in context.

As $x_i$ acts as both inputs and labels in model (5), we use the following input format for transformers:
$$H := \begin{bmatrix} x_1 & \cdots & x_N \\ p_1 & \cdots & p_N \end{bmatrix} \in \mathbb{R}^{D_{\text{hid}} \times N}, \tag{6}$$
where $p_i := [0_{D_{\text{hid}}-d-4}; 1; i; i^2; i^3]$, and we extract the prediction $\hat{x}_{i+1}$ from the $i$-th output token.
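For concreteness, here is a sketch (our own illustration) of the data-generating process (5); $\Phi^*$ is passed in as any map from $\mathbb{R}^{kd}$ to $\mathbb{R}^D$, e.g. the MLP of (3) with first weight matrix of size $D \times kd$, and all names are ours.

```python
import numpy as np

def sample_dynamics_instance(phi, N, d, k, sigma, tau=1.0,
                             rng=np.random.default_rng(0)):
    """One instance of the dynamical system (5):
    x_{i+1} = W^T phi([x_{i-k+1}; ...; x_i]) + sigma * z_i, with x_j = 0_d, j <= 0.
    """
    xs = [rng.standard_normal(d)]               # x_1 ~ P_x = N(0, I_d)
    W = None
    for _ in range(N - 1):
        window = xs[-k:]                         # up to k most recent inputs
        pad = [np.zeros(d)] * (k - len(window))  # zero-pad x_j for j <= 0
        feat = phi(np.concatenate(pad + window))
        if W is None:                            # W_ij ~ N(0, tau^2), sampled once
            W = tau * rng.standard_normal((feat.shape[0], d))
        xs.append(W.T @ feat + sigma * rng.standard_normal(d))
    return np.stack(xs)                          # (N, d) trajectory

# example with a toy one-layer leaky-ReLU representation (k = 2, d = D = 20)
rng = np.random.default_rng(1)
B1 = rng.standard_normal((20, 2 * 20))
phi = lambda xbar: np.maximum(B1 @ xbar, 0.01 * (B1 @ xbar))
traj = sample_dynamics_instance(phi, N=41, d=20, k=2, sigma=0.1)
```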
**Theory** Similar as above, we consider the ridge predictor for the dynamical system setting:
$$\hat{W}_{i}^{\Phi^*, \lambda} := \arg\min_{W \in \mathbb{R}^{D \times d}} \frac{1}{2(i-1)} \sum_{j=1}^{i-1} \| W^\top \Phi^*(\bar{x}_j) - x_{j+1} \|_2^2 + \frac{\lambda}{2} \| W \|_{F}^2. \quad (\Phi^*\text{-Ridge-Dyn})$$
We understand $\hat{W}_{1}^{\Phi^*, \lambda} := 0_{D \times d}$, and let $\| W \|_{2,\infty} := \max_{j \in [d]} \| W_{:,j} \|_2$ for any $W \in \mathbb{R}^{D \times d}$. Again, ($\Phi^*$-Ridge-Dyn) gives the Bayes-optimal predictor $(\hat{W}_{i}^{\Phi^*, \lambda})^\top \Phi^*(\bar{x}_i)$ at $\lambda = \sigma^2/\tau^2$. The following result shows that ($\Phi^*$-Ridge-Dyn) can also be implemented efficiently by a transformer. The proof can be found in Appendix C.2.

**Theorem 2** (Transformer can implement $\Phi^*$-Ridge for dynamical systems). For the dynamical system setting where the $L$-layer representation function $\Phi^* : \mathbb{R}^{kd} \to \mathbb{R}^D$ takes form (3), but otherwise the same settings as Theorem 1, there exists a transformer TF with $L + 2 + O(\kappa \log(B_\Phi B_w/\varepsilon))$ layers, $\max\{3d, 5\}$ heads, and $D_{\text{hid}} = \max\{2(k+1), D\}d + 3(D+d) + 5$ such that the following holds. For any dataset $D$ such that $\|\Phi^*(\bar{x}_i)\|_2 \leq B_\Phi$, $\|x_i\|_\infty \leq B_y$, and $\|\hat{W}_{i}^{\Phi^*, \lambda}\|_{2,\infty} \leq B_w/2$ (cf. ($\Phi^*$-Ridge-Dyn)) for all $i \in [N]$, and the corresponding input $H \in \mathbb{R}^{D_{\text{hid}} \times N}$ of format (6), we have:

(a) The first transformer layer copies the $k$ previous inputs into the current token, and computes the first layer $\{\sigma_\rho(B_1^* \bar{x}_i)\}_{i \in [N]}$ within $\Phi^*$:
$$\text{Attn}^{(1)}(H) = \begin{bmatrix} \bar{x}_1 & \cdots & \bar{x}_N \\ \bar{p}_1 & \cdots & \bar{p}_N \end{bmatrix} = \begin{bmatrix} x_{1-k+1} & \cdots & x_{N-k+1} \\ \vdots & & \vdots \\ x_1 & \cdots & x_N \\ \bar{p}_1 & \cdots & \bar{p}_N \end{bmatrix}; \tag{7}$$
$$\text{TF}^{(1)}(H) = \text{MLP}^{(1)}\big(\text{Attn}^{(1)}(H)\big) = \begin{bmatrix} \sigma_\rho(B_1^* \bar{x}_1) & \cdots & \sigma_\rho(B_1^* \bar{x}_N) \\ x_1 & \cdots & x_N \\ \bar{p}'_1 & \cdots & \bar{p}'_N \end{bmatrix}. \tag{8}$$

(b) The first $(L + 1)$ layers of TF transform each $x_i$ into $\Phi^*(\bar{x}_i)$, and copy the preceding representation $\Phi^*(\bar{x}_{i-1})$ onto the same token to form the (input, label) pair $(\Phi^*(\bar{x}_{i-1}), x_i)$:
$$\text{TF}^{(1:L+1)}(H) = \begin{bmatrix} \Phi^*(\bar{x}_1) & \Phi^*(\bar{x}_2) & \cdots & \Phi^*(\bar{x}_N) \\ 0_d & 0_d & \cdots & 0_d \\ 0_D & \Phi^*(\bar{x}_1) & \cdots & \Phi^*(\bar{x}_{N-1}) \\ x_1 & x_2 & \cdots & x_N \\ \bar{p}''_1 & \bar{p}''_2 & \cdots & \bar{p}''_N \end{bmatrix}. \tag{9}$$
Above, $\bar{p}_i, \bar{p}'_i, \bar{p}''_i$ only differ from $p_i$ in the dimension of the zero paddings.

(c) For every index $i \in [N]$, the transformer output $\tilde{H} = \text{TF}(H)$ contains a prediction $\hat{x}_{i+1} := [\tilde{h}_i]_{1:d}$ that is close to the ($\Phi^*$-Ridge-Dyn) predictor: $\| \hat{x}_{i+1} - (\hat{W}_{i}^{\Phi^*, \lambda})^\top \Phi^*(\bar{x}_i) \|_\infty \leq \varepsilon$.

To our best knowledge, Theorem 2 provides the first transformer construction for learning nonlinear dynamical systems in context. Similar as for Theorem 1, the bounds on the transformer size here imply a guarantee of $\varepsilon$ excess risk within $\tilde{O}((L + \kappa)^2((k + D)d)^2\varepsilon^{-2})$ (pre-)training instances. In terms of the mechanisms, compared with Theorem 1, the main differences in Theorem 2 are (1) the additional copying step (7) within the first layer, where the previous $(k - 1)$ tokens $[x_{i-k+1}; \cdots; x_{i-1}]$ are copied onto the $x_i$ token, to prepare for the computation of $\Phi^*(\bar{x}_i)$; (2) the intermediate output (9), where the relevant information (for preparing for linear ICL) has form $[\Phi^*(\bar{x}_{i-1}); x_i; \Phi^*(\bar{x}_i)]$ and is gathered in the $x$-tokens, different from (4), where the relevant information is $[\Phi^*(x_i); y_i]$, gathered in the $y$-token. We will test these in our experiments (Section 5.2).

## 5 EXPERIMENTS

We now empirically investigate trained transformers under the two settings considered in Sections 4.1 & 4.2. In both cases, we choose the representation function $\Phi^*$ to be a normalized version of the $L$-layer MLP (3): $\Phi^*(x) := \tilde{\Phi}^*(x)/\|\tilde{\Phi}^*(x)\|_2$, where $\tilde{\Phi}^*$ takes form (3), with weight matrices $(B_\ell^*)_{\ell \in [L]}$ sampled as random (column/row)-orthogonal matrices and held fixed in each experiment, and slope $\rho = 0.01$. We test $L \in \{1, 2, 3, 4\}$, hidden dimension $D \in \{5, 20, 80\}$, and noise level $\sigma \in \{0, 0.1, 0.5\}$. All experiments use $P_x = N(0, I_d)$, $\tau^2 = 1$, $d = 20$, and $N = 41$.
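The following sketch (ours; the helper names and the QR-based orthogonalization are our own choices) mirrors this setup: a normalized $L$-layer leaky-ReLU MLP with random orthogonal weights, and instance sampling according to model (2). Combined with the `ridge_predictions` helper sketched earlier, it yields the Bayes-optimal baseline.

```python
import numpy as np

def sample_representation(L, D, d, rho=0.01, rng=np.random.default_rng(0)):
    """Normalized L-layer MLP (3) with (column/row-)orthogonal weight matrices."""
    def ortho(m, n):
        q, _ = np.linalg.qr(rng.standard_normal((max(m, n), min(m, n))))
        return q if m >= n else q.T
    Bs = [ortho(D, d)] + [ortho(D, D) for _ in range(L - 1)]
    def phi(x):
        h = x
        for B in Bs:
            h = B @ h
            h = np.maximum(h, rho * h)        # leaky ReLU with slope rho
        return h / np.linalg.norm(h)          # normalization Phi* = tilde / ||.||
    return phi

def sample_icl_instance(phi, N, d, sigma, tau=1.0, rng=np.random.default_rng(1)):
    """One supervised ICL instance from model (2)."""
    xs = rng.standard_normal((N, d))          # P_x = N(0, I_d)
    feats = np.stack([phi(x) for x in xs])
    w = tau * rng.standard_normal(feats.shape[1])
    ys = feats @ w + sigma * rng.standard_normal(N)
    return xs, ys, feats

# d = 20, N = 41, tau^2 = 1 as above; Bayes-optimal ridge uses lam = sigma^2/tau^2
phi = sample_representation(L=2, D=20, d=20)
xs, ys, feats = sample_icl_instance(phi, N=41, d=20, sigma=0.1)
```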
We use a small architecture within the GPT-2 family with 12 layers, 8 heads, and $D_{\text{hid}} = 256$, following (Garg et al., 2022; Li et al., 2023a; Bai et al., 2023). The (pre-)training objective for the transformer (for the supervised learning setting) is the average prediction risk at all tokens:
$$\min_\theta \mathbb{E}_{w, D \sim P_w} \left[ \frac{1}{2N} \sum_{i=1}^{N} (\hat{y}_{\theta,i}(D_{i-1}, x_i) - y_i)^2 \right],$$
where $\hat{y}_{\theta,i}$ is extracted from the $(2i - 1)$-th output token of $\text{TF}_\theta(H)$ (cf. Section 3). The objective for the dynamical system setting is defined similarly. Additional experimental details can be found in Appendix D, and ablation studies (e.g. along the training trajectory; cf. Figure 9) in Appendix F.

### 5.1 Supervised Learning with Representation

We first test ICL with supervised learning data as in Section 4.1, where for each configuration of $(L, D, \sigma)$ (which induces a $\Phi^*$) we train a transformer on the ICL data distribution (2) and evaluate ICL on the same distribution. Note that Figures 1b & 1c plot the results for $(L, D, \sigma) = (2, 20, 0.1)$.

**ICL performance** Figure 2 reports the test risk across various settings, where we observe that trained transformers can consistently match the Bayes-optimal ridge predictor. This extends existing results which show that linear functions (without a representation) can be learned near-optimally in-context by transformers (Garg et al., 2022; Akyürek et al., 2022), adding our model (2) to this list of (empirically) nearly-optimally learnable function classes. Among the complexity measures $(L, D, \sigma)$, observe that the noise level $\sigma$ and the hidden dimension $D$ of the representation (Figures 2a & 2b) appear to have a larger effect on the (nearly Bayes-optimal) risk than the depth $L$ (Figure 2c).

**Mechanisms via linear probing** We conduct probing experiments to further understand the mechanisms of the trained transformers. In accordance with the theoretical construction in Theorem 1, our main question here is: Does the trained transformer perform the following in order:

1. Computes $\Phi^*(x_i)$ at $x_i$ tokens;
2. Copies them onto the following $y_i$ token and obtains the dataset $\{\Phi^*(x_i), y_i\}_i$ in the form of (4);
3. Performs linear ICL on top of $\{\Phi^*(x_i), y_i\}_i$?

Figure 4: (a) Illustration of our pasting experiment, which examines the linear ICL capability of the upper module of a trained transformer. (b) Pasting results for the upper module of a trained transformer in setting $(L, D, \sigma) = (3, 20, 0.1)$. "TF_upper+..." correspond to feeding the upper module of the trained transformer with different embeddings. It achieves nearly optimal linear ICL risk (in 20 dimensions with noise 0.1) using a 1-layer transformer embedding, and also non-trivial performance using the linear and linear-copy embeddings.

While such internal mechanisms are in general difficult to quantify exactly, we adapt the linear probing technique (Alain & Bengio, 2016) to the transformer setting to identify evidence. Linear probing allows us to test whether intermediate layer outputs (tokens) $\{h_{x_i}^{\ell}\}_{\ell \in [12]}$ ($\ell$ denotes the layer) and $\{h_{y_i}^{\ell}\}_{\ell \in [12]}$ "contain" various quantities of interest, by linearly regressing these quantities (as the $y$) on the intermediate tokens (as the $x$), pooled over the token index $i \in [N]$.
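In code, such a token-wise probe amounts to a least-squares fit from stacked intermediate tokens to the target quantity. The sketch below is ours; the 80/20 split and the unregularized affine probe are simplifying assumptions.

```python
import numpy as np

def linear_probe_error(tokens, targets, train_frac=0.8):
    """Fit an affine probe targets ~ [tokens; 1] @ W and report relative
    held-out error; a smaller error indicates the tokens "contain" the quantity.

    tokens:  (n, D_hid) intermediate outputs at the probed positions,
             pooled over token index i and over many ICL instances
    targets: (n, q) quantities of interest, e.g. Phi*(x_i) or ridge predictions
    """
    n = tokens.shape[0]
    n_tr = int(train_frac * n)
    X = np.hstack([tokens, np.ones((n, 1))])     # append bias column
    W, *_ = np.linalg.lstsq(X[:n_tr], targets[:n_tr], rcond=None)
    resid = X[n_tr:] @ W - targets[n_tr:]
    return np.linalg.norm(resid) / np.linalg.norm(targets[n_tr:])
```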
For example, regressing $\Phi^*(x_i)$ on $h_{x_i}^{\ell}$ tests whether the $x_i$ token after the $\ell$-th layer "contains" $\Phi^*(x_i)$, where a smaller error indicates better containment. See Appendix D.1 for further setups of linear probing.

Figure 3 reports the errors of three linear probes across all 12 layers: the representation $\Phi^*(x_i)$ in the $x_i$ tokens and the $y_i$ tokens, and the optimal ridge prediction $\hat{y}_i^{\Phi^*, \lambda}$ in the $x_i$ tokens. Observe that the probing errors for the representation decrease through lower layers and then increase through upper layers (Figures 3a & 3b), whereas probing errors for the ridge prediction monotonically decrease through the layers (Figure 3c), aligning with our construction that the transformer first computes the representations and then performs ICL on top of the representation. Also note that deeper representations take more layers to compute (Figure 3a). Further, the representation shows up later in the $y$-tokens (layers 5-6) than in the $x$-tokens (layers 1,3,4,5), consistent with the copying mechanism, albeit the copying appears to be lossy (probe errors are higher at $y$-tokens). Finally, observe that the separation between the lower and upper modules seems to be strong in certain runs: for example, the red transformer ($L = 4, \sigma = 0.1$) computes the representation at layer 5, copies it onto the $y$-tokens at layer 6, and starts to perform iterative ICL from layer 7, which aligns fairly well with our theoretical constructions at a high level.

**Investigating the upper module via pasting** To further investigate the upper module, we test whether it is indeed a strong ICL learner on its own, without relying on the lower module, which would provide stronger evidence that the upper module performs linear ICL. However, a key challenge here is that it is unclear how to feed raw inputs directly into the upper module, as it supposedly only admits input formats emitted from the lower module, the part we wanted to exclude in the first place. We address this by conducting a pasting experiment, where we feed $D$-dimensional linear ICL problems ($y_i' = \langle w', x_i' \rangle$ without a representation) with input format (1) directly to the upper module of the transformer trained on representation $\Phi^*$, by adding a trainable embedding layer in between; see Figure 4a for an illustration of the pasting approach. This trainable embedding layer itself needs to be shallow, without much ICL power of its own; we test the following three choices: (1) Linear embedding: $h_{x_i} = W[x_i; 0]$ and $h_{y_i} = W[0; y_i]$; (2) Linear-copy embedding, where the $y$-tokens are instead $h_{y_i} = W[x_i; y_i]$, motivated by the format (4); (3) One-layer transformer embedding, which maps $H \mapsto \text{TF}^{(1)}(H)$ with a trainable one-layer transformer. See Appendix D.2 for further setups of pasting.

Figure 4b shows the pasting results on a transformer trained on $(L, D, \sigma) = (3, 20, 0.1)$ (an ablation in Figure 10b), where we dissect the lower and upper modules at layer 4, as suggested by the probing curve (Figure 3a, green). Perhaps surprisingly, the upper module of the transformer can indeed perform nearly optimal linear ICL without representation when we use the one-layer transformer embedding. Note that a (freshly trained) single-layer transformer by itself performs badly, achieving about the trivial test risk 1.01, which is expected due to our specific input format (1).\(^3\)
This suggests that the majority of the ICL is indeed carried out by the upper module, with the one-layer transformer embedding not doing much ICL itself. Also note that the linear-copy and linear embeddings also yield reasonable (though suboptimal) performance, with linear-copy performing slightly better.

### 5.1.1 Extension: Mixture of Multiple Representations

We additionally investigate a harder scenario in which there exist *multiple possible representation functions* $(\Phi^*_j)_{j \in [K]}$, and the ICL data distribution is a mixture of the $K$ distributions of form (2), each induced by $\Phi^*_j$ (equivalent to using the concatenated representation $\Phi^* = [\Phi^*_1; \ldots; \Phi^*_K]$ with a group 1-sparse prior on $w \in \mathbb{R}^{KD}$). We find that transformers still approach Bayes-optimal risks, though less so compared with the single-representation setting. Using linear probes, we find that transformers sometimes implement the *post-ICL algorithm selection* mechanism identified in Bai et al. (2023), depending on the setting. Details are deferred to Appendix E due to the space limit.

### 5.2 Dynamical Systems

We now study the dynamical systems setting in Section 4.2 using the same approaches as in Section 5.1. Figure 5a shows that transformers can still consistently achieve nearly Bayes-optimal ICL risk. An ablation of the risks and probing errors in alternative settings can be found in Appendix F.2.

**Probing copying mechanisms** The main mechanistic question we ask here is about the data preparation phase, where the transformer construction in Theorem 2 performs copying *twice*:

i) A copying of $[x_{i-k+1}; \ldots; x_{i-1}]$ onto the $x_i$ token as in (7), to prepare for the computation of $\Phi^*(\bar{x}_i)$. As copying may not be distinguishable from the subsequent matrix multiplication step $[x_{i-k+1}; \ldots; x_{i-1}; x_i] \mapsto B_1^*[x_{i-k+1}; \ldots; x_{i-1}; x_i]$, we probe instead the result $B^*_{1,-j} x_{i-j}$ after the matrix multiplication, where $B^*_{1,-j} \in \mathbb{R}^{D \times d}$ denotes the block within $B_1^*$ hitting $x_{i-j}$.

ii) A second copying of $\Phi^*(\bar{x}_{i-1})$ onto the $x_i$ token to obtain (9), after $\{\Phi^*(\bar{x}_i)\}_i$ are computed.

We probe one transformer trained on the dynamical systems problem with $k = 3$ (so that the useful preceding inputs are $x_{i-1}$ and $x_{i-2}$), and find that the transformer indeed performs the two conjectured copyings. Figure 5b demonstrates copying i) onto the current token, where the copying of $x_{i-1}$ happens earlier (at layer 3) and is slightly more accurate than that of $x_{i-2}$ (at layer 4), as expected. Further observe that layer 4 (which we recall contains an attention layer and an MLP layer) has seemingly also implemented the (unnormalized) MLP representation $\tilde{\Phi}^*(\bar{x}_i) = \sigma_\rho(B^*_{2} \sigma_\rho(B_1^* \bar{x}_i))$, though the probing error for the actual representation $\Phi^*(\bar{x}_i) = \tilde{\Phi}^*(\bar{x}_i)/\|\tilde{\Phi}^*(\bar{x}_i)\|_2$ continues to drop in layers 4-6 (Figure 5c). Figure 5c further demonstrates copying ii), where $\Phi^*(\bar{x}_{i-1})$ is indeed copied to the $i$-th token, whereas by sharp contrast $\Phi^*(\bar{x}_{i-j})$ for $j \geq 2$ is *not* copied at all into the $x_i$ token, aligning with our conjectured intermediate output format (9).
---

\(^3\)A one-layer transformer does not have much ICL power using input format (1): $x_i$ and $y_i$ are stored in separate tokens there, which makes "one-layer" mechanisms such as gradient descent (von Oswald et al., 2022; Akyürek et al., 2022; Bai et al., 2023) unlikely to be implementable; see Appendix D.3 for a discussion.

## 6 CONCLUSION

This paper presents theoretical and mechanistic studies of the in-context learning ability of transformers on learning tasks involving representation functions. We give efficient transformer constructions for linear ICL on top of representations in both the supervised learning and the dynamical system settings, and empirically confirm the existence of various high-level mechanisms in trained transformers. We believe our work opens up the investigation of ICL beyond simple function classes, and suggests open questions such as further investigations of the mechanisms of the linear ICL modules, and theory for ICL in more complex function classes. One limitation of our work is that the setting still consists of synthetic data with idealistic representation functions; performing similar studies on more real-world data would be an important direction for future work.

## ACKNOWLEDGMENT

WH acknowledges support from the Google Research Scholar program. S. Mei is supported by NSF DMS-2210827, CCF-2315725, NSF CAREER DMS-2339904, and an Amazon Research Award.

## REFERENCES

Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra. Transformers learn to implement preconditioned gradient descent for in-context learning. *arXiv preprint arXiv:2306.00297*, 2023.

Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? investigations with linear models. *arXiv preprint arXiv:2211.15661*, 2022.

Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. *arXiv preprint arXiv:1610.01644*, 2016.

Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. Transformers as statisticians: Provable in-context learning with in-context algorithm selection. *arXiv preprint arXiv:2306.04637*, 2023.

Alberto Bietti, Vivien Cabannes, Diane Bouchacourt, Herve Jegou, and Leon Bottou. Birth of a transformer: A memory viewpoint. *arXiv preprint arXiv:2306.00802*, 2023.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

Sébastien Bubeck. Convex optimization: Algorithms and complexity. *Foundations and Trends® in Machine Learning*, 8(3-4):231–357, 2015.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. *arXiv preprint arXiv:2303.12712*, 2023.

Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre Richemond, James McClelland, and Felix Hill. Data distributional properties drive emergent in-context learning in transformers. *Advances in Neural Information Processing Systems*, 35:18878–18891, 2022.

Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models secretly perform gradient descent as meta optimizers. *arXiv preprint arXiv:2212.10559*, 2022.
Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, et al. A mathematical framework for transformer circuits. *Transformer Circuits Thread*, 2021.

Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformers learn in-context? a case study of simple function classes. *Advances in Neural Information Processing Systems*, 35:30583–30598, 2022.
s25i99RTCg
However, in a scenario where inference is desired based solely on modality A to predict B, would a masked C still be necessary? This raises questions about the model's flexibility and its adaptability to accommodate various generative scenarios with different modalities. The capacity to dynamically adjust to these conditions without compromising the integrity of the generative process is pivotal.
ABSTRACT

Multi-modal data-sets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities. However, existing approaches suffer from a coherence–quality tradeoff, where models with good generation quality lack generative coherence across modalities, and vice versa. We discuss the limitations underlying the unsatisfactory performance of existing methods, to motivate the need for a different approach. We propose a novel method that uses a set of independently trained, uni-modal, deterministic autoencoders. Individual latent variables are concatenated into a common latent space, which is fed to a masked diffusion model to enable generative modeling. We also introduce a new multi-time training method to learn the conditional score network for multi-modal diffusion. Our methodology substantially outperforms competitors in both generation quality and coherence, as shown through an extensive experimental campaign.

## 1 INTRODUCTION

Multi-modal generative modelling is a crucial area of research in machine learning that aims to develop models capable of generating data according to multiple modalities, such as images, text, audio, and more. This is important because real-world observations are often captured in various forms, and combining multiple modalities describing the same information can be an invaluable asset. For instance, images and text can provide complementary information in describing an object, and audio and video can capture different aspects of a scene. Multi-modal generative models can also help in tasks such as data augmentation (He et al., 2023; Azizi et al., 2023; Sariyildiz et al., 2023), missing modality imputation (Antelmi et al., 2019; Da Silva–Filarder et al., 2021; Zhang et al., 2023; Tran et al., 2017), and conditional generation (Huang et al., 2022; Lee et al., 2019b).

Multi-modal models have flourished over the past years and have seen tremendous interest from academia and industry, especially in the content creation sector. Whereas most recent approaches focus on specialization, by considering text as the primary input to be associated mainly with images (Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; Tao et al., 2022; Wu et al., 2022; Nichol et al., 2022; Chang et al., 2023) and videos (Blattmann et al., 2023; Hong et al., 2023; Singer et al., 2022), in this work we target an established literature whose scope is more general, and in which all modalities are considered equally important.

A large body of work relies on extensions of the Variational Autoencoder (VAE) (Kingma & Welling, 2014) to the multi-modal domain; initially interested in learning joint latent representations of multi-modal data, such works have mostly focused on generative modeling. Multi-modal generative models aim at high-quality data generation, as well as generative coherence across all modalities. These objectives apply both to joint generation of new data and to conditional generation of missing modalities, given a disjoint set of available modalities. In short, multi-modal VAEs rely on combinations of uni-modal VAEs, and the design space consists mainly in the way the uni-modal latent variables are combined to construct the joint posterior distribution. Early work such as Wu & Goodman (2018) adopts a product-of-experts approach, whereas others (Shi et al., 2019) consider a mixture-of-experts approach.
Product-based models achieve high generative quality, but suffer in terms of both joint and conditional coherence. This was found to be due to expert mis-calibration issues (Shi et al., 2019; Sutter et al., 2021). On the other hand, mixture-based models produce coherent but qualitatively poor samples. A first attempt to address the so-called coherence–quality tradeoff (Daunhawer et al., 2022) is represented by the mixture-of-products-of-experts approach (Sutter et al., 2021). However, recent comparative studies (Daunhawer et al., 2022) show that none of the existing approaches fulfill both the generative quality and the coherence criteria. A variety of techniques aim at finding a better operating point, such as contrastive learning techniques (Shi et al., 2021), hierarchical schemes (Vasco et al., 2022), total-correlation-based calibration of single-modality encoders (Hwang et al., 2021), or different training objectives (Sutter et al., 2020). More recently, the work in (Palumbo et al., 2023) considers explicitly separated shared and private latent spaces to overcome the aforementioned limitations.

By expanding on results presented in (Daunhawer et al., 2022), in Section 2 we further investigate the tradeoff between generative coherence and quality, and argue that it is intrinsic to all variants of multi-modal VAEs. We indicate two root causes of this problem: latent variable collapse (Alemi et al., 2018; Dieng et al., 2019) and information loss due to mixture sub-sampling. To tackle these issues, in this work we propose in Section 3 a new approach which uses a set of independent, uni-modal deterministic autoencoders whose latent variables are simply concatenated into a joint latent variable. Joint and conditional generative capabilities are provided by an additional model that learns a probability density associated to the joint latent variable. We propose an extension of score-based diffusion models (Song et al., 2021b) to operate on the multi-modal latent space. We thus derive both forward and backward dynamics that are compatible with the multi-modal nature of the latent data. In Section 4, we propose a novel method to train the multi-modal score network, such that it can be used for both joint and conditional generation. Our approach is based on a guidance mechanism, which we compare to alternatives. We label our approach Multi-modal Latent Diffusion (MLD).

Our experimental evaluation of MLD in Section 5 provides compelling evidence of the superiority of our approach for multi-modal generative modeling. We compare MLD to a large variety of VAE-based alternatives, on several real-life multi-modal data-sets, in terms of generative quality and both joint and conditional coherence. Our model outperforms alternatives in all possible scenarios, even those that are notoriously difficult because modalities might be only loosely correlated. Note that recent works also explore the joint generation of multiple modalities (Ruan et al., 2023; Hu et al., 2023), but such approaches are application-specific, e.g. text-to-image, and essentially only target two modalities. When relevant, we compare our method to additional recent alternatives to multi-modal diffusion (Bao et al., 2023; Wesego & Rooshenas, 2023), and show superior performance of MLD.
## 2 LIMITATIONS OF MULTI-MODAL VAEs

In this work, we consider multi-modal VAEs (Wu & Goodman, 2018; Shi et al., 2019; Sutter et al., 2021; Palumbo et al., 2023) as the standard modeling approach to tackle both joint and conditional generation of multiple modalities. Our goal here is to motivate the need to go beyond such a standard approach, to overcome limitations that affect multi-modal VAEs, which result in a trade-off between generation quality and generative coherence (Daunhawer et al., 2022; Palumbo et al., 2023).

Consider the random variable $X = \{X^1, \ldots, X^M\} \sim p_D(x^1, \ldots, x^M)$, consisting of the set of $M$ modalities sampled from the (unknown) multi-modal data distribution $p_D$. We indicate the marginal distribution of a single modality by $X^i \sim p_D(x^i)$ and the collection of a generic subset of modalities by $X^A \sim p_D(x^A)$, with $X^A \overset{\text{def}}{=} \{X^i\}_{i \in A}$, where $A \subset \{1, \ldots, M\}$ is a set of indexes. For example: given $A = \{1, 3, 5\}$, then $X^A = \{X^1, X^3, X^5\}$.

We begin by considering uni-modal VAEs as particular instances of the Markov chain $X \rightarrow Z \rightarrow \hat{X}$, where $Z$ is a latent variable and $\hat{X}$ is the generated variable. Models are specified by the two conditional distributions, called the encoder $Z \mid X = x \sim q_\psi(z \mid x)$ and the decoder $\hat{X} \mid Z = z \sim p_\theta(\hat{x} \mid z)$. Given a prior distribution $p_n(z)$, the objective is to define a generative model whose samples are distributed as closely as possible to the original data.

In the case of multi-modal VAEs, we consider the general family of Mixture of Product of Experts (MOPOE) (Sutter et al., 2021), which includes as particular cases many existing variants, such as Product of Experts (MVAE) (Wu & Goodman, 2018) and Mixture of Experts (MMVAE) (Shi et al., 2019). Formally, a collection of $K$ arbitrary subsets of modalities $S = \{A_1, \ldots, A_K\}$, along with weighting coefficients $\omega_i \geq 0$, $\sum_{i=1}^{K} \omega_i = 1$, define the posterior $q_\psi(z \mid x) = \sum_i \omega_i q_{\psi^{A_i}}(z \mid x^{A_i})$, with $\psi = \{\psi^{A_1}, \ldots, \psi^{A_K}\}$. Note that the various $q_{\psi^{A_i}}$ can have both different parameters $\psi^{A_i}$ and different functional forms. For example, in the MOPOE (Sutter et al., 2021) parametrization, we have $q_{\psi^{A_i}}(z \mid x^{A_i}) = \prod_{j \in A_i} q_{\psi^j}(z \mid x^j)$. Our exposition is more general and not limited to this assumption. The selection of the posterior can be understood as the result of a two-step procedure where i) each subset of modalities $A_i$ is encoded into a subset-specific latent variable $Y_i \sim q_{\psi^{A_i}}(\cdot \mid x^{A_i})$, and ii) the latent variable $Z$ is obtained as $Z = Y_i$ with probability $\omega_i$. Optimization is performed w.r.t. the following evidence lower bound (ELBO) (Daunhawer et al., 2022; Sutter et al., 2021):

$$L = \sum_i \omega_i \int p_D(x)\, q_{\psi^{A_i}}(z \mid x^{A_i}) \left[ \log p_\theta(x \mid z) - \log \frac{q_{\psi^{A_i}}(z \mid x^{A_i})}{p_n(z)} \right] dz\, dx. \quad (1)$$

A well-known limitation, the latent collapse problem (Alemi et al., 2018; Dieng et al., 2019), affects the quality of the latent variables $Z$. Consider the hypothetical case of arbitrarily flexible encoders and decoders: then, posteriors with zero mutual information with respect to the model inputs are valid maximizers of Equation (1).
To prove this, it is sufficient to substitute the posteriors $q_{\psi^{A_i}}(z \mid x^{A_i}) = p_n(z)$ and $p_\theta(x \mid z) = p_D(x)$ into Equation (1) to observe that the optimal value $L = \int p_D(x) \log p_D(x) dx$ is achieved (Alemi et al., 2018; Dieng et al., 2019).

The problem of information loss is exacerbated in the case of multi-modal VAEs (Daunhawer et al., 2022). Intuitively, even if the encoders $q_{\psi^{A_i}}(z \mid x^{A_i})$ carry relevant information about their inputs $X^{A_i}$, step ii) of the multi-modal encoding procedure described above induces a further information bottleneck. A fraction $\omega_i$ of the time, the latent variable $Z$ will be a copy of $Y_i$, which only provides information about the subset $X^{A_i}$. No matter how good the encoding step is, the information about $X^{\{1,\ldots,M\}\setminus A_i}$ that is not contained in $X^{A_i}$ cannot be retrieved. Furthermore, if the latent variable carries zero mutual information w.r.t. the multi-modal input, coherent conditional generation of a set of modalities given others is impossible, since $\hat{X}^{A_1} \perp X^{A_2}$ for any generic sets $A_1, A_2$.

While the factorization $p_\theta(x \mid z) = \prod_{i=1}^M p_{\theta_i}(x^i \mid z)$, $\theta = \{\theta_1, \ldots, \theta_M\}$ (where we use $p_{\theta_i}$ instead of $p_\theta$ to unclutter the notation) could enforce preservation of information and guarantee a better quality of the jointly generated data, in practice the latent collapse phenomenon induces multi-modal VAEs to converge toward a sub-optimal operating regime. When the posterior $q_\psi(z \mid x)$ collapses onto the uninformative prior $p_n(z)$, the ELBO in Equation (1) reduces to the sum of modality-independent reconstruction terms $\sum_i \omega_i \sum_{j \in A_i} \int p_D(x^j)\, p_n(z) \log p_{\theta_j}(x^j \mid z)\, dz\, dx^j$. In this case, flexible decoders can similarly ignore the latent variable and converge to the solution $p_{\theta_j}(x^j \mid z) = p_D(x^j)$ where, paradoxically, the quality of the approximation of the various marginal distributions is extremely high, while there is a complete lack of joint coherence.

General principles to avoid latent collapse consist in explicitly forcing the learning of informative encoders $q_\psi(z \mid x)$ via $\beta$-annealing of the Kullback–Leibler (KL) term in the ELBO, and in reducing the representational power of encoders and decoders. While $\beta$-annealing has been explored in the literature (Wu & Goodman, 2018) with limited improvements, reducing the flexibility of encoders/decoders clearly impacts the generation quality. Hence the presence of a trade-off: to improve coherence, the flexibility of encoders/decoders should be constrained, which in turn hurts generative quality. This trade-off has been recently addressed in the literature on multi-modal VAEs (Daunhawer et al., 2022; Palumbo et al., 2023), but our experimental results in Section 5 indicate that there is ample room for improvement, and that a new approach is truly needed.

## 3 Our Approach: Multi-modal Latent Diffusion

We propose a new method for multi-modal generative modeling that, by design, does not suffer from the limitations discussed in Section 2. Our objective is to enable both high-quality and coherent joint/conditional data generation, using a simple design (see Appendix A for a schematic representation).
As an overview, we use deterministic uni-modal autoencoders, whereby each modality $X^i$ is encoded through its encoder $e_i$ (a short form for $e_{\psi^i}$) into the modality-specific latent variable $Z^i$, and decoded into the corresponding $\hat{X}^i = d_{\theta^i}(Z^i)$. Our approach can be interpreted as a latent variable model where the different latent variables $Z^i$ are concatenated as $Z = [Z^1, \ldots, Z^M]$. This corresponds to the parametrization of the two conditional distributions as $q_\psi(z \mid x) = \prod_{i=1}^M \delta(z^i - e_{\psi^i}(x^i))$ and $p_\theta(\hat{x} \mid z) = \prod_{i=1}^M \delta(\hat{x}^i - d_{\theta^i}(z^i))$, respectively. Then, in place of an ELBO, we optimize the parameters of our autoencoders by minimizing the following sum of modality-specific losses:
$$L = \sum_{i=1}^{M} L_i, \quad L_i = \int p_D(x^i)\, l^i\big(x^i - d_{\theta^i}(e_{\psi^i}(x^i))\big)\, dx^i, \quad (2)$$
where $l^i$ can be any valid distance function, e.g. the squared norm $\|\cdot\|_2^2$. Parameters $\psi^i, \theta^i$ are modality-specific; then, minimization of Equation (2) corresponds to individual training of the different autoencoders. Since the mapping from input to latent is deterministic, there is no loss of information between $X$ and $Z$.\(^1\) Moreover, this choice avoids any form of interference in the back-propagated gradients corresponding to the uni-modal reconstruction losses. Consequently, gradient conflict issues (Javaloy et al., 2022), where stronger modalities pollute weaker ones, are avoided.

To enable such a simple design to become a generative model, it is sufficient to generate samples from the induced latent distribution $Z \sim q_\psi(z) = \int p_D(x)\, q_\psi(z \mid x)\, dx$ and decode them as $\hat{X} = d_\theta(Z) = [d_{\theta^1}(Z^1), \ldots, d_{\theta^M}(Z^M)]$. To obtain such samples, we follow the two-stage procedure described in (Loaiza-Ganem et al., 2022; Tran et al., 2021), where samples from the lower-dimensional $q_\psi(z)$ are obtained through an appropriate generative model. We consider score-based diffusion models in latent space (Rombach et al., 2022; Vahdat et al., 2021) to solve this task, and call our approach Multi-modal Latent Diffusion (MLD). It may be helpful to clarify, at this point, that the two stages of MLD are trained separately: the uni-modal deterministic autoencoders are pre-trained first, followed by the training of the score-based diffusion model, which is explained in more detail later. To conclude the overview of our method, for joint data generation, one can sample from noise, perform backward diffusion, and then decode the generated multi-modal latent variable to obtain the corresponding data samples. For conditional data generation, given one modality, the reverse diffusion is guided by this modality, while the other modalities are generated by sampling from noise. The generated latent variable is then decoded to obtain data samples of the missing modality.

### 3.1 Joint and Conditional Multi-modal Latent Diffusion Processes

In the first stage of our method, the deterministic encoders project the input modalities $X^i$ into the corresponding latent spaces $Z^i$. This transformation induces a distribution $q_\psi(z)$ for the latent variable $Z = [Z^1, \ldots, Z^M]$, resulting from the concatenation of the uni-modal latent variables.
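A minimal PyTorch sketch of this first stage follows; the architectures and dimensions are our own illustrative choices, not the ones used in the experiments.

```python
import torch
import torch.nn as nn

class ModalityAE(nn.Module):
    """Deterministic uni-modal autoencoder: one per modality, trained
    independently, so gradients never mix across modalities."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

# e.g. two modalities with flattened dims 784 and 32, latent dims 16 and 8
aes = nn.ModuleList([ModalityAE(784, 16), ModalityAE(32, 8)])

def first_stage_loss(xs):
    """Sum of modality-specific squared-error losses, as in Equation (2)."""
    return sum(((ae.dec(ae.enc(x)) - x) ** 2).mean()
               for ae, x in zip(aes, xs))

def encode_joint(xs):
    """Joint latent Z = [Z^1, ..., Z^M] by simple concatenation."""
    return torch.cat([ae.enc(x) for ae, x in zip(aes, xs)], dim=-1)
```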
**Joint generation.** To generate a new sample for all modalities, we use a simple score-based diffusion model in latent space (Sohl-Dickstein et al., 2015; Song et al., 2021b; Vahdat et al., 2021; Loaiza-Ganem et al., 2022; Tran et al., 2021). This requires reversing a stochastic noising process, starting from a simple, Gaussian distribution. Formally, the noising process is defined by a Stochastic Differential Equation (SDE) of the form:
$$dR_t = \alpha(t) R_t\, dt + g(t)\, dW_t, \quad R_0 \sim q(r, 0), \quad (3)$$
where $\alpha(t)$ and $g(t)$ are the drift and diffusion terms, respectively, and $W_t$ is a Wiener process. The time-varying probability density $q(r, t)$ of the stochastic process at time $t \in [0, T]$, where $T$ is finite, satisfies the Fokker–Planck equation (Oksendal, 2013), with initial conditions $q(r, 0)$. We assume uniqueness and existence of a stationary distribution $\rho(r)$ for the process in Equation (3). The forward diffusion dynamics depend on the initial conditions $R_0 \sim q(r, 0)$. We consider $R_0 = Z$ to be the initial condition for the diffusion process, which is equivalent to $q(r, 0) = q_\psi(r)$. Under loose conditions (Anderson, 1982), a time-reversed stochastic process exists, with a new SDE of the form:
$$dR_t = \big(-\alpha(T-t) R_t + g^2(T-t) \nabla \log q(R_t, T-t)\big)\, dt + g(T-t)\, dW_t, \quad R_0 \sim q(r, T), \quad (4)$$
indicating that, in principle, simulation of Equation (4) allows to generate samples from the desired distribution $q(r, 0)$. In practice, we use a parametric score network $s_\chi(r, t)$ to approximate the true score function, and we approximate $q(r, T)$ with the stationary distribution $\rho(r)$. Indeed, the generated data distribution $q(r, 0)$ is close (in the KL sense) to the true density, as described by Song et al. (2021a); Franzese et al. (2023):
$$\text{KL}[q_\psi(r)\, \|\, q(r, 0)] \leq \frac{1}{2} \int_0^T g^2(t)\, \mathbb{E}\big[\| s_\chi(R_t, t) - \nabla \log q(R_t, t) \|^2\big]\, dt + \text{KL}[q(r, T)\, \|\, \rho(r)], \quad (5)$$
where the first term on the r.h.s. is referred to as the score-matching objective, and is the loss over which the score network is optimized, while the second is a vanishing term for $T \to \infty$. To conclude, joint generation of all modalities is achieved through the simulation of the reverse-time SDE in Equation (4), followed by a simple decoding procedure. Indeed, optimally trained decoders (achieving zero in Equation (2)) can be used to transform $Z \sim q_\psi(z)$ into samples from $\int p_\theta(x \mid z)\, q_\psi(z)\, \mathrm{d}z = p_D(x)$.

---

\(^1\)Since the measures are not absolutely continuous w.r.t. the Lebesgue measure, the mutual information is $+\infty$. This is not necessary for the validity of the method (Song et al., 2021a).
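For illustration, below is a minimal Euler–Maruyama simulation of the reverse SDE (4) followed by decoding. This is a sketch of ours assuming a constant-coefficient Ornstein–Uhlenbeck forward process (whose stationary distribution $\rho$ is Gaussian); the trained `score_net` and the stage-one `decoders` are assumed given.

```python
import torch

def joint_generation(score_net, decoders, z_dims, n_steps=250, T=1.0, g=1.0):
    """Simulate (4) backward from R_0 ~ rho(r), then decode every modality.

    Assumed forward process: dR = -0.5 * R dt + g dW, so rho = N(0, g^2 I).
    """
    dt = T / n_steps
    r = g * torch.randn(1, sum(z_dims))          # R_0 ~ rho(r), approx. q(r, T)
    for k in range(n_steps):
        t = T - k * dt                            # forward-time argument of (4)
        drift = 0.5 * r + g ** 2 * score_net(r, t)   # -alpha(t) R + g^2 * score
        r = r + drift * dt + g * (dt ** 0.5) * torch.randn_like(r)
    parts = torch.split(r, z_dims, dim=-1)        # split Z = [Z^1, ..., Z^M]
    return [dec(z) for dec, z in zip(decoders, parts)]
```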
**Conditional generation.** Given a generic partition of all modalities into two non-overlapping sets $A_1 \cup A_2$, where $A_2 = \{1, \ldots, M\} \setminus A_1$, conditional generation requires samples from the conditional distribution $q_\psi(z^{A_1} \mid z^{A_2})$, which are obtained through masked forward and backward diffusion processes. Given the conditioning latent modalities $z^{A_2}$, we consider a modified forward diffusion process with initial conditions $R_0 = C(R_0^{A_1}, R_0^{A_2})$, with $R_0^{A_1} \sim q_\psi(r^{A_1} \mid z^{A_2})$ and $R_0^{A_2} = z^{A_2}$. The composition operation $C(\cdot)$ concatenates the generated ($R^{A_1}$) and the conditioning ($z^{A_2}$) latents. As an illustration, consider $A_1 = \{1,3,5\}$, such that $X^{A_1} = \{X^1, X^3, X^5\}$, and $A_2 = \{2,4,6\}$, such that $X^{A_2} = \{X^2, X^4, X^6\}$. Then, $R_0 = C(R_0^{A_1}, R_0^{A_2}) = C(R_0^{A_1}, z^{A_2}) = [R_0^1, z^2, R_0^3, z^4, R_0^5, z^6]$.

More formally, we define the masked forward diffusion SDE:
$$\mathrm{d}R_t = m(A_1) \odot [\alpha(t) R_t\, \mathrm{d}t + g(t)\, \mathrm{d}W_t], \quad q(r, 0) = q_\psi(r^{A_1} \mid z^{A_2})\, \delta(r^{A_2} - z^{A_2}). \quad (6)$$
The mask $m(A_1)$ contains $M$ vectors $u^i$, one per modality, each with the corresponding dimensionality. If modality $j \in A_1$, then $u^j = 1$; otherwise $u^j = 0$. The effect of masking is thus to "freeze", throughout the diffusion process, the part of the random variable $R_t$ corresponding to the conditioning latent modalities $z^{A_2}$. We naturally associate to this modified forward process the conditional time-varying density $q(r, t \mid z^{A_2}) = q(r^{A_1}, t \mid z^{A_2})\, \delta(r^{A_2} - z^{A_2})$. To sample from $q_\psi(z^{A_1} \mid z^{A_2})$, we derive the reverse-time dynamics of Equation (6) as follows:
$$\mathrm{d}R_t = m(A_1) \odot \big[\big(-\alpha(T-t) R_t + g^2(T-t) \nabla \log q(R_t, T-t \mid z^{A_2})\big)\, \mathrm{d}t + g(T-t)\, \mathrm{d}W_t\big], \quad (7)$$
with initial conditions $R_0 = C(R_0^{A_1}, z^{A_2})$ and $R_0^{A_1} \sim q(r^{A_1}, T \mid z^{A_2})$. Then, we approximate $q(r^{A_1}, T \mid z^{A_2})$ by its corresponding steady-state distribution $\rho(r^{A_1})$, and the true (conditional) score function $\nabla \log q(r, t \mid z^{A_2})$ by a conditional score network $s_\chi(r^{A_1}, t \mid z^{A_2})$.

## 4 Guidance Mechanisms to Learn the Conditional Score Network

A correctly optimized score network $s_\chi(r, t)$ allows, through simulation of Equation (4), to obtain samples from the joint distribution $q_\psi(z)$. Similarly, a conditional score network $s_\chi(r^{A_1}, t \mid z^{A_2})$ allows, through the simulation of Equation (7), to sample from $q_\psi(z^{A_1} \mid z^{A_2})$. In Section 4.1, we extend guidance mechanisms used in classical diffusion models to allow multi-modal conditional generation. A naïve alternative is to rely on the unconditional score network $s_\chi(r, t)$ for the conditional generation task, by casting it as an in-painting objective. Intuitively, any missing modality could be recovered in the same way as a uni-modal diffusion model can recover masked information. In Section 4.2, we discuss the implicit assumptions underlying in-painting from an information-theoretic perspective, and argue that, in the context of multi-modal data, such assumptions are difficult to satisfy. Our intuition is corroborated by ample empirical evidence, where our method consistently outperforms alternatives.

### 4.1 Multi-time Diffusion

We propose a modification of the classifier-free guidance technique (Ho & Salimans, 2022) to learn a score network that can generate conditional and unconditional samples from any subset of modalities. Instead of training a separate score network for each possible combination of conditional modalities, which is computationally infeasible, we use a single architecture that accepts all modalities as inputs and a multi-time vector $\tau = [t_1, \ldots, t_M]$. The multi-time vector serves two purposes: it is both a conditioning signal and the time at which we observe the diffusion process.

**Training:** learning the conditional score network relies on randomization.
As discussed in Section 3.1, we consider an arbitrary partitioning of all modalities into two disjoint sets, $A_1$ and $A_2$. The set $A_2$ contains randomly selected conditioning modalities, while the remaining modalities belong to the set $A_1$. Then, during training, the parametric score network estimates $\nabla \log q(r, t \mid z^{A_2})$, whereby the set $A_2$ is randomly chosen at every step. This is achieved by the masked diffusion process from Equation (6), which only diffuses modalities in $A_1$. More formally, the score network input is $R_t = C(R_t^{A_1}, Z^{A_2})$, along with a multi-time vector $\tau(A_1, t) = [t\, \mathbb{1}(1 \in A_1), \ldots, t\, \mathbb{1}(M \in A_1)]$. As a follow-up of the example in Section 3.1, given $A_1 = \{1,3,5\}$, such that $X^{A_1} = \{X^1, X^3, X^5\}$, and $A_2 = \{2,4,6\}$, such that $X^{A_2} = \{X^2, X^4, X^6\}$, then $\tau(A_1, t) = [t, 0, t, 0, t, 0]$.

More precisely, the algorithm for multi-time diffusion training (see Appendix A for the pseudo-code) proceeds as follows. At each step, a set of conditioning modalities $A_2$ is sampled from a predefined distribution $\nu$, where $\nu(\emptyset) \equiv \Pr(A_2 = \emptyset) = d$, and $\nu(U) \equiv \Pr(A_2 = U) = (1-d)/(2^M-1)$ with $U \in \mathcal{P}(\{1, \ldots, M\}) \setminus \emptyset$, where $\mathcal{P}(\{1, \ldots, M\})$ is the powerset of all modalities. The corresponding set $A_1$ and mask $m(A_1)$ are constructed, and a sample $X$ is drawn from the training data-set. The corresponding latent variables $Z^{A_1} = \{e_{\psi^i}(X^i)\}_{i \in A_1}$ and $Z^{A_2} = \{e_{\psi^i}(X^i)\}_{i \in A_2}$ are computed using the pre-trained encoders, and a diffusion process starting from $R_0 = C(Z^{A_1}, Z^{A_2})$ is simulated for a randomly chosen diffusion time $t$, using the conditional forward SDE with the mask $m(A_1)$. The score network is then fed the current state $R_t$ and the multi-time vector $\tau(A_1, t)$, and the difference between the score network's prediction and the true score is computed, applying the mask $m(A_1)$. The score network parameters are updated using stochastic gradient descent, and this process is repeated for a total of $L$ training steps. Clearly, when $A_2 = \emptyset$, training proceeds as for an un-masked diffusion process, since the mask $m(A_1)$ allows all latent variables to be diffused.

**Conditional generation:** any valid numerical integration scheme for Equation (7) can be used for conditional sampling (see Appendix A for an implementation using the Euler–Maruyama integrator). First, the conditioning modalities in the set $A_2$ are encoded into the corresponding latent variables $z^{A_2} = \{e_{\psi^j}(x^j)\}_{j \in A_2}$. Then, numerical integration is performed with step-size $\Delta t = T/N$, starting from the initial conditions $R_0 = C(R_0^{A_1}, z^{A_2})$, with $R_0^{A_1} \sim \rho(r^{A_1})$. At each integration step, the score network $s_\chi$ is fed the current state of the process and the multi-time vector $\tau(A_1, \cdot)$. Before updating the state, the masking is applied. Finally, the generated modalities are obtained thanks to the decoders, as $\hat{X}^{A_1} = \{d_{\theta^j}(R_T^j)\}_{j \in A_1}$. Inference-time conditional generation is not randomized: the conditioning modalities are the ones that are available, whereas the remaining ones are those we wish to generate.
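To fix ideas, here is a compact sketch (ours) of one randomized training step. It assumes a constant-coefficient Ornstein–Uhlenbeck forward process with its closed-form perturbation kernel, uses a plain denoising score-matching loss (loss weighting omitted), simplifies the subset distribution $\nu$ to independent coin flips, and assumes a `score_net(r, tau)` signature taking the multi-time vector.

```python
import torch

def multi_time_training_step(score_net, encoders, xs, z_dims,
                             d_uncond=0.5, T=1.0, a=0.5, g=1.0):
    """One training step of the multi-time masked diffusion (sketch).

    Forward process dR = -a R dt + g dW, so R_t | R_0 ~ N(e^{-a t} R_0, var_t I)
    with var_t = g^2 (1 - e^{-2 a t}) / (2 a). Modalities in the sampled
    conditioning set A2 stay frozen and get multi-time entry 0; the others
    are diffused to a common random time t and carry multi-time entry t.
    """
    M = len(xs)
    z = [enc(x).detach() for enc, x in zip(encoders, xs)]   # frozen stage one
    if torch.rand(()) < d_uncond:
        A2 = set()                                          # unconditional step
    else:
        A2 = {i for i in range(M) if torch.rand(()) < 0.5}  # simplified nu
        if len(A2) == M:
            A2 = set()                   # keep at least one modality diffusing
    t = torch.rand(()) * T
    mean_scale = torch.exp(-a * t)
    var = g ** 2 * (1 - torch.exp(-2 * a * t)) / (2 * a)
    noisy, tau = [], []
    for i in range(M):
        if i in A2:                      # frozen conditioning modality
            noisy.append(z[i]); tau.append(torch.zeros(()))
        else:                            # diffused modality
            noisy.append(mean_scale * z[i] + var.sqrt() * torch.randn_like(z[i]))
            tau.append(t)
    r = torch.cat(noisy, dim=-1)
    pred = score_net(r, torch.stack(tau))
    loss, off = 0.0, 0
    for i, zd in enumerate(z_dims):      # masked score-matching loss
        if i not in A2:
            target = -(r[..., off:off + zd] - mean_scale * z[i]) / var
            loss = loss + ((pred[..., off:off + zd] - target) ** 2).mean()
        off += zd
    return loss
```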
Any-to-any multi-modality has been recently studied through the composition of modality-specific diffusion models (Tang et al., 2023), by designing cross-attention and training procedures that allow arbitrary conditional generation. The work by Tang et al. (2023) relies on latent interpolation of input modalities, which is akin to mixture models, and uses it as a conditioning signal for the individual diffusion models. This is substantially different from the joint nature of the multi-modal latent diffusion we present in our work: instead of forcing entanglement through cross-attention between score networks, our model relies on a joint diffusion process, whereby modalities naturally co-evolve according to the diffusion dynamics. Another recent work (Wu et al., 2023) targets multi-modal conversational agents, whereby the strong, underlying assumption is to consider one modality, i.e. text, as a guide for the alignment and generation of other modalities. Even if conversational objectives are orthogonal to our work, techniques akin to instruction following for cross-generation are an interesting illustration of the powerful capabilities of in-context learning of LLMs (Xie et al., 2022; Min et al., 2022).

### 4.2 IN-PAINTING AND ITS IMPLICIT ASSUMPTIONS

Under certain assumptions, given an unconditional score network $s_\chi(r, t)$ that approximates the true score $\nabla \log q(r, t)$, it is possible to obtain a conditional score network $s_\chi(r^{A_1}, t \mid z^{A_2})$ to approximate $\nabla \log q(r^{A_1}, t \mid z^{A_2})$. We start by observing the equality:
$$q(r^{A_1}, t \mid z^{A_2}) = \int q(C(r^{A_1}, r^{A_2}), t \mid z^{A_2})\, dr^{A_2} = \int \frac{q(z^{A_2} \mid C(r^{A_1}, r^{A_2}), t)}{q_\psi(z^{A_2})}\, q(C(r^{A_1}, r^{A_2}), t)\, dr^{A_2}, \quad (8)$$
where, with a slight abuse of notation, we indicate with $q(z^{A_2} \mid C(r^{A_1}, r^{A_2}), t)$ the density associated to the event: the portion corresponding to $A_2$ of the latent variable $Z$ is equal to $z^{A_2}$, given that the whole diffused latent $R_t$ at time $t$ is equal to $C(r^{A_1}, r^{A_2})$. In the literature, the quantity $q(z^{A_2} \mid C(r^{A_1}, r^{A_2}), t)$ is typically approximated by dropping its dependency on $r^{A_1}$.
The positive quantity $\Delta$ is close to zero whenever the rate of loss of information w.r.t. the initial conditions is similar for the two subsets $A_1, A_2$. In other terms, $\Delta \approx 0$ whenever, out of the whole $R_t$, the portion $R_t^{A_2}$ is a sufficient statistic for $Z^{A_2}$. The assumptions underlying the approximation are in general not valid in the case of multi-modal learning, where the robustness to stochastic perturbations of the latent variables corresponding to the various modalities can vary greatly. Our claims are supported empirically by an ample analysis on real data in B, where we show that the multi-time diffusion approach consistently outperforms in-painting.

## 5 EXPERIMENTS

We compare our method MLD to MVAE (Wu & Goodman, 2018), MMVAE (Shi et al., 2019), MOPOE (Sutter et al., 2021), the Hierarchical Generative Model NEXUS (Vasco et al., 2022), the Multi-view Total Correlation Autoencoder (MVTCAE) (Hwang et al., 2021), and MMVAE+ (Palumbo et al., 2023), re-implementing all competitors in the same code base as our method and selecting their best hyper-parameters (as indicated by the authors). For a fair comparison, we use the same encoder/decoder architecture for all the models. For MLD, the score network is implemented using a simple stacked multilayer perceptron (MLP) with skip connections (see A for more details).

**Evaluation metrics.** Coherence is measured as in Shi et al. (2019); Sutter et al. (2021); Palumbo et al. (2023), using pre-trained classifiers on the generated data and checking the consistency of their outputs. Generative quality is computed using the Fréchet Inception Distance (FID) (Heusel et al., 2017) and Fréchet Audio Distance (FAD) (Kilgour et al., 2019) scores for images and audio, respectively. Full details on the metrics are included in C. All results are averaged over 5 seeds (we report standard deviations in E).

**Results.** Overall, MLD largely outperforms alternatives from the literature, both in terms of coherence and generative quality. VAE-based models suffer from a coherence-quality trade-off and from modality collapse for highly heterogeneous data-sets. We proceed to show this on several standard benchmarks from the multi-modal VAE-based literature (see C for details on the data-sets). The first data-set we consider is MNIST-SVHN (Shi et al., 2019), where the two modalities differ in complexity. High variability, noise and ambiguity make attaining good coherence for the SVHN modality a challenging task. Overall, MLD outperforms all VAE-based alternatives in terms of coherence, especially in terms of joint generation and conditional generation of MNIST given SVHN; see Table 1. Mixture models (MMVAE, MOPOE) suffer from modality collapse (poor SVHN generation), whereas products of experts (MVAE, MVTCAE) generate better-quality samples at the expense of SVHN-to-MNIST conditional coherence. Joint generation is poor for all VAE models. Interestingly, these models also fail at SVHN self-reconstruction, which we discuss in E. MLD achieves the best performance also in terms of generation quality, as confirmed by the qualitative results (Figure 1), which show, for example, how MLD conditionally generates multiple SVHN digits within one sample given the input MNIST image, whereas other methods fail to do so. The Multi-modal Handwritten Digits data-set (MHD) (Vasco et al., 2022) contains gray-scale digit images, motion trajectories of the hand writing, and sounds of the spoken digits.
In our experiments, we do not use the label as a fourth modality. While the digit image and trajectory share a good amount of information, the sound modality contains much more modality-specific variation. Consequently, conditional generation involving the sound modality and joint generation are challenging tasks.

Table 1: Generation coherence and quality for MNIST-SVHN (M: MNIST, S: SVHN). Coherence is in % (higher is better); generation quality (lower is better) is measured in terms of Fréchet Modality Distance (FMD) for MNIST and FID for SVHN.

| Models | Coherence: Joint ↑ | Coherence: M→S ↑ | Coherence: S→M ↑ | Quality: Joint(M) ↓ | Quality: Joint(S) ↓ | Quality: M→S ↓ | Quality: S→M ↓ |
|---|---|---|---|---|---|---|---|
| MVAE | 38.19 | 48.21 | 28.57 | 13.34 | 68.9 | 68.0 | 13.66 |
| MMVAE | 37.82 | 11.72 | 67.55 | 25.89 | 146.82 | 393.33 | 53.37 |
| MOPOE | 39.93 | 12.27 | 68.82 | 20.11 | 129.2 | 373.73 | 43.34 |
| NEXUS | 40.0 | 16.68 | 70.67 | 13.84 | 98.13 | 281.28 | 53.41 |
| MVTCAE | 48.78 | 81.57 | 49.78 | 12.98 | 52.95 | 62.4 | 35.55 |
| MMVAE+ | 47.75 | 13.23 | 29.69 | 36.96 | 121.77 | 240.90 | 38.11 |
| MMVAE+(K=10) | 41.59 | 55.3 | 56.41 | 19.05 | 67.13 | 75.9 | 18.16 |
| MLD (ours) | 85.22 | 83.79 | 79.13 | 3.93 | 56.36 | 57.2 | 3.67 |

Figure 1: Qualitative results for MNIST-SVHN. For each model we report MNIST-to-SVHN conditional generation on the left and SVHN-to-MNIST conditional generation on the right.

Coherence-wise (Table 2), MLD outperforms all the competitors, with the biggest difference seen in joint generation and in generation from sound to the other modalities (in the latter task MVTCAE performs better than the other competitors but is still worse than MLD). MLD also dominates the alternatives in terms of generation quality (Table 3). This is true both for the image and sound modalities, for which some VAE-based models struggle to produce high-quality results, demonstrating the limitation of these methods in handling highly heterogeneous modalities. MLD, on the other hand, achieves high generation quality for all modalities, possibly due to the independent training of the autoencoders, which avoids interference.

Table 2: Generation coherence (%) for MHD (higher is better). Each column header gives the generated modality (I: image, T: trajectory, S: sound) and the subset of observed modalities it is conditioned on.

| Models | Joint | I ← T | I ← S | I ← T,S | T ← I | T ← S | T ← I,S | S ← I | S ← T | S ← I,T |
|---|---|---|---|---|---|---|---|---|---|---|
| MVAE | 37.77 | 11.68 | 26.46 | 28.4 | 95.55 | 26.66 | 96.58 | 58.87 | 10.76 | 58.16 |
| MMVAE | 34.78 | 99.7 | 69.69 | 84.74 | 99.3 | 85.46 | 92.39 | 49.95 | 50.14 | 50.17 |
| MOPOE | 48.84 | 99.64 | 68.67 | 99.69 | 99.28 | 87.42 | 99.35 | 50.73 | 51.5 | 56.97 |
| NEXUS | 26.56 | 99.57 | 85.77 | 95.27 | 88.51 | 93.22 | 70.06 | 75.84 | 89.48 | – |
| MVTCAE | 42.55 | 99.54 | 72.05 | 99.63 | 99.22 | 72.03 | 92.49 | 92.98 | 98.97 | 98.97 |
| MMVAE+ | 41.67 | 98.05 | 84.16 | 91.88 | 97.47 | 81.16 | 89.31 | 64.34 | 65.42 | 64.88 |
| MMVAE+(K=10) | 42.60 | 99.44 | 89.75 | 94.7 | 99.44 | 89.58 | 95.01 | 87.15 | 87.99 | 87.57 |
| MLD (ours) | 98.34 | 99.45 | 88.91 | 99.88 | 99.58 | 88.92 | 99.91 | 97.63 | 97.7 | 98.01 |

The POLYMNIST data-set (Sutter et al., 2021) consists of 5 modalities synthetically generated by using MNIST digits and varying the background images. The homogeneous nature of the modalities is expected to mitigate gradient-conflict issues in VAE-based models, and consequently to reduce modality collapse. However, MLD still outperforms all alternatives, as shown in Figure 2.
Concerning generation coherence, MLD achieves the best performance in all cases, with the single exception of the case of a single observed modality. On the qualitative side, not only is MLD superior to the alternatives, but its results remain stable as more modalities are considered, a capability that not all competitors share. Finally, we explore the Caltech Birds (CUB) data-set (Shi et al., 2019), following the same experimentation protocol as Daunhawer et al. (2022) by using real bird images (instead of ResNet features as in Shi et al. (2019)). Figure 3 presents qualitative results for caption-to-image conditional generation. MLD is the only model capable of generating bird images with convincing coherence. Clearly, none of the VAE-based methods is able to achieve sufficient caption-to-image conditional generation quality using the same simple autoencoder architecture. Note that an image autoencoder with larger capacity considerably improves MLD's generative performance, suggesting that careful engineering applied to modality-specific autoencoders is a promising avenue for future work. We report quantitative results in E, where we show the FID generation quality metric. Due to the unavailability of labels in this data-set, coherence evaluation as with the previous data-sets is not possible. We therefore resort to the CLIP-Score (CLIP-S) (Hessel et al., 2021), an image-captioning metric, which, despite its limitations for the considered data-set (Kim et al., 2022), shows that MLD outperforms the competitors.

Table 3: Generation quality for MHD in terms of FMD for the image and trajectory modalities and FAD for the sound modality (lower is better). Column headers give the generated modality (I: image, T: trajectory, S: sound) and the observed subset; cells marked "–" were unreadable in the source.

| Models | I: Joint | I ← T | I ← S | I ← T,S | T: Joint | T ← I | T ← S | T ← I,S | S: Joint | S ← I | S ← T | S ← I,T |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MVAE | 93.73 | 92.55 | 14.68 | 39.51 | 20.42 | 38.77 | 19.25 | 14.14 | 14.08 | 14.47 | – | – |
| MMVAE | 224.01 | 16.29 | 8.38 | 170.41 | 10.65 | 0.85 | 69.91 | 122.61 | 10.42 | 10.01 | – | – |
| MOPOE | 147.81 | 16.29 | 8.38 | 15.89 | 13.92 | 0.52 | 33.38 | 0.53 | 18.53 | 24.11 | 23.93 | – |
| NEXUS | 281.76 | 116.65 | 282.34 | 117.24 | 18.59 | 6.67 | 33.01 | 7.54 | 13.99 | 19.52 | 18.71 | 16.3 |
| MVTCAE | 121.15 | 2.80 | 128.56 | 113.5 | 22.37 | 1.21 | 21.74 | 15.2 | 16.12 | 17.31 | 17.92 | 17.58 |
| MMVAE+ | 97.19 | 1.83 | 70.72 | 62.43 | 21.10 | 1.38 | 8.52 | 7.22 | 14.58 | 14.33 | 14.34 | 14.32 |
| MMVAE+(K=10) | 85.98 | 1.83 | 70.72 | 62.43 | 21.10 | 1.38 | 8.52 | 7.22 | 14.58 | 14.33 | 14.34 | 14.32 |
| MLD (ours) | 7.98 | 1.7 | 4.54 | 1.84 | 3.18 | 0.83 | 2.07 | 0.6 | 2.39 | 2.31 | 2.33 | 2.29 |

Figure 2: Results for the POLYMNIST data-set. Left: comparison of the generation coherence (%, ↑) and quality in terms of FID (↓) as a function of the number of inputs; we report the average performance following the leave-one-out strategy (see C). Right: qualitative results for the joint generation of the 5 modalities.

## 6 CONCLUSION AND LIMITATIONS

We have presented a new multi-modal generative model, Multimodal Latent Diffusion (MLD), to address the well-known coherence–quality tradeoff inherent in existing multi-modal VAE-based models. MLD uses a set of independently trained, uni-modal, deterministic autoencoders. The generative properties of our model stem from a masked diffusion process that operates on latent variables. We also developed a new multi-time training method to learn the conditional score network for multi-modal diffusion.
An extensive experimental campaign on various real-life data-sets provided compelling evidence of the effectiveness of MLD for multi-modal generative modeling. In all scenarios, including cases with loosely correlated modalities and high-resolution data-sets, MLD consistently outperformed the alternatives from the state of the art.

Figure 3: Qualitative results on the CUB data-set. The caption is used as the condition to generate the bird images. MLD* denotes the version of our method using a more powerful image autoencoder.

REFERENCES

Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken ELBO. In *International Conference on Machine Learning*, pp. 159–168. PMLR, 2018.

Brian D. O. Anderson. Reverse-time diffusion equation models. *Stochastic Processes and their Applications*, 12(3):313–326, 1982.

Luigi Antelmi, Nicholas Ayache, Philippe Robert, and Marco Lorenzi. Sparse multi-channel variational autoencoder for the joint analysis of heterogeneous data. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 302–311. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/antelmi19a.html.

Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J. Fleet. Synthetic data from diffusion models improves ImageNet classification, 2023.

Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, and Jun Zhu. One transformer fits all distributions in multi-modal diffusion at scale, 2023.

Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models, 2023.

Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, and Dilip Krishnan. Muse: Text-to-image generation via masked generative transformers, 2023.

Matthieu Da Silva–Filarder, Andrea Ancora, Maurizio Filippone, and Pietro Michiardi. Multimodal variational autoencoders for sensor fusion and cross generation. In *2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)*, pp. 1069–1076, 2021. doi: 10.1109/ICMLA52953.2021.00175.

Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, and Julia E. Vogt. On the limitations of multimodal VAEs. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=w-CPUXXrA7.

Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. Avoiding latent variable collapse with generative skip models. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 2397–2405. PMLR, 2019.

Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Jimenez Rezende, and Dan Rosenbaum. From data to functa: Your data point is a function and you can treat it like one. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 5694–5725. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/dupont22a.html.

Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, and Pietro Michiardi. How much is enough? A study on diffusion times in score-based generative models.
*Entropy*, 25(4), 2023. ISSN 1099-4300. doi: 10.3390/e25040633. URL https://www.mdpi.com/1099-4300/25/4/633.

Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=nUmCcZ5RKF.

Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. CLIPScore: A reference-free evaluation metric for image captioning. *arXiv preprint arXiv:2104.08718*, 2021.
u3RJbzzBZj
The paper mentions that "The Placeholder technique shares similarities with the currently popular Masking technique in the unsupervised pretraining domain"; however, this claim requires further explanation to strengthen its validity.
null
m2NVG4Htxs
The pass rate is significantly lower for easy and medium problems, even for log(Github Presence) = 0. I understand that GitHub Presence is a proxy, but I would think that log(GitHub Presence) = 0 is our best guess for
TO THE CUTOFF... AND BEYOND? A LONGITUDINAL PERSPECTIVE ON LLM DATA CONTAMINATION Manley Roberts¹, Himanshu Thakur¹,², Christine Herlihy³, Colin White¹, Samuel Dooley¹ ¹Abacus.AI ²Carnegie Mellon University ³University of Maryland {manley,colin,samuel}@abacus.ai; hthakur@andrew.cmu.edu; cherlihy@umd.edu

ABSTRACT

Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating on publicly available benchmarks. Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are intentionally or unintentionally included in the training data. Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation on training data, canary strings, or embedding similarities. In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time. Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and we find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination. By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on web-scale data.

1 INTRODUCTION

Progress in machine learning has historically been driven by the use of benchmark datasets (Raji et al., 2021) to demonstrate and ultimately improve model performance. In recent years, as large language models (LLMs) have risen to prominence, these benchmarks are used to claim impressive capabilities across a wide range of tasks (Brown et al., 2020a), such as open-ended text and code generation. However, it has become increasingly clear that evaluating on these benchmarks jeopardizes our ability to accurately compare and assess modern models, since static, open-source benchmarks are generally published on the internet, and most modern LLMs incorporate internet text in their training data.

There are two main phenomena to be concerned with. The first is contamination, which refers to an LLM's exposure, during training, to examples that are similar or identical to the examples that the model will later be evaluated on. The second is memorization, which can be understood as a property of a model that permits extraction of generated outputs that are exact or near-exact replicas of examples seen during training. Both phenomena can pose security and privacy risks (Carlini et al., 2021). Additionally, as we discuss below, they can upwardly bias model performance estimates, obfuscating our ability to compare models and attribute performance gains to true model improvements.

Despite these concerns, contamination and memorization remain deceptively challenging to definitively measure and detect. While some researchers have used string-matching algorithms to compare test to training datasets (Radford et al., 2019; Brown et al., 2020b), many popular LLMs' full training dataset details are not publicly available (OpenAI, 2023a; Rozière et al., 2023). Additionally, string-matching produces false negatives when slight variations exist in data between train and test (OpenAI, 2023a).
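To illustrate the string-matching approach (and why it yields false negatives), here is a minimal n-gram overlap check of the kind used in decontamination pipelines; the 13-gram window and whitespace tokenization are illustrative choices, not the procedure of any specific model report.

```python
def ngrams(text, n=13):
    """Lower-cased word n-grams of a document."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(test_example, train_corpus, n=13):
    """Flag a test example if any of its n-grams appears verbatim in training.

    Exact matching misses near-duplicates: even light paraphrasing or
    reformatting of a benchmark problem breaks every overlapping n-gram.
    """
    test_grams = ngrams(test_example, n)
    train_grams = set()
    for doc in train_corpus:
        train_grams |= ngrams(doc, n)
    return bool(test_grams & train_grams)
```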
Even concerted efforts to prevent any model from training on a benchmark can fail. For example, the canary strings present in all BIG-Bench files (BIG-bench authors, 2023), which are designed to be checked and excluded by model trainers, were not sufficient to keep BIG-bench out of GPT-4's training corpus (OpenAI, 2023a), partly because the success of this strategy relies on the awareness and compliance of model trainers in the absence of an enforcement mechanism.

Recent works that look for contamination or memorization focus on popular benchmarks. They use controlled experimentation on models trained with certain subsets of chosen datasets, recognizing the value of comparing performance on examples that are seen vs. not seen during training (Magar & Schwartz, 2022; Zhang et al., 2021). In contrast, we take an experimental economics view and use a naturally occurring experiment—i.e., the training cut-off date—to assess contamination and memorization. We exploit the known training cutoff dates of GPT-4 and GPT-3.5-Turbo (OpenAI, 2023a,b) and the assumed cutoff date of Code Bison (Google, 2023) to naturally partition benchmark examples into subsets that have either probably been seen (pre-cutoff) or probably not been seen (post-cutoff).¹ We focus our analysis on longitudinal benchmarks consisting of problems released over a period of time which bridges the cutoff. In particular, we analyze Codeforces and Project Euler, two longitudinal code generation/problem-solving websites. These websites have steadily released problems since 2010 and 2001, respectively. Informal analyses have shown that there are large drops in the success rates of GPT-4 when evaluated on older versus more recent problems from Codeforces (He, 2023; Cundy, 2023). We build upon these insights by conducting the first rigorous, large-scale, longitudinal analysis of contamination and memorization in code generation and problem-solving benchmarks. To the best of our knowledge, we are the first to exploit the longitudinal nature of the benchmarks we analyze, along with the known training cutoff dates of the open- and closed-source models, to naturally identify examples that the LLMs are likely/unlikely to have been exposed to during training, and to use this partition to compare LLM performance during the pre- and post-cutoff periods.

**Our contributions** In this work, we explore contamination and memorization through the lens of time. Our core contributions include: (i) the first large-scale, longitudinal analysis of contamination and memorization using a naturally occurring experiment—a novel methodology in LLM contamination which is important in light of closed-source models; (ii) empirical findings demonstrating that GPT-4 was likely exposed to Codeforces and Project Euler, due to a statistically significant positive association we observe between a problem's presence on GitHub and each LLM's test case pass rate only for problems released before the GPT training cutoff; (iii) the code required to construct our longitudinal datasets and perform our analyses, which we open-source.²

2 RELATED WORK

**Evaluation of Code Generation Models** Code generation models are generative models that try to produce valid code given an input of some representation of the programmatic behavior, mathematical function, and/or computational task that the user would like to obtain.
Modern code generation models include general models such as the GPT family (OpenAI, 2023a), Llama 2 (Rozière et al., 2023), or PaLM (Chowdhery et al., 2022), as well as a variety of task-specific code models: AlphaCode (Li et al., 2022), CodeGen (Nijkamp et al., 2022), Code-Llama (Rozière et al., 2023), and PaLM-Coder (Chowdhery et al., 2022). Relevant code generation benchmarks include small sets of entirely handwritten problems (Chen et al., 2021; Nijkamp et al., 2022) as well as larger collections curated from internet sources such as code interview sites, competitive programming forums, or general open-source code (Hendrycks et al., 2021; Austin et al., 2021; Zan et al., 2022; Huang et al., 2022), and some that include both original and online-sourced problems (Yin et al., 2022; Li et al., 2022). Code interview, practice, or competition sites, offering problem descriptions and programmatic evaluation, are common choices to assess modern LLM capabilities (Nguyen & Nadi, 2022; Zhang et al., 2023; He, 2023; Cundy, 2023)—and indeed some public benchmarks feature these problems (Hendrycks et al., 2021; Li et al., 2022).

---
¹ GPT-4 acknowledges training with some small amount of data beyond its cutoff (OpenAI, 2023a), so post-cutoff examples may still appear. GPT-3.5-Turbo, subject to similar reinforcement learning with human feedback (RLHF) as GPT-4 (OpenAI, 2023a), may have seen data beyond its cutoff as well.
² Our treatment of datasets and our evaluation framework are available at https://github.com/abacusa1/to-the-cutoff. We release code and dataset contents to the extent possible while respecting the licensing requirements of the dataset owners.
---

To assess the validity of solutions, many of these benchmarks include test cases. They use a 'functional correctness' metric based on passing these cases as the primary way to measure code generation performance; evaluating with complexity/understandability metrics (Nguyen & Nadi, 2022) is less common. Kulal et al. (2019) and Chen et al. (2021) employ the pass@k metric, describing the likelihood that at least one among k sampled generations will pass all test cases. The benefit of these metrics is their complete independence from either expensive human feedback or inherently constraining similarity-to-ground-truth NLP metrics (Papineni et al., 2002; Lin, 2004), which are often ineffective for code (Tran et al., 2019). These metrics are in contrast to other popular LLM performance metrics like perplexity (Kirchenbauer et al., 2023; Jain et al., 2023) or information-retrieval-based LLM metrics of accuracy (Kwiatkowski et al., 2019; Pal et al., 2023).

**Adversarial Filtering and Adaptive Benchmarks in NLP** Test-time exploitation of knowledge gained via contamination or memorization can be seen as special cases of a more general phenomenon in which language models appear to exhibit sophisticated reasoning capabilities but are in fact exploiting shallower heuristics, with potentially negative consequences for generalizability (Bender et al., 2021). Prior work has demonstrated that domain-agnostic and domain-specific crowd-worker-constructed natural language inference (NLI) datasets—i.e., SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), MedNLI (Romanov & Shivade, 2018)—contain spurious correlations between lexical and syntactic features of the inputs and the corresponding class labels, such that hypothesis-only baselines (i.e., without premise) are able to outperform majority-class baselines (Poliak et al.,
2018; Gururangan et al., 2018; McCoy et al., 2019; Herlihy & Rudinger, 2021). Researchers have proposed a variety of detection and mitigation strategies, including (1) adversarial filtering, in which an ensemble of classifiers is used to iteratively partition a dataset into easy and hard subsets (Zellers et al., 2018); (2) the introduction of stochasticity into the annotator prompting process via randomly selected anchor words (Sakaguchi et al., 2020); and (3) calls for the development of adversarially adaptive rather than static benchmarks (Zellers et al., 2019).

**Memorization and Contamination in LLMs** Many recent works have highlighted the security, privacy, and generalizability risks of memorization and contamination during LLM training and fine-tuning, while simultaneously proposing methods for detection and risk mitigation. Mireshghallah et al. (2022), Biderman et al. (2023), Carlini et al. (2023), and Magar & Schwartz (2022) investigate the training dynamics of memorization/contamination, seeking scaling laws, early indications, and understanding of when and how memorization occurs in training. Carlini et al. (2021) famously extract hundreds of verbatim training examples from GPT-2. Ippolito et al. (2023) propose inference-time tricks to prevent regurgitation of examples, and Jacovi et al. (2023) and Karmakar et al. (2022) give best practices to avoid benchmark contamination. Carlini et al. (2021; 2023), Lee et al. (2022), Kandpal et al. (2022), Magar & Schwartz (2022), and Carlini et al. (2019) investigate the relationship between duplicated training data and memorization/contamination (in particular, Carlini et al. (2019) use artificially introduced "canary" artifacts to track memorization). Nori et al. (2023) propose distance-based metrics to assess memorization. Several works (Magar & Schwartz, 2022; Zhang et al., 2021) evaluate the impact of memorization/contamination by estimating the difference in test-time performance on examples seen vs. not seen during training; we will use a variation of this strategy. Dodge et al. (2021) conduct a case study of the webcrawl corpus C4, including a contamination investigation, while others (Aiyappa et al., 2023; Chang et al., 2023; Golchin & Surdeanu, 2023) conduct studies on the contamination of GPT models directly. Karmakar et al. (2022) dive deep into Hackerrank (an interview-prep coding platform) contamination in the Codex model by not only assessing pass rates on full problems but also on partial problem snippets. Golchin & Surdeanu (2023), a recent work, focuses in particular on comparing the results of prompting for memorized completion with or without benchmark clues and concludes that GPT-4 has been contaminated with several standard datasets; our analysis finds more contamination of GPT-4, but differs by examining longitudinal datasets in order to view the effects of dataset portions before and after training cutoffs.

3 Dataset Construction

Many open-source benchmarks (Chen et al., 2021) designed to evaluate code generation are released at a certain point in time, evaluated on a number of models along with the release, and then deployed repeatedly as time goes on in order to evaluate new models' performance on the benchmark. For a model with a strict temporal training dataset cutoff, these benchmarks exist either firmly within or outside of the training data, meaning that to evaluate the effect of the cutoff, we must compare between multiple datasets (which, clearly, might have many differences beyond their release dates).
For this analysis, we concern ourselves with datasets with hand-written original problems that are released at intervals over a long stretch of time. In particular, we require that a substantial number of problems are produced before and after the GPT-4/GPT-3.5-Turbo cutoffs in September 2021, that the bulk of problems are of a format and size sufficient for prompting to modern LLMs, and that there exists an automated objective measure of correctness for evaluation. We focus on problems from the competitive programming website Codeforces (problems from 2010 - 2023) (Mirzayanov, 2023) and from the mathematical programming puzzle website Project Euler (problems from 2001-2023) (Hughes, 2023), building off analyses from Cundy (2023) and He (2023). **Codeforces** Codeforces is a website that hosts competitive programming competitions. Problems are released in small batches corresponding to a particular round, and competitors submit solutions against test cases, competing to produce the highest overall score by giving fast solutions. After a competition ends, each competitor’s solutions are available online, as well as the test cases that were evaluated on each problem (which take the form of an input file and expected output). For each problem, we collect metadata, problem text (processed to clear some HTML artifacts), and input/expected output text for public and private test cases. We forgo the compute-intensive procedure of generating additional test cases for problems which was used by Li et al. (2022) and omit test cases in which either the given input or output on the Codeforces platform end with “…”, as this often indicates that the text is too long and has been abridged. We provide additional details of the Codeforces problem set and our collection process in Appendix A.2. **Project Euler** Project Euler is a website that hosts difficult math problems with a string answer that is usually a single number (integral or real). The recommended way to solve these problems is to write code that will generate the answer. The answer itself can be submitted on the site and compared directly to the private solution (there are no official public solutions). There are no test cases except a comparison with the true answer. We collect Project Euler problems through a combination of their metadata API and direct scraping of problem pages. We collect problems through 845 (released May 2023) and use open-source solutions from Luckytoilet (2023). These solutions were collected in September 2023, but there are a few recent problems through 845 without a solution from this source; these we omit. ### 4 METHODOLOGICAL APPROACH The primary research questions we endeavor to explore through longitudinal analysis of pre- versus post-cutoff LLM performance include: 1. Does there exist a statistically significant relationship between a programming problem’s frequency of presence in open-source GitHub repositories and an LLM’s ability to generate a functionally correct solution to that problem, and/or reproduce portions of its metadata, such as the problem title or tags? 2. How or to what extent is this relationship mediated by a problem’s reported difficulty? 3. Most critically—how or to what extent do (1) and (2) change depending on whether a problem was released before versus after the LLM’s training date cutoff? **Models** To answer these questions, we conduct analysis on output produced by GPT-4, GPT-3.5-Turbo, Davinci-002, Google’s code-bison, and Meta’s Code-Llama. 
The specific models used are gpt-4-0314, gpt-3.5-turbo-0301, text-davinci-002, code-bison@001, and codellama/CodeLlama-34b-Instruct-hf.

**Independent Variables** To begin, we define the following set of independent variables (IVs):

*GitHub Presence* is a proxy metric intended to capture the frequency with which a problem is publicly available on GitHub (similar to the public Google and Bing API search used by Chang et al. (2023) as a proxy for the online presence of books). For simplicity, it searches only for mentions of the problem's name and ID. To compute GitHub Presence, we begin by collecting all public repositories that contain mentions of the benchmark dataset of interest (i.e., Codeforces or Project Euler) as of our collection date. Then, for each problem of interest in a given dataset, we filter the dataset repositories and retain the subset containing substring(s) that correspond to the problem's title. We are then able to approximately compute the number of times a problem $p$ occurs as:

$$\sum_{i=1}^{|\text{dataset repos}|} c(p, i), \qquad \forall p \in \{\text{dataset problems}\},$$

where $c(p, i)$ is the number of matches within repo $i$'s concatenated text to any one of a number of format variations of $p$'s ID or title. Counting multiple occurrences within the same repo offers benefits such as a more granular analysis in the event of mega-repos that might store multiple solutions to the same problem, and it is therefore, in our eyes, a closer proxy to the true frequency of the problem in the training data.

**Difficulty** intuitively captures how challenging a problem is for humans to solve. Both Codeforces and Project Euler report difficulty scores as part of problem metadata.

**Problem released post-cutoff** is a Boolean variable indicating whether a given problem was released (i.e., published by the dataset owners) before (0) or after (1) the training date cutoff for a given LLM.

### Dependent Variables

We consider the following set of dependent variables (DVs):

**Problem-level pass rate (pass rate)** We assume that, in the general case, a given problem $p$ can be mapped to some number, $n_p \geq 1$, of test cases (either public or private). For code generation tasks, the problem-level pass rate can then be computed as the fraction of test cases for which the code generated by the LLM produces a functionally correct solution—i.e.,

$$\frac{1}{n_p} \sum_{i=1}^{n_p} \mathbb{1}\big(\lambda(\mathrm{LLM}(p), i) = y_i\big),$$

where $\lambda(\cdot, i)$ represents calling the LLM's generated code on the input of the $i$th test case and $y_i$ represents the ground-truth output for problem $p$'s $i$th test case. The special case where we ask the LLM to generate (only) the solution rather than code can be represented by omitting the $\lambda$ call in the above expression. We use code generation on Codeforces and solution-only generation on Project Euler. See Appendix A.1 for a discussion of alternative metrics.

**Title reproduction** In each of the datasets we consider, each problem has a title. To compute title reproduction for a given problem $p$, we provide as input the dataset name and problem ID, ask the LLM to generate the problem's title given this input, and evaluate the similarity between the generated string, $t_p$, and $p$'s ground-truth title by mapping the title into a bag of tokens and modeling the retrieval of each token as a separate observation in a logistic regression. We include this DV as a probe for possible memorization.

**Tag reproduction** Among the datasets we consider, only Codeforces problems contain descriptive tags.
Each tag is an n-gram that describes the intended approach or content of a problem, as well as metadata like the difficulty. For example, Problem 500A has the tags "dfs and similar", "graphs", "implementation", and "*1000". For a given problem, $p$, we provide the problem's title and ID as input to the LLM and ask it to produce a set of candidate tags. We evaluate token-level recall with respect to the tokenized version of the problem's ground-truth tag(s). Much like title reproduction, this DV is included as a memorization probe.

To answer the aforementioned research questions for each dataset and dependent variable, we conduct regression analyses with problem-level performance as our unit of analysis, of the form:

$$\text{DV} \sim (\text{Difficulty} + \text{GitHub Presence}) \cdot \text{postCutoff}$$

Because the problem-level pass rate prediction task involves count data, we specifically formalize it as a binomial regression, such that for a given problem $p$ with a corresponding number of {public + private} test cases, $n_p$, we seek to predict the number of successes—i.e., the number of test cases, out of $n_p$ trials, that the LLM's generated code and/or numeric solution will pass. In title reproduction, the outcome of interest is binary—i.e., the LLM either does or does not successfully reproduce the problem's title; as such, we model this task using logistic regression. For tag reproduction, while a problem's tags can be set-valued, we tokenize the string of tags and evaluate the recall of each token independently; as such, this task is also modeled using logistic regression. A more detailed description of our modeling choices, along with interpretation guidance for the regression tables and marginal effects plots, can be found in Appendix B.1.

In the regression tables, we report coefficients as odds ratios, where values equal to 1 indicate no impact of the variable on the pass rate, values greater than 1 indicate a positive impact, and values less than 1 indicate a negative impact. For example, an odds ratio of 1.352 corresponds to a 35.2% increase in the odds of the dependent variable associated with a unit increase in the independent variable.
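As a sketch of the binomial regression just described, assuming a per-problem table with hypothetical column names (`n_passed`, `n_cases`, `difficulty`, `github_count`, `post_cutoff`); this illustrates the model family and the odds-ratio readout, not the authors' exact pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for the per-problem table; all columns are illustrative.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "n_passed": rng.integers(0, 6, 200),
    "n_cases": 10,
    "difficulty": rng.integers(800, 3500, 200),
    "github_count": rng.integers(0, 1000, 200),
    "post_cutoff": rng.integers(0, 2, 200),
})

df["log_github"] = np.log1p(df["github_count"])
X = pd.DataFrame({
    "difficulty": df["difficulty"].astype(float),
    "log_github": df["log_github"],
    "post": df["post_cutoff"].astype(float),
})
X["difficulty:post"] = X["difficulty"] * X["post"]  # interaction terms from
X["log_github:post"] = X["log_github"] * X["post"]  # (Difficulty + GitHub) * postCutoff
X = sm.add_constant(X)

# Binomial outcome: (passed, failed) test-case counts for each problem.
y = np.column_stack([df["n_passed"], df["n_cases"] - df["n_passed"]])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(np.exp(fit.params))  # coefficients exponentiated into odds ratios
```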
Figure 1: Marginal effects of the pass rate metric for GPT-4 on the Codeforces dataset. Observe a positive association with GitHub Presence before the cutoff but not after. Also, there is a negative association between Difficulty and pass rate both before and after the cutoff.

5 RESULTS

Overall, we see strong trends that the performance of each model changes after the training cutoff. These changes often highlight that there is a positive association between the presence of questions on GitHub and the performance of the model; however, after the training cutoff, this association disappears. We provide examples of the LLMs' generations in Appendix B.8 for a qualitative inspection of the results. We note that, while we did test the code generation performance of the open-source models `text-davinci-002` and `codellama/CodeLlama-34b-Instruct-hf`, these models' functional correctness performance was too low to yield meaningful analysis. Thus, we omit these models from all analyses in the main paper, but refer the reader to Appendix B.6.

5.1 PASS RATE

**GitHub Presence** First, we look at the pass rate performance on the Codeforces benchmark, for which we report marginal effect plots for GPT-4 in Figure 1. GPT-3.5-Turbo and Code Bison are qualitatively similar and can be found in Appendix Figures I.2 and I.4. We report regression coefficients for all models on Codeforces in Figure 2. On the Project Euler benchmark, we report marginal effect plots in Appendix Figures II.8 and II.9 and regression coefficients in Figure II.5. Note that Project Euler is a much smaller benchmark, with just 73 problems included after the GPT training cutoff date in September 2021. Additionally, none of the LLMs we tested got any of the questions correct for this set of 73 problems.

We make several observations. Most strikingly, we see that the effect of the GitHub Presence variable is significant before the training cutoff and is not significant after the training cutoff. For GPT-4, we observe that for each increase of one unit in the log of GitHub Presence, the odds ratio increases by 4.5% on Codeforces and 47.8% on Project Euler; for GPT-3.5-Turbo, that value is moderated slightly, to 2.5% on Codeforces and 27.7% on Project Euler; for Code Bison, the odds ratio increases by 3.1%. However, we see no statistically significant association between GitHub Presence and GPT model performance for those problems which appeared online after the training cutoff in September 2021. This post-cutoff performance degradation provides evidence of contamination and/or memorization of pre-cutoff problems from Codeforces and Project Euler by GPT-3.5-Turbo and GPT-4.

For the most part, the odds ratios are similar in terms of the direction and magnitude of their effects on the pass rate odds for each LLM. Two points of distinction include: (1) GPT-4 performs better across the board, as evidenced by higher odds of functional correctness for all difficulty levels in both the pre- and post-cutoff periods as compared to GPT-3.5-Turbo. (2) For Codeforces, the odds ratio for GitHub Presence is equal to 1 and is not statistically significant during the post-cutoff period for GPT-4, but is > 1 (i.e., associated with increased odds of passing) and statistically significant at $\alpha = 0.1$ during the same period for GPT-3.5-Turbo (see Tables 1 and 2). While training details for GPT-family models are generally secret, we propose as a possible explanation that GPT-3.5-Turbo may have had higher train/finetune exposure to problems released after the cutoff date than GPT-4.

Figure 2: Regression coefficients for pass rate of GPT-4, GPT-3.5-Turbo, and Code Bison on the Codeforces dataset. Observe that the odds ratios for both Difficulty and GitHub Presence are statistically significantly moderated between the before- and after-cutoff periods for both models. See Tables 1 and 2 for regression coefficients.

It is worth pointing out that the relationships between GitHub Presence and pass rate have non-overlapping confidence intervals (before and after the cutoff) only for GPT-4. This lends support to our conclusion that memorization occurred for this model. Code Bison's analysis uses a different cutoff (February 2023). On all of the data before this cutoff, there is a positive association between GitHub Presence and pass rate, while after it there is no association. This analysis does produce a very large confidence band on the post-cutoff problems (due to the small sample of Codeforces problems collected between February 2023 and June 2023), making definitive conclusions difficult to resolve.

Figure 3: Regression coefficient plots of pass rate for GPT-4 and GPT-3.5-Turbo on the Project Euler dataset. See Tables 10 and 11 for regression coefficients. No problems pass after the cutoff.
---
³ As mentioned in Section 1, GPT-4 is known to have some post-cutoff events included in its training; since GPT-3.5-Turbo uses a similar RLHF procedure (OpenAI, 2023a), it's possible it has been exposed as well—to a publicly unknown extent.
---

**Difficulty** When we examine results for GPT-4, GPT-3.5-Turbo, and Code Bison on Codeforces (see Tables 1, 2, 3), we see that there always exists a statistically significant, negative association between Difficulty and pass rate for each LLM—i.e., this relationship is observed in both the pre- and post-cutoff periods. However, while each model's post-cutoff Difficulty coefficient is < 1, indicating a decrease in the odds of passing, these coefficients are statistically significantly larger than their corresponding pre-cutoff values, suggesting a moderation of the still-negative relationship between Difficulty and pass rate.

On the one hand, we can interpret the persistence of this relationship as evidence that the LLMs' inductive biases, while perhaps influenced by the effects of contamination and memorization, are by no means solely determined by such artifacts. For this reason, we do not see equal (or perhaps, equally poor) performance across problem difficulty levels in the post-period, but instead see that LLM pass rates vary in accordance with difficulty even in the (hypothetical) absence of contamination, much as they do for human programmers. Other possible contributing factors include: (1) variation in the number of test cases by difficulty level, and/or over time; (2) more limited, but non-zero, amounts of contamination or memorization of the datasets we analyze; and (3) the presence of unobserved confounder(s) influencing change in both problem difficulty and LLM pass rate over time. We test (1) by fitting a regression model to examine whether Difficulty is able to predict the number of observed test cases after the cutoff, but do not find Difficulty to have predictive power. Hypothesis (2) could be occurring, particularly given the acknowledged GPT fine-tuning (OpenAI, 2023a); however, it is unlikely this is happening at high enough levels to be a sufficient cause of the observed behavior. We view the identification of possible confounders as a promising direction for future work.

### 5.2 Title and Tag Reproduction

For title reproduction, we show regression tables in Appendix Tables 12 and 15 and Appendix Figures 32–38. We conclude that, across all models, there is no impact of GitHub Presence on the ability of the LLMs to reproduce the title, both before and after the training cutoffs. For tag reproduction, we find that there is a negative association between GitHub Presence and the ability of the LLMs to reproduce the tag labels on Codeforces (there are no tags associated with Project Euler). In Figure 42, Appendix Figure 40, and Appendix Tables 16 and 17, we can see that, across the board, there is a negative association between Difficulty and tag reproduction performance before the cutoff but no association after the cutoff. As the regression results demonstrate, the negative association moderates after the cutoff, dropping from a decrease of 56.9% to 17.4% in odds ratios from before to after the cutoff for GPT-4, and from 50.3% to 26.1% for GPT-3.5-Turbo. The way in which Codeforces problems are available online is one hypothesis as to why tag reproduction is inversely related to GitHub Presence, whereas title reproduction is not. Tags are metadata for Codeforces problems which are not present in the main problem description.
As such, the tags may be less likely to be copied and pasted throughout the internet. Thus, it is possible that the tags themselves undergo some interesting distribution shift which could explain their inverse relationship with presence on GitHub.

### 5.3 Analysis Ablations

**Public vs. Private Test Cases** As discussed in Section 3, the Codeforces problems contain both public and private test cases. Public cases are readily available on the problem's page, whereas the private cases can only be found by opening an accepted submission on the Codeforces platform. Above, we analyzed the pass rate of each problem on all collected test cases. Now, we break these out by public and private test cases to investigate any differing trends between the two sets. We consider only the private test cases in Figures 22, 24 and Tables 7, 8, and only the public test cases in Figures 16, 18 and Tables 4, 5. We see first that the two main trends we observed above hold, indicating the robustness of the conclusions: contamination is likely, since GitHub Presence is positively correlated with pass rate only before the cutoff, and Difficulty has a negative association with pass rate. We observe non-overlapping confidence intervals only for GPT-4 on private test cases.

However, we also observe, unexpectedly, that the pass rate after the cutoff is higher for the private test cases than for the public test cases. This observation contrasts with the typical perspective on Codeforces, which considers the public test cases to be simpler toy cases used as examples while coding, whereas the private cases are more thorough checks for correct behavior. To explain the discrepancy, we hypothesize that this behavior may be related to the private test cases after the cutoff being, on average, easier to answer than the public test cases after the cutoff. There is no per-case difficulty score available on Codeforces, but we can consider a simple heuristic: shorter inputs are simpler to answer, and longer inputs are harder. Why might this effect be most noticeable after the cutoff? To answer, we observe that while the median test case input string lengths for our public and private pre-cutoff test cases are similar, at 18 and 21 characters, respectively, the median input lengths after the cutoff diverge for public and private test cases: 38 for public and 27 for private. Further investigation into the causes and consequences of this shift is a promising direction for future work.

**Covariate Shift** We detail how we assess whether the performance degradation that we observe for problems released after the training cutoff might be caused by covariate shift in the questions present in Codeforces and Project Euler. More precisely, we examine the distribution over tags and/or difficulty level, and we look for statistically significant changes in their prevalence during the post-cutoff period relative to the pre-cutoff period. We visually inspect the distribution over tags (for Codeforces) and over discretized difficulty scores (for both datasets) for problems released during the pre- vs. post-periods, and do not find evidence of qualitative differences. We then conduct $\chi^2$ tests using the pre-cutoff normalized counts as the reference distribution. We do not find any statistically significant difference in any of the pre- versus post-distributions that we analyze. Plots and detailed statistical results are available in Appendix B.7.
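The covariate-shift check can be sketched as follows, with the pre-cutoff proportions serving as the reference distribution for a χ² goodness-of-fit test; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import chisquare

def covariate_shift_test(pre_labels, post_labels, categories):
    """Chi-squared test of post-cutoff category counts (e.g., tags or
    discretized difficulty bins) against the pre-cutoff distribution."""
    pre = np.asarray(pre_labels)
    post = np.asarray(post_labels)
    pre_counts = np.array([(pre == c).sum() for c in categories], dtype=float)
    post_counts = np.array([(post == c).sum() for c in categories], dtype=float)
    # Expected counts: pre-cutoff proportions scaled to the post-cutoff total.
    expected = pre_counts / pre_counts.sum() * post_counts.sum()
    return chisquare(f_obs=post_counts, f_exp=expected)
```

A large p-value here is consistent with the finding above that the pre- and post-cutoff problem distributions do not differ significantly.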
## 6 DISCUSSION **Utility of longitudinal analysis:** We provide a novel methodology for examining data contamination in LLMs, borrowed from experimental economics where we observe phenomena by exploiting naturally occurring changes. Thus, we present a novel way to approximately validate claims made about training date cutoffs for black box LLMs, and/or exposure (or lack thereof) to a given dataset during training or fine-tuning, provided that the dataset in question contains instances on each side of the model’s reported cutoff. This can be valuable in cases where specific training details are not public, and/or when contamination or memorization is suspected as a root cause of performance degradation when the model in question is evaluated on newer problems. It is important to acknowledge that limitations also exist—for example, we cannot rule out the presence of latent confounder(s) influencing both exposure (i.e., to a given subset of problems) and LLM performance on those problems. **Implications for LLM evaluation:** Our findings in Section 5 illustrate the extent to which even high-quality, manually constructed benchmarks can be expected to enjoy ever-shorter shelf lives in the era of LLMs, as newer models with updated training cutoff dates will iteratively render existing benchmarks stale. Detection of memorization and contamination will likely remain challenging in the general case, as many popular benchmarks have been released all at once rather than over time, and as such, cannot be subjected to longitudinal analyses like the ones we perform. Additionally, in open-ended domains such as code generation, we may fail to detect instances where the model has been exposed to solution(s) for a given problem when the problem context itself is missing or latent (i.e., by training on public repositories where people may not reproduce the questions their code is intended to answer/solve). Current mitigation options, including the use of private (i.e., offline, closed-source) benchmarks, and/or benchmarks known to be constructed after the training cutoff for evaluation target(s) of interest, are likely to be time-bound in their utility and may be cost-prohibitive to sustain over longer time horizons, in the absence of exogenous shocks to the pace of LLM release cycles. Additionally, reliance on private benchmarks may further erode transparency and lead to duplication of efforts as challenging examples are detected and addressed in a more siloed fashion. Thus, while the need for some set of open-source “goalposts” against which to measure progress, evaluate, and compare LLMs is likely to persist, the way in which we construct, release, and evaluate against benchmark datasets will need to become more dynamic. We urge the community to move away from static benchmarks released in a single time step and toward continuous integration-style staggered release and evaluation cycles. REFERENCES Rachith Aiyappa, Jisun An, Haewoon Kwak, and Yong-Yeol Ahn. Can we trust the evaluation on chatgpt?, 2023. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program Synthesis with Large Language Models, 2021. BIG bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models, 2023. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? 
In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pp. 610–623, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383097. doi: 10.1145/3442188.3445922. URL https://doi.org/10.1145/3442188.3445922 Stella Biderman, USVSN Sai Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony, Shivanshu Purohit, and Edward Raff. Emergent and predictable memorization in large language models, 2023. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference, 2015. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020a. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfc49674185fb8ac142f64a-Paper.pdf Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020b. Ethan Caballero, OpenAI, and Ilya Sutskever. Description2Code Dataset, 8 2016. URL https://github.com/ethancaballero/description2code Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pp. 267–284, 2019. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models, 2021. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models, 2023. Kent K. Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, memory: An archaeology of books known to chatgpt/gpt-4, 2023. Matt Chaput. Whoosh. https://whoosh.readthedocs.io/en/latest/, 2012. [Online; accessed 4-October-2023]. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,
sAOtKKHh1i
In Table 1, are the antmaze results obtained by conditioning on the state or on image observations? I suspect this is the state. If I'm right, how did you adapt SSP and SFP, which are designed to work with images? Isn't this comparison unfair, since SSP and SFP are designed to work with images?
Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning

Anonymous authors
Paper under double-blind review

Figure 1: A sample of some "skills" that our method identifies for the (a) AntMaze and (b) Kitchen environments, where the transparency is higher (color is paler) for poses earlier in the trajectory. For more discussion see Appendix B.

Abstract

Exploration in sparse-reward reinforcement learning (RL) is difficult due to the need for long, coordinated sequences of actions in order to achieve any reward. Moreover, in continuous action spaces there are an infinite number of possible actions, which only increases the difficulty of exploration. One class of methods designed to address these issues forms temporally extended actions, often called skills, from interaction data collected in the same domain, and optimizes a policy on top of this new action space. Such methods require a lengthy pretraining phase in order to form the skills before reinforcement learning can begin. Given prior evidence that the full range of the continuous action space is not required in such tasks, we propose a novel approach to skill generation with two components. First, we discretize the action space through clustering, and second, we leverage a tokenization technique borrowed from natural language processing to generate temporally extended actions. Using this as an action space for RL outperforms comparable skill-based approaches in several challenging sparse-reward domains, and requires orders of magnitude less computation.

1 Introduction

Reinforcement learning (RL), the learning paradigm that allows an agent to interact with an environment and collect its own data, is a promising approach to learning in many domains where high-quality data collection is too financially expensive or otherwise intractable. Though it began with dynamic programming in tabular settings, the recent use of neural networks as function approximators has led to great success on many challenging learning tasks (Mnih et al., 2013; Silver et al., 2017; Gu et al., 2017). These successful tasks tend to have some particular properties. In some cases, it is simple to define a reward function that yields reward at every step of interaction (the "dense" reward setting), like directional velocity of a robot learning to walk (Haarnoja et al., 2018a). In other cases, the environment dynamics are known, as in the case of Chess or Go (Silver et al., 2017). However, for many natural tasks, like teaching a robot to make an omelet, it is much more straightforward to tell when the task is completed than to know how to automatically supervise each individual step or how to model the environment dynamics. Learning in these "sparse" reward settings, where reward is obtained only extremely infrequently (e.g., at the end of successful episodes), is notoriously difficult. In order for a learning agent to improve its policy, the agent needs to explore its environment for long periods of time, often in a coordinated fashion, until it finds any reward. One class of solutions to this problem involves including additional task-agnostic dense rewards as bonuses that encourage agents to explore the state space (Pathak et al., 2017; Burda et al., 2018b). Another class of solutions to the exploration issue is to jumpstart the function approximator to be used in reinforcement learning by training it on some pretext task (Yarats et al., 2021; Liu and Abbeel, 2021), which works well when the training and downstream domains are well aligned.
A third class of methods aims to create temporally extended actions, or "skills", from interactions or data. A particular subclass of methods learns skills that are conditioned on the observations (Singh et al., 2020; Pertsch et al., 2021; Ajay et al., 2020; Sharma et al., 2019; Eysenbach et al., 2018; Park et al., 2022; 2023), which means that the deployment scenario needs to match the data. Others relax this assumption (Lynch et al., 2020; Pertsch et al., 2021; Bagatella et al., 2022) so that such skills can easily be transferred to some new domain as long as the action space remains the same. This has the potential to speed up exploration in new tasks for which it is not easy to collect data a priori (i.e., few-shot), which can lead to faster task adaptation. However, these recent efforts in skill learning all require lengthy pretraining phases due to their reliance on neural networks in order to learn the skills.

Inspired by the recent cross-pollination of natural language processing (NLP) techniques in offline RL (Chen et al., 2021; Janner et al., 2021; Shafiullah et al., 2022), we take a different approach. Like the long-range coordination required for exploration in sparse-reward RL, language models must capture long-range dependencies between discrete tokens. Character-level inputs lead to extremely long sequences and require language models to both spell correctly and model inter-word relations. On the other hand, word-level input results in the model poorly capturing rare and unseen words. The solution is to create "subword" tokens somewhere in between individual characters and words that can express any text (Gage, 1994; Sennrich et al., 2015; Provilkov et al., 2020; Kudo, 2018; Schuster and Nakajima, 2012; He et al., 2020). In the spirit of this development in language modeling, we propose a tokenization method for learning skills. Following prior work (Dadashi et al., 2022; Shafiullah et al., 2022), we discretize the action space and use a modified byte-pair encoding (BPE) scheme (Gage, 1994; Sennrich et al., 2015) to obtain temporally extended actions. Then, we use this as the action space for RL. As we demonstrate, such a method benefits from extremely fast skill generation (minutes vs. hours for neural-network-based methods), significantly faster rollouts and training due to open-loop subword execution that does not require an additional neural network, interpretability of a finite set of skills, and strong results in several sparse-reward domains.

2 RELATED WORK

Exploration in RL: Exploration is a fundamental problem in RL, particularly when reward is sparse. A common approach to encouraging exploratory behavior is to augment the (sparse) environment reward with a dense bonus term that biases toward exploration. This includes the use of state visitation counts (Poupart et al., 2006; Lopes et al., 2012; Bellemare et al., 2016) and state entropy objectives (Mohamed and Jimenez Rezende, 2015; Hazan et al., 2019; Lee et al., 2019; Pitis et al., 2020; Liu and Abbeel, 2021; Yarats et al., 2021) that incentivize the agent to reach "novel" states. Relatedly, "curiosity"-based exploration bonuses encourage the agent to take actions in states where the effect is difficult to predict using a learned forward (Schmidhuber, 1991; Chentanez et al., 2004; Stadie et al., 2015; Pathak et al., 2017; Achiam and Sastry, 2017; Burda et al., 2018a) or inverse (Haber et al., 2018) dynamics model. Burda et al.
(2018b) propose a random network distillation exploration bonus based upon the error in observation features predicted by a randomly initialized neural network.

Temporally Extended Actions and Hierarchical RL: Another long line of work explores temporally extended actions due to the potential for such abstractions to improve learning efficiency. These advantages are particularly pronounced for difficult learning problems, including sparse-reward tasks, which are the focus of our work. In particular, action abstractions enable more effective exploration (Nachum et al., 2018) and simplify the credit assignment problem. Hierarchical reinforcement learning (HRL) (Dayan and Hinton, 1992; Kaelbling, 1993; Sutton, 1995; Boutilier et al., 1997; Parr and Russell, 1997; Parr, 1998; Sutton et al., 1999; Dietterich, 2000; Barto and Mahadevan, 2003; Kulkarni et al., 2016; Bacon et al., 2017; Vezhnevets et al., 2017) considers the problem of learning policies with successively higher levels of abstraction (typically two), whereby the lowest level considers actions directly applied in the environment while the higher levels reason over temporally extended transitions. A classic example of action abstractions is the options framework (Sutton et al., 1999), which provides a standardization of HRL in which an option is a terminating sub-policy that maps states (or observations) to low-level actions. Options are often either prescribed as predefined low-level controllers or learned via subgoals or explicit intermediate rewards (Dayan and Hinton, 1992; Dietterich, 2000; Sutton et al., 1999). Some simple instantiations of options include repeated actions (Sharma et al., 2017) and self-avoiding random walks (Amin et al., 2020). Konidaris and Barto (2009) learn a two-level hierarchy by incrementally chaining options ("skills") backwards from the goal state to the start state. Nachum et al. (2018) propose a hierarchical learning algorithm (HIRO) that learns in an off-policy fashion and, in turn, is more sample-efficient than typical HRL algorithms, which learn on-policy. Achieving these sample-efficiency gains requires addressing the instability typical of off-policy learning, which is complicated by the non-stationarity that comes with jointly learning low- and high-level policies. Levy et al. (2017) use different forms of hindsight (Andrychowicz et al., 2017) to address similar instability issues that arise when learning policies at multiple levels in parallel.

Skill Learning from Demonstrations: In addition to the methods mentioned above in the context of HRL, there is an existing body of work that seeks to discover extended actions prior to their use in online RL, often called "skills". Many methods have been developed for skill discovery from interaction (Daniel et al., 2012; Gregor et al., 2016; Eysenbach et al., 2018; Warde-Farley et al., 2018; Park et al., 2022; 2023). Most related to our setting is a line of work that explores extended action discovery from demonstration data (Lynch et al., 2020; Ajay et al., 2020; Singh et al., 2020; Pertsch et al., 2021; Bagatella et al., 2022). As an example, Lynch et al. (2020) learn a VAE on chunks of action sequences in order to generate a temporally extended action by sampling a single vector. Ajay et al. (2020) follow a similar approach, but use flow models on top of entire trajectories, and only roll out a partial trajectory at inference time.
Some of these methods (Ajay et al., 2020; Singh et al., 2020; Pertsch et al., 2021) condition on the observations when learning skills, which leads to more efficient exploration, but such conditioning means that any skill that is learned will need to be deployed in the same environment as the one in which the data was collected, resulting in poor domain transfer performance (Bagatella et al., 2022). Others (Lynch et al., 2020; Bagatella et al., 2022) simply condition on actions, which means that the skills can be reused in any domain that shares the same action space. In an effort to learn more generalizable skills, we follow this latter example. There is also a related prior work that applies grammar learning to online RL (Lange and Faisal, 2019), but such a method learns an ever-growing number of longer actions, which poses significant issues in the sparse-reward setting, as we discuss later.

3 METHOD

Similar to prior work (Lynch et al., 2020; Ajay et al., 2020; Singh et al., 2020; Pertsch et al., 2021; Bagatella et al., 2022), we extract skills from demonstration data; more formally, from a dataset of $N$ trajectories with lengths $\{n_i\}_{i \in N}$ that involve the same action space as our downstream task:

$$D = \{(a_{ij}, o_{ij}) \mid i \in \mathbb{N} \cap [0, N), j \in \mathbb{N} \cap [0, n_i), a_{ij} \in \mathbb{R}^{d_{act}}, o_{ij} \in \mathbb{R}^{d_{obs}}\},$$

where $a_{ij}$ and $o_{ij}$ denote actions and observations, respectively. After extracting skills from this dataset, we use these skills as a new action space for reinforcement learning on some downstream task. Crucially, our skills are unconditional, so they carry no information as to when they should be used in the downstream task. In the following sections, we detail our exact method.

3.1 Byte-Pair Encoding

Byte-pair encoding (BPE) was first proposed as a simple method to compress files (Gage, 1994), but it has recently been used to construct vocabularies for NLP tasks at a resolution between individual characters and whole words (Sennrich et al., 2015). With character vocabularies, the vocabulary is small, but the sequence lengths are large. Such long sequences are extremely burdensome to process, especially for the current generation of Transformers. In addition, making predictions at the character level imposes a more difficult task on the language model: it needs to spell everything correctly, or make a long, coordinated set of predictions, not unlike the requirement on action sequences for sparse-reward exploration. Whole-word vocabularies shorten the sequence lengths and make the prediction task easier, but if a word is rare or, even worse, unseen in the training data, the outputs of the language model may not be correct in many cases. Subword vocabularies have emerged as a sweet spot between these two extremes and are widely used in language models (Schuster and Nakajima, 2012; Sennrich et al., 2015; Kudo, 2018; Provilkov et al., 2020; He et al., 2020). Given a long sequence of tokens and an initial fixed vocabulary, BPE consists of two core operations: (i) compute the most frequent pair of neighboring tokens and add it to the vocabulary, and (ii) merge all instances of the pair in the sequence. These two steps of adding tokens and making merges alternate until a fixed maximum vocabulary size is reached (see the sketch below).

3.2 DISCRETIZING THE ACTION SPACE

In order to run BPE, it is necessary to have an initial vocabulary $V$ as well as a string of discrete tokens. In a continuous action space, one simple way to form tokens is through clustering.
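To make the two BPE operations from Section 3.1 concrete (applied to the discrete tokens constructed in this subsection), the following is a minimal sketch of frequency-based merging over a single token sequence. The tuple representation of merged tokens and the `max_vocab` argument are our illustrative choices, not details from the paper, which also scores merges by distance (Section 3.3).

```python
from collections import Counter

def bpe_merge(seq, vocab, max_vocab):
    """Frequency-based BPE: alternately (i) find the most frequent pair of
    neighboring tokens and add it to the vocabulary, then (ii) merge all
    instances of that pair, until the vocabulary reaches max_vocab."""
    seq = list(seq)  # e.g., discrete action tokens from k-means (Sec. 3.2)
    while len(vocab) < max_vocab and len(seq) > 1:
        (a, b), _ = Counter(zip(seq, seq[1:])).most_common(1)[0]
        merged = (a, b)        # represent the new subword as a pair
        vocab.add(merged)
        out, i = [], 0
        while i < len(seq):    # replace every occurrence of (a, b)
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(merged); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, vocab

# Example: tokens 0 and 1 frequently neighbor each other, so (0, 1)
# becomes a new subword token.
tokens, vocab = bpe_merge([0, 1, 0, 1, 2, 0, 1], {0, 1, 2}, max_vocab=4)
```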
Prior work has leveraged these ideas in similar contexts (Janner et al., 2021; Shafiullah et al., 2022; Jiang et al., 2022) and we follow suit. For simplicity, we perform $k$-means clustering with the Euclidean metric on the actions of demonstrations in $D$ to form a vocabulary of $k$ discrete tokens $V = \{v_1, \ldots, v_k\}$. Our default choice for $k$ will be two times the number of degrees of freedom (DoF) of the original action space, or $2 \cdot d_{act}$. We will further study this choice in Appendix A.1. Such a clustering is the same as the action space of Shafiullah et al. (2022) without the residual correction.

3.3 SCORING MERGES

In NLP, we often have access to a large amount of text data from (mostly) correct human authors. However, for robotics applications we may not have the same quantity of near-optimal (or even suboptimal) demonstrations. As a result, it may be undesirable to merge tokens based on frequency alone. Thus, in addition to merging based on frequency, we implement a variant of our method that merges based on a proxy for the distance traveled in the observation space, in order to encourage the creation of skills that explore diversely in state space and are thus efficient for downstream tasks. We take inspiration from LSD (Park et al., 2022) and CSD (Park et al., 2023) for this choice. At the high sampling rate of continuous control observations, the observation space should be locally Euclidean, so such a measure makes sense as long as the length of skills is short enough. We label the two variants of our method SaS-freq and SaS-dist, respectively (SaS for Subwords as Skills).

More formally, suppose that two neighboring subwords $w_1$ and $w_2$ correspond to the trajectories $\tau_1 = \{(o_1, a_1), \ldots, (o_n, a_n)\}$ and $\tau_2 = \{(o_{n+1}, a_{n+1}), \ldots, (o_m, a_m)\}$. For an instance of the subword $w = \text{concat}(w_1, w_2)$ consisting of the entire trajectory $\tau = \text{concat}(\tau_1, \tau_2)$, we associate the vector $q_\tau = \frac{1}{m} \sum_{i=1}^{m} (o_i - o_1)$. This vector is analogous to the average "heading" of the subword, which ignores possible high-frequency, periodic motion like legs moving up and down. In order to obtain a vector that summarizes $w$, we compute the mean over instances of $w$, $q_w = \mathbb{E}_{\tau \in \text{instances of } w \text{ in } D}[q_\tau]$, which takes into account possible observation noise at different instances (see the sketch below).
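A minimal sketch of these two steps, assuming numpy and scikit-learn; the function names and array shapes are our illustrative choices, not the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def discretize_actions(actions, d_act):
    """Sec. 3.2: cluster continuous actions into k = 2 * d_act tokens."""
    km = KMeans(n_clusters=2 * d_act, n_init=10).fit(actions)  # (T, d_act)
    return km.labels_, km.cluster_centers_  # token per step; token -> action

def heading_vector(obs):
    """Sec. 3.3: q_tau = (1/m) * sum_i (o_i - o_1), the average displacement
    over the observations spanned by one subword instance."""
    obs = np.asarray(obs)  # (m, d_obs)
    return (obs - obs[0]).mean(axis=0)
```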
Algorithm 1 Subword merging and pruning

1: Given dataset \( \mathcal{D} = \{(o_{ij}, a_{ij}) \mid i \in \mathbb{N} \cap [0, N), j \in \mathbb{N} \cap [0, n_i), o_{ij} \in \mathbb{R}^{d_{obs}}, a_{ij} \in \mathbb{R}^{d_{act}}\} \)
2: Given \( k, N_{\text{max}}, N_{\text{min}}, \epsilon \ll 1 \)
3: Run \( k \)-means on actions with \( k \) clusters to get tokens \( \mathcal{V} = \{v_i\}_{i=1}^k \)
4: Tokenize \( \mathcal{D} \) according to \( \mathcal{V} \)
5: Initialize \( \mathcal{W} \leftarrow \mathcal{V}, \mathcal{Q} \leftarrow \emptyset, \bar{q} = 0, \Sigma_q = I \)
6: // Merge vocabulary
7: while \( |\mathcal{W}| < N_{\text{max}} \) do
8: \(\quad \mathcal{W}' \leftarrow \{\text{all possible merges } w = \text{concat}(w_1, w_2) \text{ in } \mathcal{D} \mid w_1, w_2 \in \mathcal{W}\} \) // Get candidates
9: \(\quad\) for \( w' \in \mathcal{W}' \) do
10: \(\quad\quad\) Compute \( q_{w'} = \mathbb{E}_{\text{instances } \tau \text{ of } w' \text{ in } \mathcal{D}} \left[ \frac{1}{|\tau|} \sum_{t=1}^{|\tau|} (o_t - o_1) \right] \) // Compute heading vectors
11: \(\quad\) end for
12: \(\quad w^* = \arg \max_{w' \in \mathcal{W}'} (q_{w'} - \bar{q})^\top \Sigma_q^{-1} (q_{w'} - \bar{q}) \) // Find best possible merge
13: \(\quad \mathcal{W} \leftarrow \mathcal{W} \cup \{w^*\}, \mathcal{Q} \leftarrow \mathcal{Q} \cup \{q_{w^*}\} \) // Add merge to vocabulary
14: \(\quad \bar{q} \leftarrow \mathbb{E}_{q \in \mathcal{Q}}[q], \Sigma_q \leftarrow \text{Cov}_{q \in \mathcal{Q}}(q) + \epsilon I \) // Update vocabulary mean and covariance
15: end while
16: // Prune vocabulary
17: while \( |\mathcal{W}| > N_{\text{min}} \) do
18: \(\quad w^* = \arg \min_{w \in \mathcal{W}} (q_w - \bar{q})^\top \Sigma_q^{-1} (q_w - \bar{q}) \) // Find most redundant subword
19: \(\quad \mathcal{W} \leftarrow \mathcal{W} \setminus \{w^*\}, \mathcal{Q} \leftarrow \mathcal{Q} \setminus \{q_{w^*}\} \) // Remove worst
20: \(\quad \bar{q} \leftarrow \mathbb{E}_{q \in \mathcal{Q}}[q], \Sigma_q \leftarrow \text{Cov}_{q \in \mathcal{Q}}(q) + \epsilon I \) // Update vocabulary mean and covariance
21: end while
22: return \( \mathcal{W} \)

Given an existing vocabulary of subwords \( \mathcal{W} = \{w_0, \ldots, w_{n-1}\} \) and their corresponding vectors \( \mathcal{Q} = \{q_0, \ldots, q_{n-1}\} \), we can compute the mean \( \bar{q} = \mathbb{E}_{q \in \mathcal{Q}}[q] \) and covariance matrix \( \Sigma_q = \text{Cov}_{q \in \mathcal{Q}}(q) + \epsilon I \) for some small \( \epsilon \). Now, we associate a score to each possible new subword according to the Mahalanobis distance between the candidate subword and the set of existing subwords: \( d_w = (q_w - \bar{q})^\top \Sigma_q^{-1} (q_w - \bar{q}) \). We add the subword with maximum distance \( d_w \) to our vocabulary, and we update \( \Sigma_q \) and \( \bar{q} \) at every iteration (see the sketch below). This results in a growing vocabulary of subwords that not only achieve high distance in observation space but are diverse. Such a scoring function also accounts for the fact that different parts of the observation space may have different natural scales. We merge up to a maximum vocabulary size \( |\mathcal{W}| = N_{\text{max}} \). The choice of \( N_{\text{max}} \) is further studied in Appendix A.2.

3.4 Pruning the Subwords

If we stopped after merging to a maximum size, the final vocabulary would contain the intermediate subwords that make up the longest units. In the context of NLP, this redundancy may not be particularly detrimental.
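A sketch of this Mahalanobis scoring, used both to select merges (the argmax on line 12 of Algorithm 1) and, as described next, to prune redundant subwords (the argmin on line 18); numpy only, and the names are ours.

```python
import numpy as np

def mahalanobis_scores(q_candidates, q_vocab, eps=1e-6):
    """d_w = (q_w - q_bar)^T Sigma_q^{-1} (q_w - q_bar) for each candidate
    heading vector, against the current vocabulary's mean and covariance."""
    Q = np.asarray(q_vocab)                      # (n, d_obs) existing vectors
    q_bar = Q.mean(axis=0)
    sigma = np.cov(Q, rowvar=False) + eps * np.eye(Q.shape[1])
    sigma_inv = np.linalg.inv(sigma)
    diff = np.asarray(q_candidates) - q_bar      # (m, d_obs)
    return np.einsum("md,de,me->m", diff, sigma_inv, diff)

# Merging picks the candidate with the largest score (most novel heading);
# pruning removes the vocabulary entry with the smallest score (most redundant).
```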
In reinforcement learning, however, redundancy in the action space of a new policy will result in similar actions competing for probability mass, making exploration and optimization more difficult. Thus we propose pruning the vocabulary. For frequency-based merging, we start with the longest subword and remove subwords that are strictly contained in it, then move to the next longest and repeat the process. We do this until we reach the desired vocabulary size \( N_{\text{min}} \). For distance-based merging, we prune the set of subwords using the same metric as was used to merge. In particular, we find \( w' = \arg \min_w d_w \), update \( \mathcal{W} \leftarrow \mathcal{W} \setminus \{w'\} \), and recompute \( \Sigma_q \) and \( \bar{q} \). We continue pruning in this fashion until reaching a minimum vocabulary size \( |\mathcal{W}| = N_{\text{min}} \). Finally, \( \mathcal{W} \) becomes the action space for a new policy. Algorithm 1 provides the pseudocode for the distance-based method, and Figure 2 provides a graphical representation. We ablate the choice of \( N_{\text{min}} \) in Appendix A.3.

Implicit in our method is an assumption that portions of the demonstrations can be recomposed to solve a new task, i.e., that there exists a policy that solves the new task with this new action space. One can imagine a counter-example where the subwords we obtain lack some critical action sequence without which the task cannot be solved. Still, we will show that this is a reasonable assumption for several sparse-reward tasks.

Figure 3: All skills generated for `antmaze-medium-diverse`, where the transparency is higher for poses earlier in the trajectory. See Appendix B for more details.

4 EXPERIMENTS

In the following sections, we explore the empirical performance of our proposed method: first extracting skills from data, then using those skills as an action space for learning a new policy through sparse-reward RL. We see that there are significant speed and performance benefits, with strong exploration behavior. We also discuss benefits and drawbacks of our unconditional skills when compared to conditional skills like those of SPiRL (Pertsch et al., 2021).

4.1 REINFORCEMENT LEARNING WITH UNCONDITIONAL SKILLS

Table 1: Main comparison (unnormalized scores). SSP corresponds to results from the official code of Pertsch et al. (2021). We report numbers at the end of training for consistency. SFP takes so long that it is unmanageable on many domains. AntMaze is scored 0–1, Kitchen is scored 0–4 in increments of 1, and CoinRun is scored 0–100 in increments of 10. *CoinRun is a discrete-action domain, so instead of SAC only SAC-discrete can be used. SSP results exist for Kitchen (0.8±0.2; Pertsch et al., 2021, Figure 4), but we are unable to reproduce this number using the official code.

| Task | SAC | SAC-discrete | SSP | SFP | SaS-freq | SaS-dist |
|--------------------|-------|--------------|-------|-------|----------|----------|
| antmaze-umaze-diverse | 0.0 | 0.0 | 0.0 | — | 0.0 | 0.76±0.43 |
| antmaze-medium-diverse | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.40±0.55 |
| antmaze-large-diverse | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.34±0.46 |
| kitchen-mixed | 0.0 | 0.0 | 0.0* | 0.12±0.07 | 0.16±0.17 | 0.72±0.40 |
| CoinRun | —* | 0.0 | 5.3±3.4 | 0.0 | 4.90±9.10 | 2.9±2.9 |

**Tasks:** We consider AntMaze and Kitchen from D4RL (Fu et al., 2020), two challenging sparse-reward state-based tasks/datasets.
AntMaze is a maze navigation task with a quadrupedal robot where the reward is 0 except at the goal, and Kitchen is a manipulation task in a kitchen setting where the reward is 0 except on successful completion of a subtask. Demonstrations in AntMaze consist of trajectories with random start and end states in the same maze, collected by a suboptimal scripted policy, while demonstrations in Kitchen consist of sequences of subtasks different from the eventual goal, collected by humans in VR in the same kitchen. We also consider CoinRun (Cobbe et al., 2019), a discrete-action platforming game. Unlike AntMaze and Kitchen, CoinRun is a visual domain, and the demonstrations are collected by humans in levels distinct from those of the final task. All of these domains require many coordinated actions in sequence to achieve any reward, with horizons between 280 and 1000 steps. See Appendix E for more information on the data.

**Baselines:** We consider SAC (Haarnoja et al., 2018b); SAC-discrete (Christodoulou, 2019) on top of our discretized $k$-means actions; Skill-Space Policy (SSP), a VAE trained on sequences of 10 actions at a time (Pertsch et al., 2021); and State-Free Priors (SFP) (Bagatella et al., 2022), a sequence model of actions that is used to inform action selection during SAC inference, which takes the last action... as context. For SAC, SAC-discrete, SSP, and SFP, we implement or run the official code with the default hyperparameters listed in the respective papers. Complete results are available in Table 1. All numbers are taken from the end of training. We report mean and standard deviation across five seeds. As defaults we use $k = 2 \cdot d_{act}$ and $N_{\text{min}} = 16$. We pick $N_{\text{max}}$ per domain such that skill lengths are comparable with SSP's length-10 skills. For more experimental details see Appendix E. For all methods, including ours, skills are not conditioned on observations.

We see in Table 1 that, even in these challenging sparse-reward tasks, our method is the only one able to achieve nonzero reward across all tasks. All settings with zero reward fail to achieve any reward during training. The large standard deviations are due to the fact that some seeds fail to achieve any reward. Figure 3 visualizes 200-step rollouts of all of the discovered subwords for antmaze-medium-diverse. We provide means and standard deviations of subword lengths in extracted vocabularies in Table 2. Failures of frequency-based merging in AntMazes are directly attributable to the discovery of long, constant sequences of actions, likely due to suboptimal demonstration trajectories that often jitter in place.

Due to the simplicity of our method, it also enjoys significant acceleration compared to the baselines. In Table 3, we measure the wall-clock time required to generate skills, as well as inference time for a single rollout. We see that our method achieves substantial speedups compared to prior work, enabling both faster and more efficient learning, as well as faster inference during execution. Our skill discovery is fast as we simply need to run $k$-means and tokenization, while SSP and SFP require training larger generative models. In the case of rollouts, our method predicts an entire sequence of actions using a simple policy every 10 steps or so, while SSP and SFP require much larger models in order to predict the latent variable, and then generate the next action from that latent. The speedup of our method also translates to faster RL (around 10 hours for our method vs.
12 hours for SSP and 1 week for SFP), which leads to faster iteration.

4.2 Exploration Behavior on AntMaze Medium

The stringent evaluation procedure for sparse-reward RL equally penalizes poor learning and poor exploration. In order to shed light on the many zeros in Table 1, we examine the exploration behavior on AntMaze Medium. We choose this domain because it is particularly straightforward to interpret what good and bad exploration look like: coverage of the maze. In Figure 4 and Figure 5 we plot state visitation for the first 1 million of 10 million steps of RL. We show the approximate start position in grey in the bottom left and the approximate goal location in green in the top right. Higher color intensity (saturation) corresponds to a higher probability of that state. Color is scaled nonlinearly according to a power law between 0 and 1 for illustration purposes. Thin white areas between the density and the walls can be attributed to the fact that we plot the center body position, and the legs have a nontrivial size limiting proximity to the wall.

In Figure 4, we show the exploration behavior across methods, averaged over 5 seeds. We see that the 0 values for the final reward in Table 1 for SAC, SSP, and SFP are likely due not to poor optimization, but rather to poor exploration early in training, unlike our method. One reason for this could be that our subwords form a discrete set, so policy exploration does not include small differences in a continuous space. In addition, SAC has fundamental issues in sparse-reward environments, as the signal to the Q function is driven entirely by the entropy bonus, which will lead to uniform weighting on every action and, as a result, Brownian motion in the action space. Such behavior is likely why the default setting for SAC (Haarnoja et al., 2018b) aggressively drives the policy to determinism, but in the sparse-reward setting this also results in a uniform policy.

Table 2: Per-domain subword lengths. Numbers are intended to match the length-10 skills of SSP, but it is difficult to precisely control length due to the merging and pruning process.

| Task | Subword length |
|-----------------------|----------------|
| antmaze-umaze-diverse | 11.3±5.6 |
| antmaze-medium-diverse| 8.5±5.0 |
| antmaze-large-diverse | 12.5±5.3 |
| kitchen-mixed | 9.2±4.5 |
| CoinRun | 9.1±5.6 |

Table 3: Timing on antmaze-medium-diverse in seconds. Methods measured on the same Nvidia RTX 3090 GPU with 8 Intel Core i7-9700 CPU cores @ 3.00 GHz. SSP takes around 36 hours for skill generation and SFP takes around 2 hours.

| Method | Skill Generation | Online Rollout |
|------------|------------------|----------------|
| SSP | 130000±1800 | 0.9±0.05 |
| SFP | 8000±500 | 4.1±0.1 |
| SaS-dist | **210±10** | **0.007±0.0006**|

Figure 4: A visualization of state visitation for RL on antmaze-medium-diverse in the first 1 million timesteps for (a) SAC-discrete, (b) SFP, (c) SSP, and (d) our method. The grey circle in the bottom-left denotes the start position, while the green circle in the top-right indicates the goal. Notice that our method explores the maze much more extensively. SAC's visitation is tightly concentrated on the start state, which is why there is so little red in (a).

Figure 5: State visitation achieved with our method for each of the 5 individual seeds. Notice the diversity of exploration behavior. This is true even for seeds 0, 2 and 3 that, as reflected in the standard deviations in Table 1, eventually finish with a final reward of 0.
Without long sequences of coordinated actions, such exploration is insufficient. In Figure 5, we show the individual-seed visitation of our method in the first 1 million steps. This demonstrates that, even though individual seeds may have some bias, they are all able to explore much more widely than the collective exploration of the baseline methods. Indeed, this suggests that the large standard deviations of our method are a result of an optimization failure, as suggested by Zhou et al. (2022), and not of poor exploration due to bad skill encoding.

4.3 Comparison to Observation-Conditioned Skills

Our method for extracting skills is an unconditional, open-loop method, designed with the idea that the skills should generalize. Still, this comes with the drawback that a policy will have to learn from scratch the right context in which to deploy each skill. Alternatively, observation-conditioned skills bias policy exploration to match that of the demonstrations. This allows for more stable exploration, but worse generalization (Bagatella et al., 2022).

Baselines: Here we compare to the observation-conditioned extensions of SSP: SPiRL and SPiRL-cl (the closed-loop version) (Pertsch et al., 2021; 2022), which bias a policy toward skills used in the exact context of demonstrations in the dataset. We also include OPAL (Ajay et al., 2020), a method similar to SPiRL that uses a flow model over entire trajectories of actions conditioned on observations. We take numbers from the paper, as OPAL is closed-source.

In Table 4, we see that SPiRL and SPiRL-cl show very strong performance on Kitchen, where the overlap between the dataset and the downstream task is exact, but SPiRL fails on AntMaze-large, while SPiRL-cl fails on CoinRun, likely due to differences between the dataset for CoinRun (easy levels) and the downstream task (hard levels). In addition, we notice that BPE with simple frequency merging (SaS-freq) is poor in AntMaze, as discussed previously, but comparable in CoinRun. Note that we are able to replicate results for SPiRL-cl (2–3 in the original paper (Pertsch et al., 2022)), but for SPiRL our result is significantly worse (2–3 in the original paper (Pertsch et al., 2021)). It is unclear from where this discrepancy stems, but we use the official code, for which Kitchen is already implemented. In addition, we examine generalization behavior across observation-conditioned methods. Table 5 highlights the drawback that conditioning has in generalization.

Table 4: Comparison to methods with observation-conditioned skills. In general, we see that conditioning helps when the data closely overlaps with the downstream task (Kitchen), but not in CoinRun, where such an overlap cannot be assumed. With AntMaze the results are mixed, likely due to the suboptimal quality of the demonstrations. We highlight that, even without conditioning, our method is competitive in AntMaze-large and comparable to SPiRL in AntMaze-medium. OPAL is a closed-source method similar to SPiRL, and results are from Ajay et al. (2020).

| Task | SPiRL | SPiRL-cl | OPAL | SaS-freq | SaS-dist |
|-----------------------|---------|----------|---------|----------|----------|
| antmaze-medium-diverse| 0.40±0.49| 1.00±0.00| 0.82±0.04| 0.0 | 0.40±0.55|
| antmaze-large-diverse | 0.0 | 0.20±0.40| 0.0 | 0.0 | 0.34±0.46|
| kitchen-mixed | 1.87±0.16| 3.00±0.00| — | 0.16±0.17| 0.72±0.40|
| CoinRun | 5.32±5.41| 0.0 | — | 4.90±9.10| 2.90±2.90|
In particular, the strongest advantage for conditional skills is in a setting where the data closely matches the final task, but conditioning may be detrimental when we do not have access to sufficiently general demonstrations, like the ~10,000 trajectories in randomized environments that SPiRL uses for visual PointMaze (Pertsch et al., 2021).

Table 5: Results on transferring skills extracted from antmaze-medium-diverse to downstream RL on antmaze-umaze-diverse. We see that methods with conditioning (SPiRL and SPiRL-cl) underperform our simple unconditional method. Similar conclusions were drawn by the authors of SFP (Bagatella et al., 2022, Figures 7, 16), where stronger conditioning fails to generalize.

| Task | SSP | SPiRL | SPiRL-cl | SaS-dist |
|-------------------------------|---------|---------|----------|----------|
| antmaze-medium-diverse → antmaze-umaze-diverse | 0.0 | 0.60±0.49| 0.20±0.40| 0.97±0.12|

5 CONCLUSION

Limitations: As proposed, there are a few key limitations to our method. Discretization removes resolution from the action space, which may be detrimental in settings like fast locomotion (Appendix H), but this may be fixed by more clusters or a residual correction (Shafiullah et al., 2022). In addition, like prior work, execution of our subwords is open-loop, so exploration can be inefficient (Amin et al., 2020) and unsafe (Park et al., 2021). Finally, in order to operate on the CoinRun domain, we downsample inputs from $64 \times 64$ resolution to $32 \times 32$ to make matrix inversion during merging less expensive (2 hours vs. 2 minutes). In high-dimensional visual input domains, our merging may be too computationally expensive to perform. However, this can be resolved by using neural-network features instead of raw images. We also speculate that higher-quality demonstrations could allow us to generate skills simply by merging based on frequency (Table 1, CoinRun), and these demonstrations may be easy to obtain if they don't need to be collected in the deployment domain (Table 5).

Architectures from NLP have made their way into offline RL (Chen et al., 2021; Janner et al., 2021; Shafiullah et al., 2022), but as we have demonstrated, there is a trove of further techniques to explore. Given prior evidence, and the experiments in Appendix C, that discretization can be helpful in offline RL, we leveraged such discretization to form skills through a simple tokenization method. Such a method is much faster both in skill generation and in policy inference, and leads to strong performance with a relatively small sample budget on several challenging sparse-reward tasks. Moreover, the discrete nature of our skills lends itself to interpretation: one can simply look at the execution to figure out what has been extracted (Appendix B). Given its many advantages, we believe that such a tokenization method is the first step on a new road to efficient reinforcement learning.

REFERENCES

J. Achiam and S. Sastry. Surprise-based intrinsic motivation for deep reinforcement learning. *arXiv preprint arXiv:1703.01732*, 2017.
A. Ajay, A. Kumar, P. Agrawal, S. Levine, and O. Nachum. OPAL: Offline primitive discovery for accelerating offline reinforcement learning. *arXiv preprint arXiv:2010.13611*, 2020.
S. Amin, M. Gomrokchi, H. Aboutalebi, H. Satija, and D. Precup. Locally persistent exploration in continuous control tasks with sparse rewards. *arXiv preprint arXiv:2012.13658*, 2020.
M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba.
Hindsight experience replay. *arXiv preprint arXiv:1707.01495*, 2017.
P.-L. Bacon, J. Harb, and D. Precup. The option-critic architecture. In *Proceedings of the National Conference on Artificial Intelligence (AAAI)*, pages 1726–1734, 2017.
M. Bagatella, S. Christen, and O. Hilliges. SFP: State-free priors for exploration in off-policy reinforcement learning. *Transactions on Machine Learning Research*, 2022.
A. G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. *Discrete Event Dynamic Systems*, 13:41–77, 2003.
M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2016.
L. Biewald. Experiment tracking with Weights and Biases, 2020. URL https://www.wandb.com/. Software available from wandb.com.
C. Boutilier, R. I. Brafman, and C. Geib. Prioritized goal decomposition of Markov decision processes: Toward a synthesis of classical and decision theoretic planning. In *Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)*, pages 1156–1162, 1997.
Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. *arXiv preprint arXiv:1808.04355*, 2018a.
Y. Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation. *arXiv preprint arXiv:1810.12894*, 2018b.
L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In *Advances in Neural Information Processing Systems (NeurIPS)*, pages 15084–15097, Dec. 2021.
N. Chentanez, A. Barto, and S. Singh. Intrinsically motivated reinforcement learning. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2004.
P. Christodoulou. Soft actor-critic for discrete action settings. *arXiv preprint arXiv:1910.07207*, 2019.
K. Cobbe, O. Klimov, C. Hesse, T. Kim, and J. Schulman. Quantifying generalization in reinforcement learning. In *International Conference on Machine Learning*, pages 1282–1289, 2019.
R. Dadashi, L. Hussenot, D. Vincent, S. Girgin, A. Raichuk, M. Geist, and O. Pietquin. Continuous control with action quantization from demonstrations. In *International Conference on Machine Learning*, pages 4537–4557, 2022.
C. Daniel, G. Neumann, and J. Peters. Hierarchical relative entropy policy search. In *Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)*, pages 273–281, 2012.
P. Dayan and G. E. Hinton. Feudal reinforcement learning. In *Advances in Neural Information Processing Systems (NeurIPS)*, 1992.
yAcLwJu9qs
Humans are presented with one image at a time for 200 ms? Isn't that too short to notice the image? Can the human participants, on average, recognize objects in the image within that time? Would it be safe to assume that human participants would do an even better job when presented with an image for up to 1 s? It is mentioned that the time was set to ensure fairness. Do the machines classify each image within the same 200 ms?
ASSESSING VISUALLY-CONTINUOUS CORRUPTION ROBUSTNESS OF NEURAL NETWORKS RELATIVE TO HUMAN PERFORMANCE

Anonymous authors
Paper under double-blind review

ABSTRACT

While Neural Networks (NNs) have surpassed human accuracy in image classification on ImageNet, they often lack robustness against image corruption, i.e., corruption robustness. Yet such robustness is seemingly effortless for human perception. In this paper, we propose visually-continuous corruption robustness (VCR), an extension of corruption robustness that allows assessing it over the wide and continuous range of changes that correspond to human perceptive quality (i.e., from the original image to the full distortion of all perceived visual information), along with two novel human-aware metrics for NN evaluation. To compare the VCR of NNs with human perception, we conducted extensive experiments on 14 commonly used image corruptions with 7,718 human participants and state-of-the-art robust NN models with different training objectives (e.g., standard, adversarial, corruption robustness), different architectures (e.g., convolutional NNs, vision transformers), and different amounts of training data augmentation. Our study showed that: 1) assessing robustness against continuous corruption can reveal insufficient robustness undetected by existing benchmarks; as a result, 2) the gap between NN and human robustness is larger than previously known; and finally, 3) some image corruptions have a similar impact on human perception, offering opportunities for more cost-effective robustness assessments. Our validation set with 14 image corruptions, human robustness data, and the evaluation code are provided as a toolbox and a benchmark.

1 INTRODUCTION

For Neural Networks (NNs), achieving robustness against corruptions that can be encountered during deployment (i.e., corruption robustness) is essential for the application of NN models in safety-critical domains (Hendrycks & Dietterich, 2019). Since NN models in these domains automate tasks typically performed by humans, it is necessary to compare the model's robustness with that of humans.

Human versus NN robustness. Corruption robustness measures the average-case performance of an NN or humans on a set of image corruption functions (Hendrycks & Dietterich, 2019). Existing studies, including out-of-distribution anomalies (Hendrycks & Gimpel, 2017), benchmarking (Hendrycks & Dietterich, 2019; Hendrycks et al., 2021b), and comparison with humans (Hu et al., 2022; Geirhos et al., 2021), generally evaluate robustness against a pre-selected, fixed set of transformation parameter values that represent varying degrees of image corruption. However, parameter values cannot accurately represent the degree to which human perception is affected by image corruptions. For instance, using the same parameter to brighten an already bright image will make the objects harder to see but will have the opposite effect on a dark image (Hu et al., 2022). Additionally, humans can perceive and generalize across a wide and continuous spectrum of visual corruptions, from subtle to completely distorted (Geirhos et al., 2019a; Sheikh & Bovik, 2006). Relying solely on preset parameter values for test sets could leave the full range of visual corruptions incompletely covered, biasing evaluation results so that they cannot accurately represent NN robustness relative to humans.

Contributions and Outlook.
To address the above problem, we propose a new concept called visually-continuous corruption robustness (VCR), which focuses on the robustness of neural networks (NNs) against a continuous range of image corruption levels. Additionally, we introduce two novel human-aware NN evaluation metrics (HMRI and MRSI) to assess NN robustness in comparison to human performance. We conducted extensive experiments with 7,718 human participants on the Mechanical Turk platform on 14 commonly used image transformations. Comparing NN and human VCR with our metrics, we found that a significant robustness gap between NNs and humans still exists: no model can fully match human performance throughout the entire continuous range in terms of both accuracy and prediction consistency, and only a few models can exceed humans, by a small margin, at specific levels of corruption. Furthermore, our experiments yield insightful findings about the robustness of humans and state-of-the-art (SoTA) NNs concerning accuracy, degrees of visual corruption, and consistency of classification, which can contribute towards the development of NNs that match or surpass human perception. We also discovered classes of corruption transformations for which humans showed similar robustness (e.g., different types of noise), while NNs reacted differently. Recognizing these classes can contribute to reducing the cost of measuring human robustness and elucidating the differences between humans and computational models. To foster future research, we open-sourced all human data as a comprehensive benchmark, along with Python code that enables test set generation, testing, and retraining.

2 METHODS: VCR, TESTING, METRICS, CROWDSOURCING, NN MODELS

To study NN robustness against a wide and continuous spectrum of visual changes, we first define VCR and then describe our method for generating test sets. To study the VCR of NNs in relation to humans, we also present the human-aware metrics, followed by the human robustness data and NN models used in the study.

Visually-Continuous Corruption Robustness (VCR). A key difference between corruption robustness and VCR is that the latter is defined relative to the visual impact of image corruption on human perception, rather than the transformation parameter domain. To quantify visual corruption, VCR uses the Image Quality Assessment (IQA) metric Visual Information Fidelity (VIF) (Sheikh & Bovik, 2006; Kumar, 2020). VIF measures the perceived quality of a corrupted image \( x' \) compared to its original form \( x \) by measuring the visual information unaffected by the corruption. Thus, we define the change in perceived quality caused by the corruption as \( \Delta_v(x, x') = \max(0, 1 - \text{VIF}(x, x')) \); see the appendix for more detail on \( \Delta_v \), and the sketch below for a concrete computation. With \( \Delta_v \), whose value ranges from 0 to 1, we can consider VCR against the wide, finite, and continuous spectrum of visual corruptions ranging from no degradation of visual quality (i.e., the original image, \( \Delta_v = 0 \)) to the full distortion of all visual information (\( \Delta_v = 1 \)). Limitation: VCR is limited to image corruptions to which the chosen IQA metric is applicable; thus, by using VIF, VCR is limited to pixel-level corruption only. Further research is needed for metrics suitable for other types of corruption (e.g., geometric).
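As referenced above, the following is a minimal sketch of computing \( \Delta_v \). We assume the `sewar` package's pixel-domain VIF implementation (`vifp`) as an illustrative choice; any VIF implementation could be substituted.

```python
import numpy as np
from sewar.full_ref import vifp  # pixel-domain VIF; an assumed dependency

def delta_v(original, corrupted):
    """Delta_v(x, x') = max(0, 1 - VIF(x, x')): 0 for an unchanged image,
    approaching 1 as all perceived visual information is destroyed."""
    return max(0.0, 1.0 - vifp(np.asarray(original), np.asarray(corrupted)))
```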
For VCR, we consider a classifier NN \( f : X \rightarrow Y \) trained on samples of a distribution of input images \( P_X \), a ground-truth labeling function \( f^* \), and a parameterized image corruption function \( T_X \) with a parameter domain \( C \). We wish to consider the robustness of \( f \) against images with all degrees of visual corruption uniformly ranging from \( \Delta_v = 0 \) to \( \Delta_v = 1 \). Therefore, given a value \( v \in [0, 1] \), we define \( P(x, x'|v) \) as the joint distribution of original images \( x \) and corresponding corrupted images \( x' = T_X(x, c), c \in C \), with \( \Delta_v(x, x') = v \). VCR is defined in the presence of a robustness property \( \gamma \) that \( f \) should satisfy in the presence of \( T_X \):

\[ R_{\gamma} = \mathbb{E}_{v \sim \text{Uniform}(0,1)}\left[ P_{(x,x') \sim P(x,x'|v)}(\gamma) \right]. \tag{1} \]

In this paper, we instantiate VCR with two existing robustness properties (see Fig. 7 in the appendix). The first one is accuracy (\( a \)), requiring that the prediction on corrupted images should be correct, i.e., \( f(x') = f^*(x) \). It is also used in the existing definition of corruption robustness (Hendrycks & Dietterich, 2019). Thus,

\[ R_a = \mathbb{E}_{v \sim \text{Uniform}(0,1)}\left[ P_{(x,x') \sim P(x,x'|v)}(f(x') = f^*(x)) \right]. \tag{2} \]

Note that distributions other than uniform can be used based on the application. For example, one may wish to favour robustness against heavy snow conditions for NNs deployed in arctic areas. The second property is prediction consistency (\( p \)), requiring consistent predictions before and after corruption, i.e., \( f(x') = f(x) \) (Hu et al., 2022). It is applicable when ground truth is not available, which is common during deployment. Thus,

\[ R_p = \mathbb{E}_{v \sim \text{Uniform}(0,1)}\left[ P_{(x,x') \sim P(x,x'|v)}(f(x') = f(x)) \right]. \tag{3} \]

**Testing VCR.** The VCR of a subject (a human or an NN) is measured by first generating a test set through sampling and then estimating VCR using the sampled data. The test set is generated by sampling images and applying corruption to obtain \( P(x,x'|v) \) for different \( \Delta_v \) values \( v \). We sample \( x \sim P_X \) and \( c \sim \text{Uniform}(C) \), and obtain \( x' = T_X(x,c) \) and \( v = \Delta_v(x,x') \), resulting in samples \((x,x',c,v)\). Then, we divide them into groups of \((x,x',c)\), each with the same \( v \) value. Next, by dropping \( c \), we obtain groups of \((x,x')\) with the same \( v \), which are samples from \( P(x,x'|v) \). Note that this procedure requires only sufficient data in each group, but not uniformity, i.e., \( v \sim \text{Uniform}(0,1) \) is not required. The varying size of each group, i.e., the non-uniformity of the \( v \) distribution, will not distort VCR estimates, but only impact the estimate uncertainty at a given \( v \). Further, interpolation in the next step helps address any missing points (see Alg. 1 in the appendix). With the test set, we estimate the performance w.r.t. the property \( \gamma \) for each \( v \). For each \( v \) in the test data, we compute the rate of accurate predictions \( f(x') = f^*(x) \) to estimate accuracy, i.e., \( a_v = P_{(x,x') \sim P(x,x'|v)}(f(x') = f^*(x)) \) [resp. consistent predictions \( f(x') = f(x) \) to estimate consistency, i.e., \( p_v = P_{(x,x') \sim P(x,x'|v)}(f(x') = f(x)) \)]; a sketch of this estimation step follows below.
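A minimal sketch of that estimation, assuming arrays of per-image \( \Delta_v \) values and model predictions on original and corrupted images; grouping by binned \( v \) (40 bins here) is our illustrative choice.

```python
import numpy as np
from collections import defaultdict

def per_level_rates(v_vals, pred_orig, pred_corr, labels, n_bins=40):
    """Estimate a_v = P(f(x') = f*(x)) and p_v = P(f(x') = f(x)) for each
    (binned) degree of visual corruption v; returns (v, a_v, p_v) points."""
    groups = defaultdict(list)
    for v, po, pc, y in zip(v_vals, pred_orig, pred_corr, labels):
        groups[min(int(v * n_bins), n_bins - 1)].append((po, pc, y))
    points = []
    for b, items in sorted(groups.items()):
        po, pc, y = (np.array(col) for col in zip(*items))
        points.append(((b + 0.5) / n_bins,       # bin midpoint as v
                       float(np.mean(pc == y)),   # accuracy a_v
                       float(np.mean(pc == po)))) # consistency p_v
    return points  # to be smoothed into the spline curves s_a and s_p
```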
Then, by plotting \((v, a_v)\) and \((v, p_v)\) and applying monotonic smoothing splines (Koenker et al., 1994) to reduce randomness and outliers, we obtain smoothed spline curves \( s_a \) and \( s_p \), respectively. The curves \( s_\gamma \) (namely, \( s_a \) and \( s_p \)) describe how the performance w.r.t. the robustness property \( \gamma \) (namely, \( a \) and \( p \)) decreases as the visual corruption in images increases. Finally, we estimate \( R_a = \mathbb{E}_{v \sim \text{Uniform}(0,1)}(a_v) \) [resp. \( R_p = \mathbb{E}_{v \sim \text{Uniform}(0,1)}(p_v) \)] as the area under the spline curve, i.e., \( \hat{R}_a = A_a = \int_0^1 s_a(v)\,dv \) [resp. \( \hat{R}_p = A_p = \int_0^1 s_p(v)\,dv \)].

**Human-Aware Metrics.** A commonly used metric for measuring corruption robustness is the Corruption Error (CE) (Hendrycks & Dietterich, 2019): the top-1 classification error rate on the corrupted images, normalized by the error rate of a baseline model. CE can be used to compare an NN with humans if the baseline model is set to be humans. However, CE is not able to determine whether an NN can exceed humans, and NN models could potentially have super-human accuracy for particular types of perturbations or in some \( \Delta_v \) ranges. Therefore, inspired by CE, we propose two new human-aware metrics: the Human-Relative Model Robustness Index (HMRI), which measures NN VCR relative to human VCR, and the Model Robustness Superiority Index (MRSI), which measures how much an NN exceeds human VCR. These metrics take both the estimated spline curve for humans, \( s_h^\gamma \), and for the NN, \( s_m^\gamma \), as inputs, and we denote the areas under these curves as \( A_h^\gamma \) and \( A_m^\gamma \), respectively (see Fig. 8).

**Definition 1 [Human-Relative Model Robustness Index (HMRI)].** Given \( s_h^\gamma \) and \( s_m^\gamma \), let

\[ A_{h>m}^\gamma = \int_0^1 \left( s_h^\gamma(v) - s_m^\gamma(v) \right)^+ dv \]

denote the average (accuracy or preservation) performance lead of humans over a model across the visual change range, where the performance lead is defined as the positive part of the performance difference, i.e., \( (s_h^\gamma(v) - s_m^\gamma(v))^+ = \max(0, s_h^\gamma(v) - s_m^\gamma(v)) \). HMRI, which quantifies the extent to which a DNN can replicate human performance, is defined as

\[ \text{HMRI} = \frac{A_h^\gamma - A_{h>m}^\gamma}{A_h^\gamma} = 1 - \frac{A_{h>m}^\gamma}{A_h^\gamma}. \]

The HMRI value ranges over \([0,1]\); a higher HMRI indicates an NN model closer to human VCR, and HMRI = 1 signifies that \( s_m^\gamma \) is the same as or completely above \( s_h^\gamma \) over the entire \( \Delta_v \) domain, meaning that the NN is at least as reliable as a human (see Fig. 8 in the appendix).

**Definition 2 [Model Robustness Superiority Index (MRSI)].** Given \( s_h^\gamma \) and \( s_m^\gamma \), let

\[ A_{m>h}^\gamma = \int_0^1 \left( s_m^\gamma(v) - s_h^\gamma(v) \right)^+ dv \]

denote the average performance lead of a model over a human across the visual change range. MRSI, which quantifies the extent to which a DNN model can surpass human performance, is defined as

\[ \text{MRSI} = \frac{A_{m>h}^\gamma}{A_m^\gamma}. \]

The MRSI value ranges over \([0,1)\), with a higher value indicating better performance than humans. MRSI = 0 means that the given NN model performs worse than or equal to humans over the entire \( \Delta_v \) domain. A positive MRSI value indicates that the given NN model performs better than humans at least in some ranges of \( \Delta_v \) (see Fig. 8). Both metrics reduce to simple integrals of the fitted curves; a sketch follows below.
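Given the two fitted curves evaluated on a dense grid over \([0, 1]\), HMRI and MRSI can be computed by numerical integration; trapezoidal integration is our illustrative choice.

```python
import numpy as np

def hmri_mrsi(v, s_h, s_m):
    """HMRI = 1 - A_{h>m}/A_h and MRSI = A_{m>h}/A_m, where A_{h>m}
    integrates the positive part of (s_h - s_m) over v, and vice versa."""
    a_h, a_m = np.trapz(s_h, v), np.trapz(s_m, v)
    a_h_gt_m = np.trapz(np.maximum(s_h - s_m, 0.0), v)
    a_m_gt_h = np.trapz(np.maximum(s_m - s_h, 0.0), v)
    return 1.0 - a_h_gt_m / a_h, a_m_gt_h / a_m

# Usage: evaluate the spline curves on v = np.linspace(0, 1, 1001) first.
```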
Comparing humans and NNs with HMRI and MRSI yields three possible scenarios: (1) humans' performance fully exceeds the NN's, i.e., \( 0 < \text{HMRI} < 1 \) and \( \text{MRSI} = 0 \); (2) the NN's performance fully exceeds humans', i.e., \( \text{HMRI} = 1 \) and \( \text{MRSI} > 0 \); and (3) humans' performance is better than the NN's in some \( \Delta_v \) intervals and worse in others, i.e., \( \text{HMRI} < 1 \) and \( \text{MRSI} > 0 \).

**Image Corruptions.** In this paper, we focus on studying the VCR of NNs in relation to humans for 14 commonly used image corruptions from three different sources: Shot Noise, Impulse Noise, Gaussian Noise, Glass Blur, Gaussian Blur, Defocus Blur, Motion Blur, Brightness, and Frost from IMAGENET-C (Hendrycks & Dietterich, 2019); Blur, Median Blur, Hue Saturation Value, and Color Jitter from Albumentations (Buslaev et al., 2020); and Uniform Noise from Geirhos et al. (2019a). See the appendix for a visualization of these corruptions.

**Crowdsourcing.** Given that VCR is focused on average-case performance, we chose to use crowdsourcing for measuring human performance. This allowed us to involve a large number of participants for a more precise estimation of average-case human performance. The experiment is designed following (Hu et al., 2022) and (Geirhos et al., 2019a). The experiment procedure is a forced-choice image categorization task: humans are presented with one image at a time, for 200 ms to limit the influence of recurrent processing, and asked to choose a correct category out of 16 entry-level class labels (Geirhos et al., 2019a). For NN models, the 1,000-class decision vector was mapped to the same 16 classes using the WordNet hierarchy (Geirhos et al., 2019a). The time to classify each image was set to ensure fairness in the comparison between humans and machines (Firestone, 2020). Between images, we showed a noise mask to minimize feedback influence in the brain (Geirhos et al., 2019a). We included qualification tests and sanity checks aimed at filtering out cases of participants misunderstanding the task and spammers (Papadopoulos et al., 2017), and only considered results from those participants who passed both tests. As a result, we had 7,718 participants and obtained approximately (1) 70,000 human predictions on images with different levels of visual corruption; and (2) 50,000 human predictions on original images, as these can be repeated in experiments for different corruptions. The same original image, corrupted or not, was never shown again to the same participant.

**NN models.** We studied 11 standard supervised models: NOISYMIX, NOISYMIX_NEW (Erichson et al., 2022), SIN, SIN_IN, SIN_IN_IN, HMANY, HAUGMIX (Hendrycks et al., 2020), STANDARD_R50 (Paszke et al., 2019), ALEXNET (Krizhevsky et al., 2012); 4 adversarial learning models: DO_50_2_LINF (Salman et al., 2020), LIU_SWIN-L, LIU_CONVNEXT-L (Liu et al., 2023), SINGH_CONVNEXT-L-CONVSTEM (Singh et al., 2023); 2 SWSL models: SWSL_RESNET18, SWSL_RESNEXT101_32X16D (Yalniz et al., 2019); 3 ViT models: TIAN_DEIT-S, TIAN_DEIT-B (Tian et al., 2022), DINOV2_GIANT (Oquab et al., 2023); and 1 CLIP model (clip-vit-base-patch32) (Radford et al., 2021). For CLIP, we used a simple prompt "a picture of (ImageNet class)" while tokenizing the labels. See the appendix for more details on the models and their selection.

### 3 TESTING ROBUSTNESS AGAINST VISUAL CORRUPTION

IMAGENET-C is the SoTA benchmark for corruption robustness.
Rather than considering the continuous range of corruption like VCR, IMAGENET-C includes all IMAGENET validation images corrupted using 5 pre-selected parameter values for each type of corruption (Hendrycks & Dietterich, 2019). This section compares robustness measured with IMAGENET-C vs. VCR on all 9 IMAGENET-C corruption functions in our study. Due to the page limit, we include full results in the appendix.

**Visual Corruption in Test Sets.** For each corruption, the test sets we generate for checking VCR contain 50,000 images, mirroring the size of the IMAGENET (Russakovsky et al., 2015) validation set, while IMAGENET-C includes \(5 \times 50{,}000\) images. Because of the difference in how the test sets are generated, we can observe two major differences in the distributions of degrees of visual corruption: they have different coverage and peak at different values (e.g., Fig. 1). To quantitatively assess the actual coverage of \(\Delta_v\) in the test sets, Tab. 1 gives the coverage as a percentage of the full \(\Delta_v\) range of \([0, 1]\). To compute it, the distribution is divided into 40 bins of equal width. A bin is considered covered if it contains 20 or more images. The coverage is then determined by dividing the number of covered bins by the total number of bins (40).

We observed that IMAGENET-C exhibits a low coverage of \(\Delta_v\) values. Specifically, as shown in Fig. 1 and Tab. 1, the distribution of IMAGENET-C for Gaussian Blur has a coverage of only 56.4%, focusing mainly on the center of the entire domain of \(\Delta_v\) and missing coverage for low and high \(\Delta_v\) values, which can lead to biased evaluation. As we show in the appendix, the same can be observed for most ImageNet-C corruption functions. On the other hand, our test sets provide coverage for almost the entire domain, with a coverage percentage of 97.4%. This pattern holds true for other corruption functions as well—our test sets have consistently higher coverage than ImageNet-C.

Figure 1: Histograms showing the \(\Delta_v\) distribution of the ImageNet-C and our VCR test sets for Gaussian Blur.

Table 1: \(\Delta_v\) coverage comparison with ImageNet-C.

| Corruption | Coverage |
|------------------|----------|
| Brightness | 0.590 |
| Gaussian Blur | 0.564 |
| Defocus Blur | 0.534 |
| Shot Noise | 0.462 |
| Frost | 0.436 |
| Gaussian Noise | 0.436 |
| Impulse Noise | 0.385 |
| Motion Blur | 0.333 |
| Glass Blur | 0.333 |

Figure 2: Comparison between ImageNet-C and VCR with Gaussian Noise. Models discussed in the text are marked by a red triangle.

As for VCR, Shot Noise and Impulse Noise have relatively low coverage, because the level of noise these functions add is exponential in their parameters. As a result, uniform sampling of the parameter range \(C\) fails to cover small \(\Delta_v\) values. When using uniform sampling over \(C\), reaching full coverage of \(\Delta_v\) would require a large amount of data. Note, however, that Alg. 1 still computes VCR over the full \(\Delta_v\) range of \([0, 1]\), and the lack of samples for low values of \(\Delta_v\) has a limited impact on the VCR estimate. This is because we fit a monotonic spline that is anchored with a known initial performance at \(\Delta_v = 0\), as discussed in the appendix.

Remark: The reported accuracy on ImageNet-C can be directly impacted both by a lack of coverage and by non-uniformity, as it is computed as the average accuracy over all transformed images. In contrast, the shape of the \(\Delta_v\) distribution in the generated test set does not impact VCR once sufficient coverage is achieved to estimate the spline curves \(s_\gamma\), as already explained.
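As a concrete illustration of the coverage statistic described above, the following is a minimal sketch (our own, not the paper's code), assuming `deltas` is an array of per-image \(\Delta_v\) values in \([0, 1]\):

```python
import numpy as np

def delta_v_coverage(deltas, n_bins=40, min_count=20):
    """Fraction of equal-width bins over [0, 1] holding >= min_count images."""
    counts, _ = np.histogram(deltas, bins=n_bins, range=(0.0, 1.0))
    return np.count_nonzero(counts >= min_count) / n_bins
```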
**Robustness Evaluation Results.** Next, we compare the robustness evaluation results obtained with the ImageNet-C and VCR test sets. Consider the results for Gaussian Noise in Fig. 2. NOISYMIX and NOISYMIX_NEW have almost the same robust accuracy on ImageNet-C, but NOISYMIX_NEW has a higher \(\hat{R}_a\); similarly, SIN has a higher ImageNet-C robust accuracy but a lower \(\hat{R}_a\) than SIN_IN_IN. This is due to the almost complete lack of coverage for \(\Delta_v < 0.5\) for Gaussian Noise in ImageNet-C (see Tab. 1 and Fig. 11f), which can lead to evaluation results biased towards \(\Delta_v \geq 0.5\). Checking VCR allows us to detect such biases.

In addition to accuracy, VCR can also be used to check whether an NN preserves its predictions after corruption, i.e., the prediction consistency property \(p\), which gives us additional information about NN robustness. From Fig. 2b,c we can see that the model TIAN_DEIT-B has a higher \(\hat{R}_a\) than SINGH_CONVNEXT-L-CONVSTEM but a lower \(\hat{R}_p\). This suggests that even though TIAN_DEIT-B has better accuracy on corrupted images, it labels the same image with different labels before and after the corruption. Since ground truth is hard to obtain during deployment, low prediction consistency indicates issues with model stability and could raise concerns about when to trust the model's predictions. The results for the remaining corruptions are in the appendix.

Summary: It is essential to test robustness before deploying NNs into an environment with a wide and continuous range of visual corruptions. Our results confirmed that testing robustness in this range using a fixed and pre-selected number of parameter values can lead to undetected robustness issues, which can be avoided by checking VCR. Additionally, accuracy alone cannot represent model stability in the face of corruptions, which can be addressed by testing \(R_p\).

Figure 3: VCR evaluation results for Gaussian Noise. Results include, for each NN, the estimated curves \(s_a\) and \(s_p\) (representing how the performance w.r.t. the robustness properties \(a\) and \(p\) decreases as \(\Delta_v\) increases), and the corresponding HMRI and MRSI values. Results are colored based on their category: Human, Vision Transformer, Supervised Learning, SWSL, Adversarial Training, CLIP.

Figure 4: VCR evaluation results for Uniform Noise.

4 VCR OF DNNs COMPARED WITH HUMANS

We use our two new human-aware metrics, HMRI and MRSI, and the data from the human experiment to compare the VCR of the studied models against human performance as a baseline. For Gaussian Noise, Fig. 3 presents our measured HMRI and MRSI values for \(R_a\) and \(R_p\). For both metrics, a higher value indicates better robustness. As shown in Fig. 3a, no NN has reached 1.0 for HMRI\(_a\), and in Fig. 3d, only 3 out of 21 NNs (DINOV2_GIANT, TIAN_DEIT-B, and SINGH_CONVNEXT-L-CONVSTEM) reached 1.0 for HMRI\(_p\), indicating that there are still unclosed gaps between human and NN robustness, with humans giving more accurate and more consistent predictions in the face of corruptions than most SoTA NNs. These three top-performing models also have the highest HMRI values for both \(R_a\) and \(R_p\), making them the models closest to human robustness.
In Fig. 3b, we can see that these three models have MRSI values above 0.0, indicating that they surpass human accuracy in certain ranges of visual corruption. This can be visualized by checking the estimated curves \(s_a\) shown in Fig. 3c: the top-three models exceed human accuracy (the red curve) when \(\Delta_v > 0.85\). For prediction consistency, Fig. 3e shows that all NNs have an MRSI value above 0.0; this is because, as shown in Fig. 3f, all NN curves are above the human curve when the \(\Delta_v\) value is small. Specifically, the top-three models completely exceed humans in the entire \(\Delta_v\) range.

Similarly, for Uniform Noise, as shown in Fig. 4a and Fig. 4d, no model reached 1.0 for HMRI\(_a\), and only the top-three models reached 1.0 for HMRI\(_p\). Together with Fig. 4b and Fig. 4e, we can see that for both \(R_a\) and \(R_p\), TIAN_DEIT-B has higher HMRI values but TIAN_DEIT-S has higher MRSI values. This suggests that while TIAN_DEIT-B is closer to human performance, TIAN_DEIT-S exceeds human performance more. This result may seem counter-intuitive but can be explained with the curves \(s_a\) and \(s_p\), which represent how the performance w.r.t. the robustness properties \(a\) and \(p\) decreases as \(\Delta_v\) increases, as shown in Fig. 4c and Fig. 4f. From both \(s_a\) and \(s_p\), we observed that for \(\Delta_v\) values less than 0.8, the performance of TIAN_DEIT-B is higher than that of TIAN_DEIT-S and closer to human performance, hence the higher HMRI value; after \(\Delta_v = 0.8\), when human performance starts decreasing, the performance of TIAN_DEIT-B drops rapidly to much below that of TIAN_DEIT-S, hence the lower MRSI value. This suggests that both HMRI and MRSI are useful for comparing NN robustness, and our curves \(s_a\) and \(s_p\) can provide further information on NN robustness at different degrees of visual corruption.

Overall, in both Fig. 3 and Fig. 4, we observed that the three ViT models (shown in purple) have the best performance for both \(R_a\) and \(R_p\), making them the models closest to human robustness. The same can also be observed for the rest of the corruption functions; see the appendix for more details. This indicates that the vision transformer is the most promising architecture for reaching human-level robustness, even outperforming models trained with additional training data. The data in the appendix also indicates that the biggest remaining robustness gap is for blur corruptions. Furthermore, as we show in the appendix, our generated test sets can be used during model retraining for improved robustness relative to humans, resulting in higher HMRI and MRSI values.

Summary: As our results suggest, when considering the full range of visually-continuous corruption, no NN can match human accuracy, especially for blur corruptions, and only the best-performing ones can match human prediction consistency. For some specific degrees of corruption, a few NNs can exceed humans, mostly by tiny margins. This highlights a more substantial gap between human and NN robustness than previously identified by Geirhos et al. (2021). By evaluating VCR using our human-centric metrics, we gain deeper insights into the robustness gap, which can aid the development of models closer to human robustness.

5 VCR FOR VISUALLY SIMILAR CORRUPTION FUNCTIONS

One noteworthy observation we made from our experiments with humans is the existence of visually similar corruption functions. This can contribute towards reducing experiment costs and a better understanding of the differences between humans and NNs.
Different corruptions change different aspects of the images, e.g., image colour, contrast, and the amount of additive visual noise, and thus affect human perception differently (Geirhos et al., 2019a). Also, multiple different corruption functions can be implemented for the same visual effect, such as Gaussian Noise and Impulse Noise for noise addition. Although the difference between Gaussian Noise and Impulse Noise can be picked up by complex NN models, an average human would struggle to distinguish between the two. Therefore, for a specific visual effect, there should exist a class of corruption functions implementing the effect whose members an average human is unable to tell apart. We call corruption functions in the same class visually similar. We postulate that since visually similar functions, by definition, affect human perception similarly, they affect human robustness similarly as well. Therefore, human data for one function can be reused for other similar functions in the same class, possibly reducing experiment costs.

Figure 5: Comparing human performance spline curves \(s_h^a\) for similar and dissimilar corruption functions. For each curve, the coloured region around the curve is the 83% confidence interval used for the comparison of similarity. See \(s_h^p\) in the supplementary materials.

Since VCR is estimated with the spline curves \(s_h^a\) and \(s_h^p\), if the difference among the curves of a set of functions is statistically insignificant, human data (i.e., the spline curves) can be reused among the functions in this set. In Fig. 5, we plot the smoothed spline curves \(s_h^a\) and \(s_h^p\) obtained for all 14 corruption functions included in our experiments. We can observe that, for all corruption functions shown, human performance decreases slowly for small values of visual degradation (\(\Delta_v\)), but once \(\Delta_v\) reaches a turning point, human performance starts decreasing more rapidly. Then, we observe that spline curves obtained for certain blur and noise transformations have similar shapes, while those for dissimilar transformations start decreasing at different turning points and with different slopes. More specifically, the differences between two spline curves are statistically insignificant if their 83% confidence intervals overlap (Koenker et al., 1994); a simple overlap check is sketched after the summary below.

Summary: By checking statistical significance with the 83% confidence interval for each corruption function, we empirically observed two classes of visually similar corruptions in our experiments with humans: (1) the noise class: Shot Noise, Impulse Noise, Gaussian Noise, and Uniform Noise; and (2) the blur class: Blur, Median Blur, Gaussian Blur, Glass Blur, Defocus Blur. The remainder of the 14 corruptions we considered are dissimilar. See Fig. 5.
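A minimal sketch of the overlap criterion just described (our own illustration, assuming the two curves and their 83% confidence-interval bounds are sampled on a common grid of \(\Delta_v\) values):

```python
import numpy as np

def curves_statistically_similar(lo_a, hi_a, lo_b, hi_b):
    """Treat two spline curves as statistically similar when their 83%
    confidence intervals overlap at every sampled visual-change value."""
    lo_a, hi_a, lo_b, hi_b = map(np.asarray, (lo_a, hi_a, lo_b, hi_b))
    return bool(np.all((lo_a <= hi_b) & (lo_b <= hi_a)))
```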
NN Robustness for Visually Similar Corruption Functions. Because of central differences between humans and NNs, e.g., in computational mechanisms, it is intuitive that NNs might react completely differently to corruptions that are visually similar for humans, and using VCR, we can empirically analyze such differences. For example, during deployment, noise with an unknown distribution (e.g., Uniform, Gaussian, or Poisson) can be encountered. While the noise distribution does not affect humans, as we show in Fig. 5, NNs that are particularly susceptible to a certain distribution might raise safety concerns. For example, the two visually similar transformations Gaussian Noise and Uniform Noise add noise to the images drawn from the Gaussian and the Uniform distribution, respectively.

However, our results in Fig. 3 and Fig. 4 suggest that the distribution difference is picked up by NNs. We can observe that most models have higher HMRI and MRSI values for Uniform Noise than for Gaussian Noise. For small amounts of corruption (\(\Delta_v < 0.8\)), the difference between the estimated \(s_a\) and \(s_p\) curves for the two corruptions is not statistically significant, i.e., NN models perform similarly when facing small amounts of Uniform and Gaussian Noise. For \(\Delta_v\) values in \([0.8, 1.0]\), most visual information required for humans to recognize objects is corrupted by the noise and human performance decreases quickly, but the most robust models, e.g., DINOV2_GIANT and TIAN_DEIT-S, are able to pick up more information than humans and recognize objects reasonably well. When the added noise is drawn from a uniform distribution, NN models perform better than when it is drawn from a Gaussian distribution. Therefore, studying VCR also allows us to empirically analyze how changing the noise distribution, which would not affect humans, affects NN performance at different degrees of corruption. In the case of unknown or shifting distributions, such analysis would require human data for all distributions, which is impractical and expensive. Identifying classes of visually similar corruption functions and reusing human data would significantly reduce the experiment costs.

Identifying Visually Similar Transformations. We provide a naive method for identifying classes of visually similar corruptions. To identify whether two corruptions are similar enough to reuse human data, the goal is to determine whether the difference between them is distinguishable to a human. This can be done through a set of relatively inexpensive experiments. Without knowing the specific corruptions introduced to the images, participants are shown corrupted images and asked whether the presented images are corrupted with the same corruption function. The presented images can be corrupted with the same or different corruption functions. Then, by repeating the experiments with different sampled images, the accuracy of distinguishing the corruptions can be calculated. We hypothesize that if the corruption functions are indistinguishable, human accuracy should be close to random. Since each experiment either succeeds or fails at distinguishing the pair, we use a binomial test to check whether the accuracy is statistically significantly different from random. The visually similar transformations identified in this paper can be detected with this naive method. Our experiments showed that for each pair of transformations, results with statistical significance can be reached in less than a minute. Compared to the full set of experiments with 2,000 images and five different participants for each experiment, identifying similar transformations significantly decreased the experiment time, from approximately 5.55 hours to 5 minutes.
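The binomial test at the core of this naive method can be sketched as follows (our illustration; the function name is hypothetical, and `scipy.stats.binomtest` checks for a departure from the 50% chance level of the same/different task):

```python
from scipy.stats import binomtest

def visually_similar(n_correct, n_trials, alpha=0.05):
    """H0: participants answer at chance (p = 0.5). Failing to reject H0
    suggests the two corruption functions are visually similar."""
    result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
    return result.pvalue >= alpha
```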
Limitation: Note that the results of this method can be highly dependent on the opinions of the participants; thus, it is preferable to select participants with normal eyesight and a basic knowledge of image corruptions. We acknowledge that this naive method cannot give the most accurate identification of visually similar transformations. For example, it is reasonable to assume that two transformations can have very different visual effects but still affect human robustness in the same way, and this case would not be detected with this method.

Nevertheless, we hope that our findings will promote future investigations of how NNs and humans react differently to corruptions.

6 RELATED WORK

We briefly review related work on the comparison of human and NN robustness; a more extensive review of related work on robustness can be found in the appendix. Prior studies have used human performance to study the existing differences between humans and neural networks (Firestone, 2020; Zhang et al., 2018c), to study invariant transformations (Kheradpisheh et al., 2016), to compare recognition accuracy (Ho-Phuoc, 2018; Stallkamp et al., 2012), to compare robustness against image transformations (Geirhos et al., 2019a; 2021), or to specify expected model behaviour (Hu et al., 2022). The main difference between our study and existing work, specifically the most recent study by Geirhos et al. (2021), is three-fold: 1) we are the first to quantify robustness across the full continuous visual corruption range, thus revealing a previously undetected robustness gap; 2) our experiments for obtaining human performance are designed to include more participants for measuring the average human robustness, resulting in more generalizable results and a reduced influence of outliers; 3) we identified transformations that are visually similar for humans but not for NNs, potentially reducing experiment costs.

7 CONCLUSION

In this paper, we revisit corruption robustness to consider it in relation to the wide and continuous range of corruptions to human perceptive quality, defining visually-continuous corruption robustness (VCR), along with two novel human-aware metrics for NN evaluation. Our results showed that the robustness gap between humans and NNs is bigger than previously detected, especially for blur corruptions. We found that using the full and continuous range of visual change is necessary when estimating robustness, as insufficient coverage can lead to biased results. We also discovered classes of image corruptions that affect human perception similarly; identifying them can help reduce the cost of measuring human robustness and assessing disparities between human perception and computational models. In our study, we only considered the comparison of object recognition between humans and NNs; however, human and machine vision can be compared in many different ways, e.g., against neural data (Yamins et al., 2014; Kubilius et al., 2019), contrasting Gestalt effects (Kim et al., 2019), object similarity judgments (Hebart et al., 2020), or mid-level properties (Storrs et al., 2021). Nevertheless, our results give indicators for future robustness studies, and to promote further research, we provide our benchmark datasets with human performance data and our code as open source.

REFERENCES

Alexander Buslaev, Vladimir I. Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin, and Alexandr A. Kalinin. Albumentations: Fast and Flexible Image Augmentations. *Information*, 11(2), 2020. ISSN 2078-2489. doi: 10.3390/info11020125. Licensed with MIT License. To view a copy of this license see https://github.com/albumentations-team/albumentations/blob/master/LICENSE.

Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, and Aniruddha Kembhavi. RobustNav: Towards Benchmarking Robustness in Embodied Navigation. In *2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021*, pp. 15671–15680. IEEE, 2021. doi: 10.1109/ICCV48922.2021.01540.
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo DeBenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. RobustBench: A Standardized Adversarial Robustness Benchmark. In *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual*, 2021. URL https://robustbench.github.io/. Licensed with MIT license. To view a copy of this license see https://github.com/RobustBench/robustbench/blob/master/LICENSE. Keyan Ding, Kede Ma, Shiqi Wang, and Eero P. Simoncelli. Image quality assessment: Unifying structure and texture similarity. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(5):2567–2581, 2022. doi: 10.1109/TPAMI.2020.3045810. N. Benjamin Erichson, Soon Hoe Lim, Winnie Xu, Francisco Utrera, Ziang Cao, and Michael W. Mahoney. NoisyMix: Boosting Model Robustness to Common Corruptions, 2022. Chaz Firestone. Performance vs. Competence in Human–Machine Comparisons. *Proceedings of the National Academy of Sciences*, 117(43):26562–26571, 2020. Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision Meets Robotics: The KITTI Dataset. *Int. J. of Robotics Research (IJRR)*, 2013. R Geirhos, CR Medina Temme, J Rauber, HH Schütt, M Bethge, and FA Wichmann. Generalisation in Humans and Deep Neural Networks. In *NeurIPS 2018*, pp. 7549–7561. Curran, 2019a. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness. In *7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019*, 2019b. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. Shortcut learning in deep neural networks. *Nat. Mach. Intell.*, 2(11):665–673, 2020. doi: 10.1038/s42256-020-00257-z. Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Partial success in closing the gap between human and machine vision. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual*, pp. 23885–23899, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/c8877cff22082a16395a57e97232bb6f-Abstract.html. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, 2016. Martin N Hebart, Charles Y Zheng, Francisco Pereira, and Chris I Baker. Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. *Nature human behaviour*, 4(11):1173–1185, 2020. Dan Hendrycks and Thomas Dietterich. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. *Proceedings of the International Conference on Learning Representations*, 2019. URL https://github.com/hendrycks/robustness. Licensed with Apache-2.0 license. To view a copy of this license see https://github.com/hendrycks/robustness/blob/master/LICENSE.
ikdB0VXPlw
Regarding the experiment, the proposed method achieves competitive results on KIT in terms of FID, but not on other metrics or on HumanML3D. It would be helpful to discuss why this is the case. It is perfectly fine not to attain SOTA on everything, but studying the limitations can provide the community with key insights.
Motion Flow Matching for Efficient Human Motion Synthesis and Editing

Anonymous authors
Paper under double-blind review

Abstract

Human motion synthesis is a fundamental task in the field of computer animation. Recent methods based on diffusion models or GPT structures demonstrate commendable performance but exhibit drawbacks in terms of slow sampling speed or the accumulation of errors. In this paper, we propose Motion Flow Matching, a novel generative model designed for human motion generation featuring efficient sampling and effectiveness in motion editing applications. Our method reduces the sampling complexity from 1000 steps in previous diffusion models to just 10 steps, while achieving comparable performance on text-to-motion and action-to-motion generation benchmarks. Notably, our approach establishes a new state-of-the-art Fréchet Inception Distance result on the KIT-ML dataset. Moreover, we tailor a straightforward motion editing paradigm named trajectory rewriting that leverages ODE-style generative models, and apply it to various editing scenarios including motion prediction, motion in-between prediction, motion interpolation, and upper-body editing.

1 Introduction

Human motion generation (Guo et al., 2022a; Zhu et al., 2023) constitutes a foundational task in computer animation with diverse applications spanning computer graphics, human-computer interaction, and robotics. In contrast to unconditional motion generation (Petrovich et al., 2021; Raab et al., 2023), recent endeavors have focused on introducing different conditions for enhanced controllability, such as action name (Tevet et al., 2023), text (Chen et al., 2023; Jiang et al., 2023), audio (Yi et al., 2023), and scene (Zhang et al., 2020) inputs. On the modeling front, contemporary human motion generation primarily relies on two dominant paradigms: auto-regressive methods operating in discrete spaces (Zhang et al., 2023a) and non-auto-regressive approaches grounded in diffusion models (Chen et al., 2023; Tevet et al., 2023; Zhang et al., 2022). The former often accumulates errors and demands iterative frame generation, resulting in time-intensive processes. In contrast, the latter offers stability, efficient training, and seamless integration of guidance signals but is hindered by slow sampling speeds (Salimans & Ho, 2022). Quoting Chen et al. (2023): "a typical diffusion-based method MDM (Tevet et al., 2023) requires 24.74 seconds for average inference and up to a minute for maximum inference on a single V100." Although various acceleration techniques have been explored, they have yet to fundamentally alter the intrinsically curved trajectories of diffusion models (Ho et al., 2020; Song et al., 2021b).

Recently, a novel generative model known as flow matching (Lipman et al., 2023; Liu et al., 2023b; Albergo & Vanden-Eijnden, 2023; Neklyudov et al., 2023) has garnered significant attention. This model is particularly effective at preserving straight trajectories during the generation process by an ODE solver. It achieves this by regressing the linearly interpolated vector field during training. This makes it a promising alternative for addressing the trajectory-related challenges commonly encountered in diffusion models. Although flow matching has been explored in diverse domains including video (Davtyan et al., 2023), audio (Le et al., 2023), and point clouds (Wu et al., 2023), its application in the context of motion generation remains relatively unexplored.
In this paper, we introduce the flow matching model to the task of human motion generation; remarkably, we achieve performance equivalent to previous methods that required 1000 sampling steps, but with a significantly reduced budget of only 10 timesteps (see Figure 1). Moreover, recent advancements in generative models have introduced techniques for data editing and imputation, enabling the modification and restoration of data while preserving data distributions. A typical approach for image inpainting, referred to as "replacement" in Song et al. (2021b); Ho et al. (2022), leverages the equivalence of the forward and backward passes within diffusion models to align the sampling process with known data segments and generate the unknown portions correspondingly. Nevertheless, a similar method has remained unexplored within the context of flow matching. In human motion generation, we utilize flow matching's straight-trajectory property to align known motion segments with the trajectory while ensuring consistency in generating unknown motions. We provide a motion prefix, and sometimes also a suffix, to guide our model under textual conditions towards specific motion generation that preserves consistency. Additionally, we perform inpainting in the joint space, enabling semantic editing of body parts without affecting others.

Our contributions encompass three key aspects. First, we introduce the Motion Flow Matching model, a straightforward flow matching-based generative model for human motion generation. Our model strikes a favorable balance between generation quality and sampling steps across various tasks, including text-to-motion and action-to-motion generation. To the best of our knowledge, this is the first application of flow matching to human motion generation. Second, we introduce a simple training-free editing method named "trajectory rewriting", facilitating editing capabilities based on flow matching that have not previously been explored in the literature. These editing techniques are versatile and well-suited for in-between motion editing, upper-body manipulation, as well as motion interpolation tasks. Lastly, our experimental outcomes establish state-of-the-art Fréchet Inception Distance (FID) performance on the KIT dataset with a minimal number of sampling steps.

2 RELATED WORK

2.1 DIFFUSION AND FLOW-BASED GENERATIVE MODELS

Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b) have found broad applications in computer vision, spanning image (Rombach et al., 2022), audio (Liu et al., 2023a), video (Ho et al., 2022; Blattmann et al., 2023), and point cloud generation (Luo & Hu, 2021). Although they achieve high generation fidelity, they do so at the cost of sampling speed, usually demanding thousands of sampling steps. Hence, several works propose more efficient sampling techniques for diffusion models, including distillation (Salimans & Ho, 2022; Song et al., 2023), noise schedule design (Kingma et al., 2021; Nichol & Dhariwal, 2021; Preechakul et al., 2022), and training-free sampling (Song et al., 2021a; Karras et al., 2022; Lu et al., 2022; Liu et al., 2022).
Nonetheless, it is important to highlight that existing methods have not fully addressed the curved-trajectory nature of diffusion models at its root, as their forward pass is inherently designed to exhibit curvature in the SDE, following either a linear variance schedule (Ho et al., 2020) or a cosine schedule (Nichol & Dhariwal, 2021). A recent entrant, known as flow matching (Lipman et al., 2023; Liu et al., 2023b; Albergo & Vanden-Eijnden, 2023; Neklyudov et al., 2023), has gained prominence for its ability to maintain straight trajectories during generation by an ODE solver, positioning it as an apt alternative for addressing trajectory-related issues encountered in diffusion models. The versatility of flow matching has been showcased across various domains, including images (Lipman et al., 2023), video (Davtyan et al., 2023), audio (Le et al., 2023), point clouds (Wu et al., 2023), and Riemannian manifolds (Chen & Lipman, 2023). This underscores its capacity to address the inherent trajectory challenges associated with diffusion models, aligning naturally with the slow-sampling limitations of current diffusion-based motion generation solutions.

Meanwhile, exploration of training-free editing techniques has been a significant focus within unconditional diffusion models, such as SDEdit (Meng et al., 2021) and ILVR (Choi et al., 2021), all of which rely on generative priors. Similarly, in the realm of conditional diffusion models, methods relying on cross-attention (Hertz et al., 2022; Mokady et al., 2023) have been employed. However, it is worth noting that these approaches predominantly center around SDE-based methods. Conversely, ODE-based editing, particularly within the context of newly proposed flow matching models, remains relatively underexplored. This serves as a compelling motivation for our investigation into human motion synthesis.

2.2 Human Motion Generation

Human motion synthesis involves generating diverse and realistic human-like motion. The data can be represented using either keypoint-based (Zhang et al., 2021; Zanfir et al., 2021; Ma et al., 2023) or rotation-based (Loper et al., 2023; Pavlakos et al., 2019) representations. In this paper, we choose the rotation-based representation due to its representational efficiency resulting from the inductive bias of the human kinematic tree. In addition to unconditional motion generation, conditional inputs are used, such as text (Petrovich et al., 2022; Zhang et al., 2022; Tevet et al., 2023; Guo et al., 2022b; Jiang et al., 2023), action (Petrovich et al., 2021; Guo et al., 2020; Tevet et al., 2023; Chen et al., 2023), and incomplete motion (Ma et al., 2022; Tevet et al., 2023). In this work, we mainly explore the text condition as the most informative and user-applicable medium. MDM (Tevet et al., 2023) proposes a diffusion-based generative model (Ho et al., 2020) separately trained on several motion tasks. MotionGPT (Jiang et al., 2023) presents a principled approach to the interaction between motion and language. ReMoDiffuse (Zhang et al., 2023b) explores motion generation from a retrieval perspective, drawing inspiration from Blattmann et al. (2022). T2M-GPT (Zhang et al., 2023a) investigates a generative framework based on VQ-VAE and a Generative Pre-trained Transformer (GPT) for motion generation. MLD (Chen et al., 2023) advances the latent diffusion model (Rombach et al., 2022) to generate motions based on different conditional inputs.
Our work introduces a novel model into human motion synthesis, with a primary objective of efficient sampling. Additionally, the motion completion task generates motions given partial inputs, such as classical motion prediction (Zhang et al., 2021; Ma et al., 2022) or motion in-between (Tevet et al., 2023). Prior research efforts have primarily explored either SDE-style diffusion models (Tevet et al., 2023) or classical distribution alignment techniques (Ma et al., 2022). In contrast, our paper takes a novel approach by examining it from the perspective of the ODE sampling process.

3 Method

In this section, we delve into the background of the motion generation task and the novel generative model. We also introduce the framework, as illustrated in Figure 2. Finally, we present our training-free trajectory rewriting technique, customized for this generative model.

3.1 Motion Flow Matching

Our primary goal is to synthesize a human motion \( x \), given a condition \( c \) such as a natural language description or an action label. The human motion is formed as a sequence of human poses \( x = \{x_i\}_{i=1}^M \), where the pose in each frame \( x_i \) is represented by the 3D positions or rotations of joints. To achieve this, we build a generative model \( f \) parametrized by \( \theta \) to synthesize the motion \( x = f(z, c; \theta) \), given \( z \) as a sampled Gaussian noise vector.

Flow matching generation. The generative model \( f(\cdot) \) can be expressed using either auto-regressive (Zhang et al., 2023a) or non-autoregressive models (Tevet et al., 2023). However, both approaches face challenges, such as error accumulation (Gong et al., 2022) or slow sampling speed. To alleviate those issues, we adopt a new generative model called flow matching. Given a set of samples from an unknown data distribution \( q(x) \), the goal is to learn a flow that pushes the simple prior density \( p_0(x) = \mathcal{N}(x \mid 0, I) \) towards a more complicated distribution \( p_1(x) \approx q(x) \) along the probability path \( p_t(x) \). Formally, this is denoted using the push-forward operation as \( p_t = [\phi_t]_* p_0 \). Following this definition, the motion data \( x \) is represented as \( x_{t=1} \) or \( x_1 \), while the noise vector \( z \) that generates this motion is denoted as \( x_{t=0} \) or \( x_0 \). The time-dependent flow can be constructed via a vector field \( v(x, t) : \mathbb{R}^d \times [0, 1] \rightarrow \mathbb{R}^d \) that defines the flow via the neural ordinary differential equation (ODE):

\[ \dot{\phi}_t(x) = v(\phi_t(x), t), \quad \phi_0(x) = x. \] (1)

Figure 2: Our Motion Flow Matching framework for human motion synthesis and editing. **Synthesis:** starting with a motion feature \( x_1 \) and a randomly sampled prior Gaussian \( x_0 \), we gradually corrupt the motion feature using simple interpolation: \( x_t = tx_1 + (1-t)x_0 \) (omitting \( \sigma_{\text{min}} \) for simplicity). **Editing:** in editing tasks, we aim to synthesize unknown dimensions in the motion representation while keeping the known components. We sample a Gaussian vector \( x_0 \) and apply trajectory rewriting to the known part of the motion as \( [x_t]_{\text{known}} = (1-t)[x_0]_{\text{known}} + t[x_1]_{\text{known}} \), while running an ODE solver to adapt the trajectory of the unknown part. As such, the flow \( v(x_t, t) \) is continuously updated with the partially rewritten \( x_t \), enabling motion editing in various scenarios.
Given a predefined probability density path \( p_t(x) \) and the corresponding vector field \( w_t(x) \), one can parameterize \( v(x_t, t) \) with a neural network, parameterized by \( \theta \), and solve

\[ \min_{\theta} \mathbb{E}_{t, p_t(x)} \| v(x_t, t; \theta) - w_t(x) \|^2. \] (2)

**Framework.** Noticeably, directly optimizing Equation (2) is infeasible, because we do not have access to \( w_t(x) \) in closed form. Instead, Lipman et al. (2023); Liu et al. (2023b); Albergo & Vanden-Eijnden (2023); Neklyudov et al. (2023) propose to use the conditional vector field \( w_t(x \mid x_1) \) as the target, which corresponds to the conditional path \( p_t(x \mid x_1) \). Importantly, they show that this new **conditional Flow Matching** objective

\[ \min_{\theta} \mathbb{E}_{t, p_t(x \mid x_1), q(x_1)} \| v(x_t, t; \theta) - w_t(x \mid x_1) \|^2, \] (3)

has the same gradients as Equation (2). By defining the conditional probability path as a linear interpolation between \( p_0 \) and \( p_1 \), all intermediate distributions are Gaussians of the form \( p_t(x \mid x_1) = \mathcal{N}(x \mid t x_1, [1 - (1 - \sigma_{\text{min}})t]^2 I) \), where \( \sigma_{\text{min}} > 0 \) is a small amount of noise around the sample \( x_1 \); samples follow \( x_t = t x_1 + [1 - (1 - \sigma_{\text{min}})t] x_0 \). The corresponding target vector field is:

\[ w_t(x \mid x_1) = \frac{x_1 - (1 - \sigma_{\text{min}}) x}{1 - (1 - \sigma_{\text{min}}) t}. \] (4)

Learning the straight trajectory improves training and sampling efficiency compared to diffusion paths. When we need extra condition signals \( c \), we can directly insert them into the vector field estimator \( v(x_t, t, c; \theta) \). Overall, the flow matching framework generates samples by first sampling \( x_0 \sim \mathcal{N}(x \mid 0, I) \) and then solving Equation (1) with an off-the-shelf numerical ODE solver (Runge, 1895; Kutta, 1901; Alexander, 1990). In the end, we can formulate \( f \) as \( x = f(x_0) = \text{ODESolve}(x_0, c; \theta)_{t:0 \rightarrow 1} \).

**Sampling.** After training the neural velocity field \( v(x_t, t, c; \theta) \), samples are generated through a practical discretization of the ODE process outlined in Equation (1) using an ODE solver. Using the Euler ODE solver as an illustration, this discretization divides the process into \( N \) steps, resulting in the following expression:

\[ x_{(\hat{t}+1)/N} \leftarrow x_{\hat{t}/N} + \frac{1}{N} v(x_{\hat{t}/N}, \hat{t}/N, c; \theta), \] (5)

where the integer time step \( \hat{t} = 0, 1, \ldots, N-1 \) such that \( t = \hat{t}/N \). Additionally, for enhanced sampling efficiency, adaptive step-size ODE solvers (Runge, 1895; Kutta, 1901) can be considered, which can significantly reduce the required computational time.
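To make the training objective and the Euler sampler concrete, below is a minimal PyTorch sketch; `model(x_t, t, c)` stands in for the vector field \( v(x_t, t, c; \theta) \) and is an assumed interface, not the paper's actual implementation.

```python
import torch

def cfm_loss(model, x1, c, sigma_min=1e-4):
    """Conditional flow matching loss (Eq. 3) with the linear path."""
    x0 = torch.randn_like(x1)                      # prior sample x_0 ~ N(0, I)
    t = torch.rand(x1.shape[0], device=x1.device)  # t ~ Uniform(0, 1)
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast over feature dims
    x_t = t_ * x1 + (1 - (1 - sigma_min) * t_) * x0
    target = x1 - (1 - sigma_min) * x0             # Eq. 4 evaluated at x = x_t
    return ((model(x_t, t, c) - target) ** 2).mean()

@torch.no_grad()
def euler_sample(model, shape, c, n_steps=10, device="cpu"):
    """Euler discretization of the ODE (Eq. 5) from noise to motion."""
    x = torch.randn(shape, device=device)
    for i in range(n_steps):
        t = torch.full((shape[0],), i / n_steps, device=device)
        x = x + model(x, t, c) / n_steps
    return x
```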
Given that vector field regression in flow matching emulates the noise prediction used in diffusion models (Zheng et al., 2023), we further investigate the incorporation of classifier-free guidance (Ho & Salimans, 2021) into flow matching. This entails introducing random dropout to the conditional signals. In practice, our network learns both the conditioned and unconditioned distributions by randomly setting \( c = \emptyset \) for 10% of the samples. This configuration effectively causes \( v(x_t, t, \emptyset; \theta) \) to approximate the vector field of the unconditional distribution \( p(x_t) \), signifying that a predominant portion of the network's capacity is dedicated to conditional sampling (90%) rather than unconditional sampling (10%). Subsequently, we conduct the sampling according to the equation:

\[ v_s(x_t, t, c; \theta) = v(x_t, t, \emptyset; \theta) + s \cdot (v(x_t, t, c; \theta) - v(x_t, t, \emptyset; \theta)), \]

where \( s \) indicates the guidance strength that balances the trade-off between diversity and fidelity.

The network. The flow matching model \( \theta \) is founded upon a simple encoder-only architecture based on the Transformer (Vaswani et al., 2017). The Transformer architecture is designed to be temporally aware, facilitating the learning of motions of varying durations. Its efficacy in the motion domain has been empirically substantiated (Tevet et al., 2023; Duan et al., 2021; Aksan et al., 2021). The model inputs \( x_t \), the time-step \( t \), and the condition code \( c \) undergo individual fully connected projections into the Transformer dimension via feed-forward networks. These projections are subsequently aggregated to yield the token \( h_{[x_t, t, c]} \). Each frame of the noisy input data \( x_t \) is linearly projected into the Transformer dimension and summed with a positional embedding. The detailed structure can be found in Appendix Figure 10.

3.2 Motion Editing

Editing task. We mainly explore the following editing operations: 1) motion in-between based on a prefix and suffix in the temporal domain, 2) motion prediction based on a prior prefix motion in the temporal domain, 3) motion interpolation with a gap of frames, and 4) editing body parts in the spatial domain. The editing operations involve only the sampling process at inference, without any additional training steps. In temporal editing (in-between and motion prediction), the input consists of the prefix and suffix frames of the motion sequence. In the spatial setting, we want body parts to be re-synthesized while preserving the rest of the body. Editing can be performed conditionally or unconditionally, with the option to set \( c = \emptyset \) in the latter case.

Motion editing by trajectory rewriting. In diffusion models, a technique known as "replacement" is employed during the sampling process to address data imputation challenges (Ho et al., 2022; Song et al., 2021b). The network gradually corrupts data following either a variance-preserving (VP-SDE) (Ho et al., 2020) or variance-exploding (VE-SDE) approach (Song et al., 2021b). In contrast, flow matching takes a fundamentally different approach: for a given data sample \( x_1 \), the paired prior sample \( x_0 \) is known from the start, and the intermediate sample \( x_t \) is obtained through simple interpolation: \( x_t = tx_1 + [1 - (1 - \sigma_{\text{min}})t]x_0 \). This stands in stark contrast to diffusion models, where the noise endpoint paired with a data sample becomes known only after conducting sufficiently many stochastic forward steps. Editing tasks aim to preserve the known parts during editing while generating the unknown parts in a manner consistent with the known ones. Formally, let \( M \) denote a binary boolean mask with the same dimensions as the motion representation \( x_1 \), such that \( M = 1 \) indicates a known dimension in the given motion \( x_1 \), and \( M = 0 \) otherwise.
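Before the rewriting equations below, a small sketch (our own illustration, assuming a [frames, features] motion layout; the helper name is hypothetical) of how the mask \( M \) can be built for the temporal editing tasks:

```python
import torch

def temporal_mask(n_frames, feat_dim, n_prefix=0, n_suffix=0):
    """M = True marks known dimensions to keep; M = False marks parts to
    synthesize. A prefix alone gives motion prediction; a prefix plus a
    suffix gives motion in-between."""
    m = torch.zeros(n_frames, feat_dim, dtype=torch.bool)
    if n_prefix > 0:
        m[:n_prefix] = True
    if n_suffix > 0:
        m[-n_suffix:] = True
    return m
```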
Utilizing the principle of pursuing a straight trajectory in flow matching, we consistently enforce the known dimensions to lie on the linear interpolation between \( x_0 \) and \( x_1 \) during the sampling steps, while adapting the trajectory of the unknown dimensions from noise. Specifically, in contrast to the standard process for motion synthesis in Equation (5), the sampling process for editing is formalized as:

\[ \tilde{x}'_{i/N} \leftarrow (1 - M) \cdot \tilde{x}_{i/N} + M \cdot \left( \left(1 - \frac{i}{N}\right)x_0 + \frac{i}{N}x_1 \right), \]

\[ \tilde{x}_{(i+1)/N} \leftarrow \tilde{x}'_{i/N} + \frac{1}{N} v\!\left(\tilde{x}'_{i/N}, \frac{i}{N}, c; \theta\right), \]

where \( x_0 \) is a randomly sampled Gaussian noise vector paired with the target data \( x_1 \). The intermediate results are denoted \( \tilde{x}_{i/N} \) to discriminate them from standard sampling in Equation (5), and \( \tilde{x}'_{i/N} \) denotes the manipulated \( \tilde{x}_{i/N} \) during the sampling process.

Algorithm 1 Euler Sampling algorithm with Trajectory Rewriting.

1: **Input:** \( x_1 \) the original motion (or partial data with valid known dimensions); \( M \) the boolean mask indicating known / unknown dimensions in the motion; \( v \) and \( \theta \) the vector field predictor with pretrained parameters
2: **Parameters:** \( N \) the number of sampling steps; \( \zeta \) the threshold at which trajectory rewriting stops.
3: Sample \( x_0 \sim \mathcal{N}(0, 1) \) from the Gaussian distribution, \( \tilde{x}_0 = x_0 \) at \( t = 0 \).
4: **for** \( i = 0, 1, \ldots, N - 1 \) **do**
5: **if** \( \frac{i}{N} < \zeta \) **then**
6: Rewrite \( \tilde{x}'_{i/N} \leftarrow (1 - M) \cdot \tilde{x}_{i/N} + M \cdot \left( (1 - \frac{i}{N})x_0 + \frac{i}{N} x_1 \right) \)
7: \( \tilde{x}_{(i+1)/N} \leftarrow \tilde{x}'_{i/N} + \frac{1}{N} v(\tilde{x}'_{i/N}, \frac{i}{N}, c; \theta) \).
8: **else**
9: \( \tilde{x}_{(i+1)/N} \leftarrow \tilde{x}_{i/N} + \frac{1}{N} v(\tilde{x}_{i/N}, \frac{i}{N}, c; \theta) \).
10: **end if**
11: **end for**
12: **Return:** The motion after editing \( \tilde{x}_{N/N} = \tilde{x}_1 \).
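A compact PyTorch sketch of Algorithm 1 follows; `model` is again an assumed stand-in for \( v(\cdot, \cdot, c; \theta) \), and the entries of `x1` where `mask` is False may be arbitrary, since they are regenerated:

```python
import torch

@torch.no_grad()
def trajectory_rewriting_sample(model, x1, mask, c, n_steps=30, zeta=0.2):
    """Euler sampling with trajectory rewriting (Algorithm 1).
    `mask` is a boolean tensor broadcastable to x1's shape (M in the text)."""
    x0 = torch.randn_like(x1)        # paired Gaussian noise x_0
    x = x0
    for i in range(n_steps):
        t = i / n_steps
        if t < zeta:                 # rewrite known dims onto the straight path
            x = torch.where(mask, (1 - t) * x0 + t * x1, x)
        t_batch = torch.full((x.shape[0],), t, device=x.device)
        x = x + model(x, t_batch, c) / n_steps   # Euler step
    return x                         # edited motion
```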
Table 1: Comparison with state-of-the-art methods on the KIT-ML (Plappert et al., 2016) test set. RP Top3 denotes R-Precision Top3. NFE denotes the number of function evaluations. \( \rightarrow \) indicates that closer to real is better.

| Methods | NFE ↓ | RP Top3 ↑ | FID ↓ | MM-Dist ↓ | Diversity → | MModality ↑ | #params |
|---|---|---|---|---|---|---|---|
| Real motion | - | 0.779 ± .006 | 0.031 ± .004 | 2.788 ± .012 | 11.08 ± .097 | - | - |
| TM2T Guo et al. (2022b) | - | 0.587 ± .005 | 3.599 ± .153 | 4.591 ± .026 | 9.473 ± .117 | 3.292 ± .081 | 317M |
| Guo et al. (2022a) | - | 0.681 ± .007 | 3.022 ± .107 | 3.488 ± .028 | 10.72 ± .145 | 2.052 ± .107 | 181M |
| T2M-GPT | - | 0.716 ± .006 | 0.737 ± .049 | 3.237 ± .027 | 11.198 ± .086 | 2.309 ± .055 | 247.6M |
| MDM Tevet et al. (2023) | 1,000 | 0.396 ± .004 | 0.497 ± .021 | 9.191 ± .022 | 10.847 ± .109 | 1.907 ± .214 | 23M |
| MotionDiffuse | 1,000 | 0.739 ± .004 | 1.954 ± .062 | 2.958 ± .005 | 11.100 ± .143 | 0.730 ± .013 | 238M |
| MLD Chen et al. (2023) | 50 | 0.734 ± .007 | 0.404 ± .027 | 3.204 ± .027 | 10.800 ± .117 | 2.192 ± .071 | 26.9M |
| Our MFM | 10 | 0.414 ± .006 | 0.359 ± .034 | 9.030 ± .043 | 11.310 ± .102 | 1.220 ± .079 | 17.9M |
| Our MFM | 50 | 0.415 ± .006 | 0.193 ± .020 | 9.041 ± .013 | 11.080 ± .108 | 1.490 ± .056 | 17.9M |

In our experiments, we found that editing operations do not need to be employed throughout the entire ODE sampling process. Restricting the trajectory rewriting operation to early time steps suffices to ensure consistent generation, while granting us more flexibility for editing. Specifically, we set \( \zeta \in [0, 1] \) as a threshold such that rewriting is only applied before the step \( t = \frac{i}{N} < \zeta \); empirically we set \( \zeta = 0.2 \) throughout this work. The complete trajectory rewriting process for Euler sampling is shown in Algorithm 1. By the property of a straight trajectory, the completed unknown part of the motion exhibits the desired marginal distribution by design, and naturally aligns with the known part thanks to the fully optimized vector field estimator.

4 EXPERIMENT

4.1 DATASETS AND EXPERIMENTAL DETAILS

Our experimental evaluations are conducted on three established datasets commonly employed for human motion generation tasks: HumanML3D (Guo et al., 2022a) and KIT Motion-Language (KIT-ML) (Plappert et al., 2016) for text-to-motion generation, and an additional action-to-motion generation dataset, HumanAct12 (Guo et al., 2020). We adhere to the evaluation protocols outlined in Guo et al. (2022a). We opt for a motion representation following Guo et al. (2022a) for its effectiveness in encoding the motion kinematics; more details are introduced in Appendix E.2. Similar to Guo et al. (2022a), the KIT-ML and HumanML3D datasets are extracted into motion features with dimensions 251 and 263, respectively, which correspond to local joint positions, velocities, and rotations in root space, as well as global translation and rotation. These features are computed from the 21 and 22 joints of SMPL (Loper et al., 2023).

Our evaluation metrics encompass five key aspects. 1) We report the number of function evaluations (NFE), denoting the average number of network forward passes. 2) To assess the parameter efficiency of the models, we report their number of parameters. 3) For motion quality, we rely on the Fréchet Inception Distance (FID), utilizing a feature extractor (Guo et al., 2022a) to measure the distance between the feature distributions of generated and real motions. 4) To gauge generation diversity, we employ the Diversity metric, which quantifies motion diversity by calculating the variance of features extracted from the motions, along with MultiModality (MModality) for assessing diversity within generated motions under the same text description. 5) In terms of text alignment, we utilize the motion-retrieval precision (R-Precision) to evaluate the accuracy of matching texts and motions using Top3 retrieval accuracy, while the Multi-modal Distance (MM-Dist) measures the distance between motions and texts, all based on the feature spaces from Guo et al. (2022a).
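For reference, FID over motion features can be computed with the standard Fréchet formula; this is a generic sketch under that assumption, not the exact evaluation suite of Guo et al. (2022a):

```python
import numpy as np
from scipy import linalg

def fid(feats_gen, feats_real):
    """Frechet distance between Gaussians fit to [n, d] feature arrays."""
    mu1, mu2 = feats_gen.mean(0), feats_real.mean(0)
    s1 = np.cov(feats_gen, rowvar=False)
    s2 = np.cov(feats_real, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))
```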
Table 2: Comparison with state-of-the-art methods on the HumanML3D (Guo et al., 2022a) test set. RP Top3 denotes R-Precision Top3. NFE denotes the number of function evaluations. → indicates that closer to real is better.

| Methods | NFE ↓ | RP Top3 ↑ | FID ↓ | MM-Dist ↓ | Diversity → | MModality ↑ | #params |
|---|---|---|---|---|---|---|---|
| Real motion | - | 0.797 ± .002 | 0.002 ± .000 | 2.974 ± .008 | 9.503 ± .065 | - | - |
| TM2T Guo et al. (2022b) | - | 0.729 ± .002 | 1.501 ± .017 | 3.467 ± .011 | 8.589 ± .076 | 2.424 ± .093 | 317M |
| Guo et al. (2022a) | - | 0.736 ± .002 | 1.087 ± .021 | 3.347 ± .008 | 9.175 ± .083 | 2.219 ± .074 | 181M |
| T2M-GPT | - | 0.685 ± .003 | 0.140 ± .006 | 3.730 ± .009 | 9.844 ± .095 | 3.285 ± .070 | 247.6M |
| MotionGPT | - | 0.778 ± .002 | 0.232 ± .008 | - | 9.520 ± .071 | 2.008 ± .084 | 220M |
| MDM Tevet et al. (2023) | 1,000 | 0.611 ± .007 | 0.544 ± .044 | 5.566 ± .027 | 9.559 ± .086 | 2.799 ± .072 | 23M |
| MotionDiffuse | 1,000 | 0.782 ± .001 | 0.630 ± .001 | 3.113 ± .001 | 9.410 ± .049 | 1.553 ± .042 | 238M |
| MLD Chen et al. (2023) | 50 | 0.772 ± .002 | 0.473 ± .013 | 3.196 ± .000 | 9.724 ± .082 | 2.413 ± .070 | 26.9M |
| Our MFM | 10 | 0.642 ± .003 | 0.362 ± .006 | 5.280 ± .009 | 9.860 ± .095 | 2.443 ± .070 | 17.9M |

Table 3: Evaluation of editing by trajectory rewriting. We randomly sample 5,000 motions for editing and compare the edited results with the ground truth. ADE and FDE are joint distances between the generation and the ground truth.

| Methods | Prediction FID ↓ | Prediction ADE ↓ | Prediction FDE ↓ | Upper body FID ↓ | Upper body ADE ↓ | In-between FID ↓ | In-between ADE ↓ |
|---|---|---|---|---|---|---|---|
| MDM Tevet et al. (2023) | 7.34 | 5.90 | 7.50 | 8.40 | 5.40 | 3.43 | 4.73 |
| Our MFM | 5.79 | 4.99 | 5.50 | 6.46 | 4.12 | 2.59 | 3.32 |

We use the AdamW (Loshchilov & Hutter, 2019) optimizer with \([\beta_1, \beta_2] = [0.9, 0.999]\) and a batch size of 256, training with a learning rate of \(1e^{-4}\). We employ \( N = 10 \) timesteps for Euler ODE sampling. For trajectory rewriting, however, we opt for a slightly larger value of \( N = 30 \) but restrict the editing to time steps with \( t < 0.2 \), effectively modifying only the first 6 timesteps. More details about the implementation and evaluation metrics are provided in the Appendix.

4.2 Main Result

Text-to-Motion. For text-to-motion generation, we present our results on the KIT dataset in Table 1 and on the HumanML3D dataset in Table 2. Our approach attains state-of-the-art FID performance on the KIT dataset while requiring minimal function evaluations and a modest number of parameters. These tables clearly highlight the favorable balance between sampling steps (NFE) and generation performance achieved by our method. Notably, GPT-based methods such as T2M-GPT (Zhang et al., 2023a) and MotionGPT (Jiang et al., 2023), which rely on token prediction, tend to require a number of network forward evaluations (NFEs) equal to the number of tokens, compared to the 10 NFEs used by our approach. This suggests that our method may be more computationally efficient in terms of the NFEs required for motion generation.

In Figure 3, we offer a qualitative comparison with three baseline methods: MDM (Tevet et al., 2023), MLD (Chen et al., 2023), and T2M-GPT (Zhang et al., 2023a), where we use the same guidance strength \( s = 2.5 \). Our results exhibit enhanced capabilities in capturing the nuanced motion dynamics described by the input prompts compared to these baseline approaches. For additional qualitative results in text-to-motion synthesis, please refer to Appendix Figure 8.

Sampling steps. In Figure 1, we investigate the relationship between the number of sampling steps and FID (Fréchet Inception Distance) on the KIT-ML dataset using the following baselines: 1) MDM (Tevet et al., 2023) with DDPM sampling; 2)
MDM with DDIM sampling (Song et al., 2021a); and 3) MLD (Chen et al., 2023) with DDIM sampling. MDM fails to achieve a reasonable FID due to its design. Our method converges to a lower FID at the same number of sampling steps. Our approach is also significantly faster than MLD (Chen et al., 2023) and MDM (Tevet et al., 2023), being approximately 5 times faster than MLD and 100 times faster than MDM. It is worth noting that our Transformer-based architecture can be further optimized using FlashAttention (Dao et al., 2022).

Figure 3: **Qualitative comparison with baselines.** The generated motion is represented by the bronze frames. Please take note of the dotted red rectangle; occasionally, a baseline may struggle to discern the guidance from the prompt, resulting in less fluid motion.

**Action-to-Motion.** We further showcase our action-to-motion generation results in Appendix Table 4. Our method consistently achieves results on par with the baselines, while requiring significantly fewer function evaluations and offering better parameter efficiency, underscoring the efficacy of our approach.

### 4.3 Motion Editing by Trajectory Rewriting

**Qualitative result.** In Figure 4, we find that editing operations can effectively take place within the first 0.2 time steps rather than the full 1.0 time steps. Furthermore, we provide a demonstration of the \( x_1 \) estimation at intervals of 0.1 time steps. Remarkably, we observe that even the initial \( x_t \) can yield reasonably accurate estimations of \( x_1 \). As time progresses, these estimations gradually align more closely with the provided prompt, eventually stabilizing around \( t = 0.2 \). This phenomenon underscores the straight-trajectory characteristic of our model. Moreover, we investigate several editing operations including in-between, future prediction, upper-body, and interpolation editing, as illustrated in Figure 5. We examine altering the text prompt while retaining the known part, serving as a test of whether our generative model can consistently produce motion that aligns with the preserved known segment.

**Quantitative result.** In our comparison with the baseline method MDM (Tevet et al., 2023), as illustrated in Table 3, we evaluate our trajectory rewriting approach through identical editing operations, revealing its slight performance superiority across the FID, Average Displacement Error (ADE), and Final Displacement Error (FDE) metrics commonly used in motion prediction studies (Zhang et al., 2021). More details can be found in Appendix D.

## 5 Conclusion

In this work, we have introduced a straightforward yet highly effective generative model, flow matching, to the realm of human motion synthesis. Our results demonstrate a remarkable balance between generation fidelity and sampling steps. Leveraging the inherent property of straight trajectories, we have devised a simple trajectory rewriting technique for training-free editing. In future work, we intend to extend this trajectory rewriting technique to other domains, such as image editing.

Figure 4: Above: a comparison of editing times for the motion prediction task. Rewriting the trajectory up to \( t = 0.2 \) achieves nearly identical performance compared to rewriting up to \( t = 1.0 \). Below: motion prediction, showing the estimation of \( x_1 \) during trajectory rewriting from \( t = 0.0 \) to \( t = 1.0 \). It is worth noting that from the very first time steps, the model can already produce reasonably accurate motion.
Figure 4: Above: a comparison of editing times for the motion prediction task. Rewriting the trajectory up to $t = 0.2$ achieves nearly identical performance to rewriting up to $t = 1.0$. Below: motion prediction, i.e., estimation of $x_1$ during trajectory rewriting from $t = 0.0$ to $t = 1.0$. It is worth noting that from the very first time steps, the model can already produce reasonably accurate motion. Furthermore, as time progresses, the estimate of $x_1$ gradually aligns more closely with the provided prompt. Remarkably, by time step $t = 0.2$, the generated human motion exhibits a high level of alignment with the prompt. Light blue frames denote motion input, while bronze frames signify generated motion. The gradient of colors, ranging from light to dark, signifies the passage of time.

Figure 5: Motion editing by trajectory rewriting. We showcase four key editing scenarios in human motion generation: 1) in-between editing, 2) motion prediction from a partial sequence, 3) upper-body editing while keeping the lower-body joints fixed, and 4) interpolating missing motion frames using specified motions. Additionally, we rotate the views to provide a more comprehensive global perspective.

## 5 Conclusion

In this work, we have introduced a straightforward yet highly effective generative model, Flow Matching, to the realm of human motion synthesis. Our results demonstrate a remarkable balance between generation fidelity and sampling steps. Leveraging the inherent property of straight trajectories, we have devised a simple trajectory rewriting technique for training-free editing. In future work, we intend to extend this trajectory rewriting technique to other domains, such as image editing.

REFERENCES

Emre Aksan, Manuel Kaufmann, Peng Cao, and Otmar Hilliges. A spatio-temporal transformer for 3d human motion prediction. In *3DV*, 2021.

Michael S Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants. In *ICLR*, 2023.

Roger Alexander. Solving ordinary differential equations i: Nonstiff problems (e. hairer, sp norsett, and g. wanner). *Siam Review*, 1990.

Andreas Blattmann, Robin Rombach, Kaan Oktay, and Björn Ommer. Retrieval-augmented diffusion models. In *NeurIPS*, 2022.

Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In *CVPR*, 2023.

Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, and Koichi Shinoda. Implicit neural representations for variable length human motion generation. In *ECCV*, 2022.

Ricky TQ Chen and Yaron Lipman. Riemannian flow matching on general geometries. *arXiv*, 2023.

Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In *CVPR*, 2023.

Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. In *ICCV*, 2021.

Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. In *NeurIPS*, 2022.

Aram Davtyan, Sepehr Sameni, and Paolo Favaro. Efficient video prediction via sparsely conditioned flow matching. In *ICCV*, 2023.

Yinglin Duan, Tianyang Shi, Zhengxia Zou, Yenan Lin, Zhehui Qian, Bohan Zhang, and Yi Yuan. Single-shot motion completion with transformer. *arXiv*, 2021.

Dayoung Gong, Joonseok Lee, Manjin Kim, Seong Jong Ha, and Minsu Cho. Future transformer for long-term action anticipation. In *CVPR*, 2022.

Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and Li Cheng. Action2motion: Conditioned generation of 3d human motions. In *ACM-MM*, 2020.

Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In *CVPR*, 2022a.

Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In *ECCV*, 2022b.

Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. *arXiv*, 2022.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In *NeurIPS Workshop*, 2021.

Jonathan Ho, Ajay Jain, and Pieter Abbeel.
Denoising diffusion probabilistic models. In *NeurIPS*, 2020. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. In *arXiv*, 2022. Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a foreign language. In *NeurIPS*, 2023. Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In *NeurIPS*, 2022.
s6X3s3rBPW
Or, if we are assessing Subject Knowledge, can't that be done by MCQA, which doesn't require expert annotation once the benchmark is created? If we are assessing programming, can't we check that with pre-specified unit tests? Indeed, the appendix shows that the unidentifiable datasets used in this paper are multiple choice, as opposed to free response. In these cases, how many forward passes through the model are we actually saved by CAT, and is that substantial? I can't tell from the paper. But I don't think it works to motivate a paper by saying
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective

Anonymous authors
Paper under double-blind review

Abstract

Large language models (LLMs), like ChatGPT, have shown human-level cognitive ability. Benchmarks from various fields (e.g., Literature, Biology, and Psychology) are often used to measure an LLM's ability, reporting standard metrics such as accuracy, recall, and F1. However, such methods for evaluating LLMs can be inefficient and inaccurate from the perspective of cognitive science. Inspired by Computerized Adaptive Testing (CAT) used in psychometrics, we propose an adaptive testing framework for LLM evaluation. Rather than using a standard test set and simply reporting accuracy, this approach dynamically adjusts the characteristics of the test questions, such as difficulty, based on the model's performance. This allows for a more accurate estimation of the model's abilities using fewer questions. More importantly, it allows LLMs to be compared with humans easily, which is essential for NLP models that aim for human-level ability. Our diagnostic reports show that ChatGPT often behaves like a "careless student", prone to slipping and occasionally guessing on questions. We conduct a fine-grained diagnosis and rank 6 commercial instruction-tuned LLMs on three aspects: Subject Knowledge, Mathematical Reasoning, and Programming, where GPT4 significantly outperforms the other models and reaches the cognitive ability of middle-level students. Different tests for different models using efficient adaptive testing: we believe this will become the new norm in large language model evaluation.

1 Introduction

In recent months, large language models (LLMs) have subverted people's perception of NLP models with their powerful capabilities. To fully understand them, an increasing number of researchers have focused their efforts on evaluating their abilities in various aspects. In addition to traditional NLP benchmarks, LLMs have shown incredible human-level performance in writing, examinations, programming, etc. (OpenAI, 2023a). We believe this is just the tip of the iceberg of their latent knowledge.

Recent instruction-tuned LLMs (e.g., ChatGPT) have exhibited human-level ability, so more and more professional and academic exams in various subjects, originally designed for humans, are being used to test them (Figure 1(a)). However, traditional evaluation methods (Qin et al., 2023; Orzechowski & Moore, 2022; Drummond & Japkowicz, 2010; Hernández-Orallo et al., 2021) relying on a fixed exam/benchmark are not efficient, for the following reasons. First, scoring usually requires many experts in the corresponding domain to grade every single response of the LLM, especially for subjective or creative questions. For example, the official GPT4 technical report (OpenAI, 2023a) covers more than 30 academic exams, such as History, Literature, Biology, and Psychology. Although more evaluations are resorting to crowdsourcing or even to LLMs themselves (Li et al., 2023; Tornberg, 2023; Chang et al., 2023), their professionalism, proficiency, and potential biases remain destabilizing factors. Meanwhile, for today's generative NLP models, the inference overhead is far from negligible. Even the older GPT3 needs to generate its response token by token with a 175-billion-parameter model. The recent GPT4 limits the frequency of API requests and charges at least $0.03 per 1K tokens (OpenAI, 2023b), further increasing the overhead of evaluation.
To address these issues, we introduce Computerized Adaptive Testing (CAT) (Linden et al., 2000), a promising testing method widely employed in educational assessment, for the evaluation of LLMs. CAT's primary goal is to measure an examinee's ability accurately while reducing the test length; it has been widely used in various standardized tests (e.g., the GRE and GMAT).

Figure 1: Traditional evaluation method vs adaptive testing. (a) LLMs need to answer the same questions, and many experts are required to score their responses. (b) In adaptive testing, CAT can adaptively select a few best-fitting questions and generate diagnostic reports.

CAT is a sequential and iterative framework that uses the acclaimed Cognitive Diagnosis Model (e.g., Item Response Theory (IRT) (Embretson & Reise, 2013)) from psychometrics to estimate the current ability of the examinee based on their previous responses. Following this, the adaptive question selection algorithm picks the next appropriate/valuable item according to a specific informativeness metric (Lord, 2012; Chang & Ying, 1996; Bi et al., 2020), e.g., selecting the question whose difficulty is closest to the current ability estimate. As such, if CAT perceives an underestimate of the examinee's ability, it will opt for a more challenging question in the next step, and vice versa. Compared to traditional paper-and-pencil tests, CAT has been proven to require fewer questions to achieve the same measurement accuracy (i.e., evaluation efficiency) (Lan et al., 2014; Vie et al., 2017).

Our objective is to establish an adaptive and efficient evaluation framework for LLMs. As illustrated in Figure 1(b), we treat the LLM as a real student and tailor an "exam" to accurately estimate its ability. Compared to traditional evaluation methods (e.g., fixed benchmarks and case studies (Zhuo et al., 2023; Huang et al., 2023)), it provides a scientific solution for measuring the cognitive ability level of LLMs and greatly reduces costs (e.g., labor costs and computational overhead). Our main contributions are as follows:

• We formally introduce CAT into the evaluation of LLMs and propose a practical two-stage adaptive evaluation framework, which enables efficient comparison between models, and between models and humans. Unlike traditional fixed-benchmark evaluation, it requires far fewer questions/samples at the same ability-estimation accuracy.

• Model vs Human: We compare ChatGPT with humans of different ability levels and find that ChatGPT often behaves like a "careless student" who is prone to slipping and occasionally guesses questions. Although there is still a gap with high-ability humans, especially in mathematical reasoning, ChatGPT's programming ability in Dynamic Programming and Search has surpassed that of high-ability college students.

• Model vs Model: We study 6 instruction-tuned LLMs and provide fine-grained diagnostic reports on three aspects: subject knowledge, mathematical reasoning, and programming level. The comparison shows that GPT4 surpasses the other large models by a significant margin.

2 RELATED WORKS

Computerized Adaptive Testing (CAT) is a complex system (Linden et al., 2000) that includes two core algorithms: Item Response Theory (IRT) and the question selection algorithm. At each test step $t \in [1, 2, ..., T]$, these two algorithms work alternately until the stopping rule is met.
When the test stops ($t = T$), the estimated ability of each examinee, $\hat{\theta}^T$, is fed back to them to facilitate future learning, or serves as the basis/result of the assessment. The goal of CAT is to accurately estimate the examinee's true ability $\theta_0$, i.e., $\|\hat{\theta}^T - \theta_0\| \to 0$, while minimizing $T$ (the number of questions asked) (Chang, 2015). The following reviews these two algorithms.

**Item Response Theory.** Item Response Theory (IRT) is built on psychometrics and cognitive science and is used for ability estimation in several state assessments, such as the National Assessment of Educational Progress (Ravitch, 1995) and the OECD/PISA Project (Harlen, 2001). There are many different IRT implementations, the simplest of which is the one-parameter logistic form:
$$\Pr(\text{the response to question } j \text{ is correct}) = \text{sigmoid}(\theta - \beta_j).$$
This model represents the behavior of an examinee with a single latent trait $\theta$, called ability, and each question with a single parameter $\beta$, called difficulty. Note that the characteristics of each question (e.g., difficulty) should be pre-calibrated before CAT by fitting a joint model of human abilities and item characteristics to human response patterns on the test questions (Embretson & Reise, 2013). Although more and more neural-network-based IRT and cognitive diagnosis models (Wang et al., 2020; 2021; Gao et al., 2021) have recently been designed for ability/proficiency estimation, we choose the logistic-form IRT in this paper for its versatility and interpretability. Given its reliability for model evaluation (Rodriguez et al., 2021), IRT itself has been widely used to evaluate NLP systems, e.g., textual entailment recognition (Lalor et al., 2016), chatbots (Sedoc & Ungar, 2020), and machine translation (Hopkins & May, 2013; Otani et al., 2016).

**Selection Algorithms.** The selection algorithm is the core component that realizes CAT's adaptivity: accurately estimating the examinee's ability with the fewest test steps. Commonly, these algorithms are based on uncertainty or information metrics. The most widely used is the Fisher information metric (FSI) (Lord, 2012; Hooker et al., 2009), designed for IRT, which selects the next question so as to minimize the uncertainty/variance of the estimate. Based on FSI, many improved methods (Chang & Ying, 1996; Rudner, 2002; van der Linden, 1998; Zhuang et al., 2022a) have been proposed to introduce additional information into the selection. Recently, active learning and reinforcement learning (RL) have also been used to select important/suitable items from the question bank (Bi et al., 2020; Nurakhmetov, 2019; Li et al., 2020; Ghosh & Lan, 2021; Zhuang et al., 2022b). Taking into account both theoretical guarantees and interpretability, the Fisher method is our first choice for the evaluation of LLMs in this paper.

3 Evaluation Framework for LLMs

In this section, we take ChatGPT as an example to introduce our adaptive evaluation framework for LLMs in detail (Figure 2). Instead of comparing performance on an unseen gold-standard test set, this method uses CAT to (1) compare ChatGPT and humans at the knowledge level and (2) use as few samples as possible. To this end, we evaluate it on educational datasets from three online educational platforms, all of which consist of large-scale student practice logs on different subjects/domains for human-LLM comparison.
In principle, it can be any academic or professional exam (e.g., the SAT, LeetCode, or AP exams). Generally, in the above datasets, we are given $n$ test questions $Q = \{q_1, ..., q_n\}$ and $m$ examinees (LLMs or real human beings) $S = \{s_1, ..., s_m\}$, where each examinee answers some questions in $Q$ and receives binary outcomes $Y = \{0, 1\}$ of correct ($y = 1$) or incorrect ($y = 0$). This yields the response data $D = \{(s_i, q_j, y_{ij}) | s_i \in S, q_j \in Q, y_{ij} \in Y\}$. The detailed two-stage evaluation process is described below.

#### 3.1 Stage 1: Construction of Question Pools

A diverse and high-quality question bank is the basis of adaptive testing (Wang & Vispoel, 1998). Before the formal educational assessment of an LLM begins, we use the question set $Q$ in the above dataset to construct the question pool (Figure 2) by calibrating the characteristics/parameters of all questions in $Q$. To this end, an Item Response Theory (IRT) model is fit to the large-scale response data $D$ to obtain item parameter estimates that support computerized test administration. Previous work (Rodriguez et al., 2021) shows that more sophisticated models are better for evaluating NLP models, so we adopt the three-parameter logistic model (3PL-IRT):
\[ p_j(\theta_i) = \Pr(y_{ij} = 1|\theta_i) = c_j + (1 - c_j) \frac{1}{1 + \exp(-\alpha_j(\theta_i - \beta_j))}, \]
where \( p_j(\theta_i) \) is the probability that examinee \( i \) with ability \( \theta_i \) gives a correct response to question \( j \), and Eq. (2) defines three parameters (difficulty \( \beta_j \), discrimination \( \alpha_j \), and guessing factor \( c_j \)) for each question \( j \). With the response data \( D = \{(s_i, q_j, y_{ij})\}_{i,j} \), joint maximum likelihood estimation can be used to estimate all parameters:
\[ \{\hat{\alpha}_j, \hat{\beta}_j, \hat{c}_j\}_{j=1}^n, \{\hat{\theta}_i\}_{i=1}^m = \arg \max_{\alpha, \beta, c, \theta} \prod_{D} p_j(\theta_i)^{y_{ij}}(1 - p_j(\theta_i))^{(1-y_{ij})}, \]
where \( \{\hat{\alpha}_j, \hat{\beta}_j, \hat{c}_j\}_{j=1}^n \) are the estimated parameters of all questions, and \( \{\hat{\theta}_i\}_{i=1}^m \) are the estimated abilities (distribution) of the real humans, which can be used for subsequent comparisons between LLMs and humans. Therefore, a dataset suitable for comparing LLMs with humans needs to contain (1) response data from real humans and (2) the content of the questions. In traditional evaluation, achieving this comparability requires human groups and LLMs to answer the same question set or exam and then compare scores or accuracy. Luckily, IRT only needs each examinee to answer a small part of the whole question pool and does not require them to answer the same questions (Lord, 2012).
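For concreteness, the following is a minimal numpy sketch of this Stage-1 calibration, fitting Eq. (2) by gradient ascent on the joint likelihood of Eq. (3). The optimizer, learning rate, clipping ranges, and the mean-centering identifiability constraint are simplifying assumptions rather than the paper's exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def calibrate_3pl(Y, mask, n_iter=2000, lr=0.01):
    """Joint MLE of Eq. (3) by gradient ascent on the log-likelihood.
    Y:    (m, n) 0/1 response matrix (students x questions).
    mask: (m, n) 1 where a response exists, 0 otherwise (sparse answering)."""
    m, n = Y.shape
    theta = np.zeros(m)       # examinee abilities
    alpha = np.ones(n)        # discrimination
    beta = np.zeros(n)        # difficulty
    c = np.full(n, 0.1)       # guessing factor
    for _ in range(n_iter):
        u = sigmoid(alpha[None, :] * (theta[:, None] - beta[None, :]))
        p = c[None, :] + (1.0 - c[None, :]) * u                    # Eq. (2)
        g = mask * (Y - p) / np.clip(p * (1.0 - p), 1e-6, None)    # dLL/dp
        common = g * (1.0 - c[None, :]) * u * (1.0 - u)
        theta += lr * (common * alpha[None, :]).sum(axis=1)
        beta += lr * (-common * alpha[None, :]).sum(axis=0)
        alpha += lr * (common * (theta[:, None] - beta[None, :])).sum(axis=0)
        c += lr * (g * (1.0 - u)).sum(axis=0)
        alpha = np.clip(alpha, 0.2, 5.0)   # keep parameters in sane ranges
        c = np.clip(c, 0.0, 0.4)
        theta -= theta.mean()              # fix the scale indeterminacy
    return theta, alpha, beta, c
```

The step size and iteration count here suit small synthetic matrices; a production CAT system would use a more careful optimizer and regularization.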
**Question Characteristics.** In fact, *questions are not equally important for evaluating LLMs*. For example, consider two LLMs A and B with accuracies of 0.88 and 0.89 on one benchmark: their gap may not be as small as it seems, because (1) the massive number of easy samples/questions may overwhelm the difficult ones, so that B cannot show its strength over A; and (2) annotation errors/noise in the dataset can make the metric fail. IRT's fundamental assumption is that questions are not equal (Lalor et al., 2016). Different questions usually have different characteristics (e.g., difficulty, discrimination, and guessing factors): (1) Difficulty \( \beta \): the examinee's ability \( \theta \) and difficulty \( \beta \) share a unified scale. When \( \theta \) remains the same, the larger \( \beta \) is, the smaller the probability of a correct response. (2) Discrimination \( \alpha \): for questions with high \( \alpha \), slight changes in ability may lead to large changes in the probability \( p(\theta) \), so these items better differentiate examinees with similar abilities. (3) Guessing factor \( c \): the parameter \( c \in [0, 1] \) mainly reflects the probability that low-ability examinees answer the question correctly. As the ability level rises, the effect of \( c \) becomes smaller. More illustrations and cases of question characteristics can be found in Appendix A.2.

### 3.2 Stage 2: Adaptive Testing

After the construction of the question pool, the formal CAT starts in a question-LLM interactive mode. In this paper, the LLM's latent trait/ability is also denoted by \( \theta \). For an accurate and efficient assessment of its true ability \( \theta_0 \), CAT sequentially selects the best-fitting questions for the LLM from the question pool \( Q \) and then uses its responses for ability estimation. When the test stops, the final estimate is output as the result. To achieve such adaptivity, CAT includes two components, (1) ability estimation using IRT and (2) question selection, which work alternately at each test step:

**(1) Ability Estimation using IRT.** For adaptive question selection during the testing process, IRT is used to estimate the LLM's current ability \( \hat{\theta}^t \); we also illustrate the statistical properties of this estimate (Figure 3). Specifically, at test step \( t \in [1, 2, ..., T] \), given the LLM's previous \( t \) responses \( S_t = \{(q_1, y_1), ..., (q_t, y_t)\} \), where \( \{q_j\}_{j=1}^{t} \subseteq Q \) are selected sequentially by the selection algorithm and \( y \) is the binary outcome of correct or incorrect, the LLM's current ability can be estimated via maximum likelihood estimation (MLE):
\[ \hat{\theta}^t = \arg \max_{\theta} \ln \prod_{S_t} p_j(\theta)^{y_j}(1 - p_j(\theta))^{(1-y_j)}, \]
where \( p_j(\theta) \) represents the probability of the response \( (q_j, y_j) \) under IRT, as defined in Eq. (2). It has been proved that when the sample size \( t \) is large, the distribution of the estimator \( \hat{\theta}^t \) is approximately normal with mean \( \theta_0 \) and variance \( \frac{1}{tI(\theta_0)} \) (Ross, 2014; Efron & Hinkley, 1978), where \( I(\theta_0) \) is the Fisher information at \( \theta_0 \):

**Theorem 1** (Ross, 2014) Let the examinee's responses \( (q_1, y_1), ..., (q_t, y_t) \) of size \( t \) come from a distribution whose pdf or pmf is \( f(\theta) = p_j(\theta)^{y_j}(1 - p_j(\theta))^{(1-y_j)} \), with \( \theta \) the unknown ability parameter. Assume that the true ability is \( \theta_0 \) and the MLE result is \( \hat{\theta}^t \). Then the probability distribution of \( \hat{\theta}^t \) tends to a normal distribution:
\[ \hat{\theta}^t \sim N \left( \theta_0, \frac{1}{tI(\theta_0)} \right). \]

Obviously, as the number of test items (\( t \)) or the Fisher information (\( I \)) increases, the variance \( \frac{1}{tI(\theta_0)} \) continues to decrease. As shown in Figure 3, since the estimate is asymptotically unbiased (i.e., its mean equals the true value \( \theta_0 \)), the distribution keeps "tightening" as the variance decreases, reducing the uncertainty of the estimated ability \( \hat{\theta}^t \). Therefore, increasing \( t \) and the Fisher information are the two keys to improving the estimation accuracy.
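As a sanity check, the MLE of Eq. (4) over a handful of answered questions can be computed with a simple grid search over \( \theta \). The sketch below assumes pre-calibrated item parameters and is a simplification of whatever optimizer a production CAT system would use.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def estimate_theta(responses, grid=np.linspace(-4, 4, 801)):
    """Eq. (4): MLE of the current ability from the responses so far.
    responses: list of (alpha_j, beta_j, c_j, y_j) for answered questions."""
    loglik = np.zeros_like(grid)
    for a, b, c, y in responses:
        p = np.clip(c + (1.0 - c) * sigmoid(a * (grid - b)), 1e-6, 1 - 1e-6)
        loglik += y * np.log(p) + (1 - y) * np.log(1.0 - p)
    return grid[np.argmax(loglik)]

# two correct answers on hard, discriminative items push the estimate upward
print(estimate_theta([(1.2, 0.8, 0.1, 1), (1.0, 1.5, 0.1, 1)]))
```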
**(2) Question Selection.** To boost the efficiency of ability estimation and reduce the test length \( t \), it is crucial to minimize the variance (i.e., maximize \( I(\theta_0) \)). An important property of \( I(\theta) \) is that the contribution of each question to the total information is additive: \( I(\theta) = \sum_{j=1}^{t} I_j(\theta) \), where \( I_j(\theta) \) is the Fisher information of question \( j \). Therefore, the total amount of information for a test can be readily determined, and we can sequentially select \( T \) questions so that their Fisher information at \( \hat{\theta}^t, t = 1, 2, ..., T \), is as large as possible. More specifically, we retrieve the next question \( q_{t+1} \) from the pool \( Q \) based on the LLM's current estimate \( \hat{\theta}^t \):
\[ q_{t+1} = \arg \max_{j \in Q} I_j(\hat{\theta}^t), \]
where \( I_j(\theta) = \frac{[p'_j(\theta)]^2}{p_j(\theta)[1-p_j(\theta)]} \) can be viewed as the informativeness of question \( j \). After receiving the new response \( y_{t+1} \), IRT updates the ability estimate \( \hat{\theta}^{t+1} \) using Eq. (4). Compared with other, more complex selection algorithms (Chang & Ying, 1996; Bi et al., 2020; Ghosh & Lan, 2021; Zhuang et al., 2022), this Fisher information method is theoretically guaranteed and more interpretable. Plugging the specific IRT formula into \( I_j(\theta) \) shows that the Fisher method selects questions with (1) high discrimination and (2) difficulty close to the current ability estimate \( \hat{\theta}^t \) (Lord, 2012; Wang & Chang, 2011). Therefore, the Fisher method considers not only a question's value (i.e., its discrimination) but also how well its difficulty matches the examinee's ability. For example, when ChatGPT answers correctly at step $t$, the algorithm will choose a more difficult question next, and vice versa. This is why many high-ability GRE examinees find that the test questions become more and more difficult. In Section 4, we compare the efficiency of this adaptive testing framework with the traditional evaluation method.
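Putting Eq. (6) into code, the selection step is an argmax over the per-item Fisher information; the 3PL derivative \( p'_j(\theta) = (1 - c_j)\alpha_j u(1-u) \) with \( u = \text{sigmoid}(\alpha_j(\theta - \beta_j)) \) follows from Eq. (2). A minimal sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fisher_information(theta, alpha, beta, c):
    """I_j(theta) = p'_j(theta)^2 / (p_j(theta) (1 - p_j(theta))) under 3PL."""
    u = sigmoid(alpha * (theta - beta))
    p = c + (1.0 - c) * u
    dp = (1.0 - c) * alpha * u * (1.0 - u)   # p'_j(theta)
    return dp ** 2 / np.clip(p * (1.0 - p), 1e-9, None)

def select_next(theta_hat, alpha, beta, c, asked):
    """Eq. (6): the unasked question with maximal Fisher information."""
    info = fisher_information(theta_hat, alpha, beta, c)
    info[list(asked)] = -np.inf
    return int(np.argmax(info))
```

Note how the information peaks where the item difficulty sits near the current estimate, which is exactly the "harder question after a correct answer" behavior described above.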
## 4 Diagnostic Reports for LLMs

In this section, we first verify the evaluation efficiency of the proposed adaptive framework and measure and rank the latest 6 instruction-tuned LLMs by cognitive ability (Section 4.1). Then, taking ChatGPT as an example, we compare the LLM with humans on three aspects: Subject Knowledge (MOOC), Mathematical Reasoning (MATH), and Programming (CODE) (Section 4.2). The code can be found at https://anonymous.4open.science/r/CAT4LLM-D6C5.

### Datasets.

We choose three datasets for a fine-grained evaluation of LLMs in three key areas: Subject Knowledge Level, Mathematical Reasoning Level, and Programming Level, referred to as MOOC, MATH, and CODE, respectively. (1) **Subject Knowledge Level (MOOC):** Massive Open Online Courses (MOOC) is currently one of the most popular online learning systems, and this dataset¹ collects students' answer records on various knowledge concepts in computer science (e.g., Computer System, Data Structure, and Machine Learning). (2) **Mathematical Reasoning Level (MATH):** The MATH dataset contains mathematical test items and logs of high school examinations; it covers students from 378 high schools in more than 130 cities. (3) **Programming Level (CODE):** The CODE dataset includes the code submissions of students from more than 120 universities, collected from an online programming platform. For anonymity, we omit the names of the MATH and CODE datasets. Appendix A.1 shows the statistics of the datasets.

---
¹ https://www.biendata.xyz/competition/chainedream_mooccube_task2/

### Experimental Setup

First, as mentioned in Section 3.1, all examinee response data in the three datasets are used to estimate the question parameters (Eq. (3)) for constructing the question pools; a validation split is held out from each dataset to prevent overfitting. Second, the CAT system interacts with the LLM over multiple rounds: the LLM answers the questions picked by the selection algorithm, and IRT then updates the ability estimate based on each response. Since LLM responses are relatively lengthy, especially for fill-in-the-blank or short-answer questions, automated judging is impractical, and an expert is required to judge their correctness. Such LLM-CAT-Expert interactions are shown in Appendix A.3.

### Compared Examinees.

In addition to the popular ChatGPT, we compare human students with 6 commercial instruction-tuned LLMs: **High/Mid-Ability Student** (the ability value of the top 20%/50% of all students in the datasets), **ChatGPT** (OpenAI), **GPT4** (OpenAI), **Bard** (Google), **ERNIEBot** (Baidu), **QianWen** (Alibaba), and **Spark** (iFlytek).

### 4.1 Comparison of Different LLMs

In addition to ChatGPT, we use the above CAT method to compare the cognitive levels of the other models (Table 1). To enable an intuitive comparison with humans, we also show the ability estimates of high-ability (Top 20%) and mid-ability (Top 50%) students, where CODE and MOOC involve college students and MATH involves high school students.

**GPT4 is the best.** GPT4 scores significantly higher than the other LLMs in mathematical reasoning, programming, and subject knowledge. In particular, the subject knowledge level of GPT4 surpasses high-ability college students (Top 20%) on almost every knowledge concept. A large amount of knowledge can be "stored" thanks to its massive training data and unprecedented number of parameters, which is one reason the other language models cannot beat it.

**Each LLM has its own strengths.** For example, at the programming level (CODE), GPT4 is good at Dynamic Programming and Math Problems, and ChatGPT is good at Search problems. Although Spark's average programming ability is lower than GPT4's, using programming to solve mathematical problems is its forte.

Table 1: Estimated ability ($\hat{\theta}$) for students and each model. Boldface indicates the highest ability value among the LLMs. An underline indicates that the model surpasses mid-ability students (Top 50%). "*" indicates that the model surpasses high-ability students (Top 20%).
| Category | Bard | ChatGPT | GPT4 | ERNIEBot | QianWen | Spark | Top 20% | Top 50% |
|----------|------|---------|------|----------|---------|-------|---------|---------|
| **MATH** | | | | | | | | |
| Equations and Inequalities | 0.55 | 0.44 | *0.77* | 0.46 | 0.37 | *0.66* | 0.65 | 0.55 |
| Probability and Statistics | 0.36 | 0.14 | *0.59* | 0.14 | 0.14 | 0.37 | 0.66 | 0.57 |
| Function | 0.36 | 0.48 | 0.49 | 0.26 | 0.14 | *0.58* | 0.65 | 0.55 |
| Permutation and Combination | 0.12 | 0.03 | *0.58* | 0.25 | 0.13 | 0.57 | 0.65 | 0.56 |
| Geometry | 0.22 | 0.01 | 0.35 | *0.36* | 0.24 | 0.25 | 0.66 | 0.56 |
| Average | 0.32 | 0.21 | 0.56 | 0.29 | 0.21 | 0.49 | 0.65 | 0.56 |
| **CODE** | | | | | | | | |
| Dynamic Programming | 0.34 | *0.75* | *0.83* | 0.41 | 0.42 | 0.40 | 0.70 | 0.63 |
| Data Structure | 0.37 | 0.40 | *0.40* | 0.29 | 0.29 | 0.29 | 0.67 | 0.58 |
| Math Problem | 0.46 | 0.60 | *0.84* | 0.39 | 0.39 | 0.60 | 0.66 | 0.58 |
| Search | 0.23 | *0.73* | 0.51 | 0.41 | 0.41 | 0.41 | 0.70 | 0.61 |
| Tree and Graph Theory | 0.00 | 0.38 | *0.49* | 0.27 | 0.34 | 0.37 | 0.63 | 0.54 |
| Average | 0.28 | 0.57 | 0.61 | 0.35 | 0.37 | 0.40 | 0.67 | 0.59 |
| **MOOC** | | | | | | | | |
| Programming Language | *0.80* | 0.57 | *0.78* | 0.26 | 0.47 | 0.57 | 0.73 | 0.63 |
| Machine Learning | *0.78* | 0.67 | *0.99* | *0.77* | *0.88* | 0.25 | 0.55 | 0.48 |
| Computer System | 0.68 | 0.70 | *0.82* | 0.49 | 0.38 | 0.48 | 0.74 | 0.66 |
| Data Structure | 0.66 | 0.67 | 0.66 | 0.23 | 0.03 | 0.56 | 0.69 | 0.60 |
| Algorithm | *1.00* | 0.79 | *0.77* | 0.34 | 0.46 | 0.43 | 0.69 | 0.60 |
| Average | 0.78 | 0.68 | 0.80 | 0.42 | 0.44 | 0.46 | 0.68 | 0.60 |

Therefore, although many LLMs have not disclosed the specific details of the data they use, we have reason to infer that, e.g., ChatGPT/GPT4 used more coding-related data and Spark more mathematics-related data during training.

**Mathematical reasoning of LLMs still has a long way to go.** Mathematical reasoning is an important aspect of evaluating LLMs. Unfortunately, according to the estimated abilities output by CAT, even the well-performing GPT4 and Spark models are only equivalent to mid-ability high school students. After all, an LLM is essentially a probability-based sequence-to-sequence generative model rather than something that thinks and reasons like humans; the Transformer alone is obviously not enough to imitate human cognitive structures or processes. Therefore, problem solving based on cognition/reasoning (Liu et al., 2023; Ding et al., 2019; Lin et al., 2021) is still lacking in LLMs.

**Evaluation Efficiency.** In addition to the theoretical guarantees, we use simulation experiments to verify the evaluation efficiency of the framework. Since the true ability $\theta_0$ is unknown, we artificially generate $\theta_0$ for 100 examinees and conduct a simulation of ability estimation on the MATH dataset, using the mean squared error $\mathbb{E}[\|\hat{\theta}^t - \theta_0\|^2]$ between the ability estimate $\hat{\theta}^t$ at each step and the true ability $\theta_0$ (Figure 4(a)). The Fisher method reduces the evaluation error quickly: compared with a fixed test set (randomly sampled from the data distribution), the adaptive evaluation method in this paper needs at most 20% of the questions to reach the same estimation accuracy. Therefore, especially for tests that require human experts to score, this solution can greatly reduce labor costs and improve the efficiency of LLM evaluation. As 20 questions are a sufficient length for a typical adaptive test, we fix the maximum length to 20 and adaptively adjust the test length according to the informativeness metric (Wang et al., 2018). Thus, rather than evaluating on hundreds of questions (OpenAI, 2023a; Huang et al., 2023), the adaptive testing method picks out the truly valuable questions and needs at most 20 of them.
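The simulation protocol can be reproduced in miniature with synthetic item parameters: sample true abilities, run a Fisher-adaptive test and a random fixed test of the same length, and compare the MSE curves. The parameter ranges below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mle(answered, grid=np.linspace(-4, 4, 401)):
    # grid-search MLE of Eq. (4) over the questions answered so far
    ll = np.zeros_like(grid)
    for a, b, c, y in answered:
        p = np.clip(c + (1 - c) * sigmoid(a * (grid - b)), 1e-6, 1 - 1e-6)
        ll += y * np.log(p) + (1 - y) * np.log(1 - p)
    return grid[np.argmax(ll)]

def run_test(theta0, a, b, c, adaptive, T=20):
    answered, asked, sq_err, theta_hat = [], set(), [], 0.0
    for _ in range(T):
        if adaptive:  # Fisher-optimal item at the current estimate, Eq. (6)
            u = sigmoid(a * (theta_hat - b))
            p = c + (1 - c) * u
            info = ((1 - c) * a * u * (1 - u)) ** 2 / (p * (1 - p))
            info[list(asked)] = -np.inf
            j = int(np.argmax(info))
        else:         # "fixed benchmark": a random question each step
            j = int(rng.choice([k for k in range(len(a)) if k not in asked]))
        asked.add(j)
        y = int(rng.random() < c[j] + (1 - c[j]) * sigmoid(a[j] * (theta0 - b[j])))
        answered.append((a[j], b[j], c[j], y))
        theta_hat = mle(answered)
        sq_err.append((theta_hat - theta0) ** 2)
    return np.array(sq_err)

n = 500  # synthetic question pool
a, b, c = rng.uniform(0.5, 2.5, n), rng.normal(0, 1, n), rng.uniform(0, 0.3, n)
thetas = rng.normal(0, 1, 100)  # 100 artificial examinees
mse_adaptive = np.mean([run_test(t, a, b, c, True) for t in thetas], axis=0)
mse_random = np.mean([run_test(t, a, b, c, False) for t in thetas], axis=0)
print(mse_adaptive[-1], mse_random[-1])  # adaptive reaches a lower MSE sooner
```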
**Adaptive Question Selection.** To determine whether Computerized Adaptive Testing can adaptively select appropriate questions based on a model's ability, we employ the Jaccard similarity coefficient to measure the similarity between the test questions answered by any two models, defined as $Jaccard(A, B) = |A \cap B| / |A \cup B|$, where $A$ and $B$ represent two different question sets. Figure 4(b) shows the Jaccard similarity of the test questions selected by CAT for each LLM (on MATH). Remarkably, almost all Jaccard values hover around 0.6, indicating that at least 20-30% of the questions are distinct, which is crucial for achieving adaptivity. The remaining 70-80% of the questions answered by the LLMs are the same and are valuable for evaluating all LLMs. Together, these two segments compose a test paper that can effectively evaluate the model and enhance the precision of the ability assessment.

Figure 4: (a) Simulation experiments of ability estimation using MSE: \( \mathbb{E}[\|\hat{\theta}^t - \theta_0\|^2] \). (b) The average Jaccard similarity coefficient of the selected questions for each LLM. (c) SE curves of ChatGPT and students with different guess and slip factors during adaptive testing.

**Adaptive Testing's Reliability: ChatGPT is a "Careless Student".** To confirm that the adaptive testing framework used for humans can also be used for LLMs, we study its reliability via the SE curve (Wang et al., 2018; Choi et al., 2011). In the context of CAT, the SE value refers to the standard error of the ability estimate \( \hat{\theta}^t \), which reflects the precision of an examinee's ability estimate:
\[ SE(\hat{\theta}^t) = \frac{1}{\sqrt{\sum_{j=1}^{t} I_j(\hat{\theta}^t)}}. \]
A smaller SE indicates a more precise or reliable estimate (van der Linden & Glas, 2010; Wang et al., 2018). Figure 4(c) shows the SE changes during the testing process of ChatGPT (blue) and 100 students (black). Although ChatGPT's SE curve is not stable, it converges faster and more easily than the students'. To investigate the characteristics of ChatGPT's SE curve and gain deeper insight into its similarity with humans, we add guess and slip factors (Zhuang et al., 2022b) to the students' testing process: (1) Guess factor: even if an examinee has not mastered a question, there is a small chance of answering it correctly; (2) Slip factor: when encountering a simple question, there may be a small chance of answering it wrong. Thus, Guess10% means that the correctness label changes from 0 to 1 with probability 10%, and Slip10% means that the true label changes from 1 to 0 with probability 10%. Interestingly, ChatGPT's SE curve is very close to the SE curve of students with Guess=10% and Slip=30% (red). From this, we deduce that ChatGPT behaves like a "careless student" who is prone to slipping (30%) and occasionally guesses the answers (10%).
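Both the guess/slip perturbation and the SE computation are short enough to sketch directly; the random-number handling below is an assumption for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def apply_guess_slip(y_true, guess=0.10, slip=0.30, seed=0):
    """Flip 0 -> 1 with prob. `guess` and 1 -> 0 with prob. `slip`
    (e.g., Guess10%/Slip30%, the setting closest to ChatGPT's curve)."""
    rng = np.random.default_rng(seed)
    r = rng.random(len(y_true))
    y = np.where((y_true == 0) & (r < guess), 1, y_true)
    return np.where((y_true == 1) & (r < slip), 0, y)

def standard_error(theta_hat, alpha, beta, c):
    """SE(theta_hat) = 1 / sqrt(sum_j I_j(theta_hat)) over answered items."""
    u = sigmoid(alpha * (theta_hat - beta))
    p = c + (1.0 - c) * u
    info = ((1.0 - c) * alpha * u * (1.0 - u)) ** 2 / (p * (1.0 - p))
    return 1.0 / np.sqrt(info.sum())
```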
### 4.2 ChatGPT vs Human

In this part, we take ChatGPT as an example and evaluate it as if it were a real human, using this adaptive testing framework. First, we compare ChatGPT and high-ability humans on three aspects and provide a fine-grained diagnostic report. Next, we investigate the reliability of the CAT framework for LLMs and further explore the similarity between humans and LLMs. Many other findings can be found in the Appendix.

**(1) Subject Knowledge Level:** Figure 5 shows the ability comparison between ChatGPT and real students. In Figure 5(a), ChatGPT's ability level on the two concepts of Algorithm and Machine Learning is significantly higher than that of high-ability students. Programming Language is the weakest part of ChatGPT, which clearly does not match its superior performance in coding reported elsewhere (Kashefi & Mukerji, 2023; Biswas, 2023). To explore the reason, the right side of the figure shows a very basic question about Programming Language that ChatGPT gets wrong. Evidently, it is not proficient in grasping and understanding some basic concepts of programming languages. Combined with its amazing coding level on CODE (Figure 5(c)), we have reason to believe that ChatGPT is more of a "doer" than a "nerd".

**(2) Mathematical Reasoning Level:** From Figure 5(b), there is still a considerable gap between the mathematical reasoning ability of ChatGPT and that of humans. Surprisingly, during the test, ChatGPT incorrectly answers almost all questions about Probability and Statistics, Permutation and Combination, and Geometry, but its performance on Function and on Equations and Inequalities is relatively much better. Therefore, for basic calculation problems with fixed problem-solving routines, ChatGPT is competent. However, ChatGPT is unable to solve questions that require reasoning about real-world scenarios (e.g., Probability and Statistics, Permutation and Combination).

**(3) Programming Level:** Although ChatGPT has shown amazing coding capabilities both in official reports and in numerous user cases, it is neither omnipotent nor good at all problem types. We use the CODE programming platform to conduct a fine-grained evaluation of ChatGPT's programming ability (Figure 5(c)), covering Dynamic Programming and Greedy Algorithm, Search, Math Problem, Data Structure, and Tree and Graph Theory. Its strongest areas are Search, Dynamic Programming, and Greedy Algorithm, where it can greatly surpass high-ability college students. However, Data Structure and Tree and Graph Theory are its shortcomings. Therefore, the next time you ask ChatGPT to write code, try to avoid these types, and if you encounter a dynamic programming problem, feel free to hand it over to ChatGPT.

5 CONCLUSION AND FURTHER WORKS

More and more users are exploring LLMs' abilities in different aspects, even asking them to do things that "normal" NLP models cannot, such as generating code, making PowerPoint slides, and writing emails. Thus, how to evaluate their abilities scientifically and efficiently is increasingly important. In this paper, we leverage an adaptive testing framework designed for assessing humans: Computerized Adaptive Testing (CAT). Thanks to its high efficiency, fewer questions are required for the same evaluation accuracy, which greatly reduces labor costs and computational overhead. This paper is an initial attempt at evaluating LLMs using adaptive testing, and the techniques involved are simple yet interpretable.
Item Response Theory is unidimensional, while more complex cognitive science models, such as cognitive diagnosis, can be considered for a multidimensional and comprehensive assessment of the model. Furthermore, in addition to the ability estimation for LLM in this paper, some important concerns with LLMs, such as hallucinations, unfairness, security, and robustness, can also be estimated by designing corresponding selection algorithms to enhance assessment efficiency. REFERENCES Haoyang Bi, Haiping Ma, Zhenya Huang, Yu Yin, Qi Liu, Enhong Chen, Yu Su, and Shijin Wang. Quality meets diversity: A model-agnostic framework for computerized adaptive testing. In *2020 IEEE International Conference on Data Mining (ICDM)*, pp. 42–51. IEEE, 2020. Som Biswas. Role of chatgpt in computer programming.: Chatgpt in computer programming. *Mesopotamian Journal of Computer Science*, 2023:8–16, 2023. Hua-Hua Chang. Psychometrics behind computerized adaptive testing. *Psychometrika*, 80(1):1–20, 2015. Hua-Hua Chang and Zhiliang Ying. A global information approach to computerized adaptive testing. *Applied Psychological Measurement*, 20(3):213–229, 1996. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. A survey on evaluation of large language models. *arXiv preprint arXiv:2307.03109*, 2023. Seung W Choi, Matthew W Grady, and Barbara G Dodd. A new stopping rule for computerized adaptive testing. *Educational and Psychological Measurement*, 71(1):37–53, 2011. Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. Cognitive graph for multi-hop reading comprehension at scale. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 2694–2703, 2019. Chris Drummond and Nathalie Japkowicz. Warning: statistical benchmarking is addictive. kicking the habit in machine learning. *Journal of Experimental & Theoretical Artificial Intelligence*, 22(1):67–80, 2010. Bradley Efron and David V Hinkley. Assessing the accuracy of the maximum likelihood estimator: Observed versus expected fisher information. *Biometrika*, 65(3):457–483, 1978. Susan E Embretson and Steven P Reise. *Item response theory*. Psychology Press, 2013. Weibo Gao, Qi Liu, Zhenya Huang, Yu Yin, Haoyang Bi, Mu-Chun Wang, Jianhui Ma, Shijin Wang, and Yu Su. Rcd: Relation map driven cognitive diagnosis for intelligent education systems. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 501–510, 2021. Aritra Ghosh and Andrew Lan. Bobcat: Bilevel optimization-based computerized adaptive testing. In *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21*, pp. 2410–2417. International Joint Conferences on Artificial Intelligence Organization, 8 2021. Wynne Harlen. *The Assessment of Scientific Literacy in the OECD/PISA Project*, pp. 49–60. Springer Netherlands, Dordrecht, 2001. ISBN 978-0-306-47639-6. José Hernández-Orallo, Bao Sheng Loe, Lucy Cheke, Fernando Martínez-Plumed, and Seán Ó hÉigeartaigh. General intelligence disentangled via a generality metric for natural and artificial intelligence. *Scientific reports*, 11(1):22822, 2021. Giles Hooker, Matthew Finkelman, and Armin Schwartzman. Paradoxical results in multidimensional item response theory. *Psychometrika*, 74(3):419–442, 2009. Mark Hopkins and Jonathan May. Models of translation competitions. 
In *Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 1416–1424, 2013. Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuncheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models, 2023. Ali Kashefi and Tapan Mukerji. Chatgpt for programming numerical methods. *arXiv preprint arXiv:2303.12093*, 2023.
BifeBRhikU
Inconsistent Salient Weight Methodology between PTQ and QAT: The absence of a consistent methodology for salient weight protection between PTQ and QAT is concerning. While the effectiveness of using Hessian criteria for identifying salient weights in PTQ is demonstrated through performance comparisons, the rationale for using magnitude criteria to identify salient weights in QAT seems to be missing. Understanding the disparity in the approach to salient weight protection across PTQ and QAT is crucial for a holistic appreciation of the proposed method.
PB-LLM: Partially Binarized Large Language Models

Zhihang Yuan (Houmo AI)    Zhen Dong (UC Berkeley)

Abstract

This paper explores network binarization, a radical form of quantization that compresses model weights to a single bit, specifically for Large Language Model (LLM) compression. Because naïvely applying previous binarization methods causes LLMs to collapse, we propose a novel approach, the Partially-Binarized LLM (PB-LLM), which can achieve extremely low-bit quantization while maintaining the linguistic reasoning capacity of the quantized LLM. Specifically, our exploration first uncovers the ineffectiveness of naïve applications of existing binarization algorithms and highlights the imperative role of salient weights in achieving low-bit quantization. Thus, PB-LLM filters out a small fraction of salient weights during binarization and allocates them to higher-bit storage, i.e., partial binarization. PB-LLM is then extended to recover the capacity of quantized LLMs, analyzed from the perspectives of post-training quantization (PTQ) and quantization-aware training (QAT). Under PTQ, combining concepts from GPTQ, we reconstruct the binarized weight matrix guided by the Hessian matrix and successfully recover the reasoning capacity of PB-LLM at low bit-widths. Under QAT, we freeze the salient weights during training, explore the derivation of the optimal scaling factors crucial for minimizing the quantization error, and propose a scaling mechanism based on this derived scaling strategy for the residual binarized weights. These explorations and the developed methodologies contribute significantly to rejuvenating the performance of low-bit quantized LLMs and represent substantial advancements in the field of network binarization for LLMs. The code is available at PB-LLM.

1 Introduction

Large language models (LLMs) have recently gained significant traction in artificial intelligence, which can be attributed to the success of models such as ChatGPT [Brown et al., 2020, Ouyang et al., 2022]. Following its lead, other LLMs such as OPT [Zhang et al., 2022], BLOOM [Scao et al., 2022], and LLaMA [Touvron et al., 2023] have emerged, proving that an increase in model size typically results in enhanced capabilities. As a result, models with tens to hundreds of billions of parameters have become the norm. However, their vast size poses considerable deployment challenges on memory-constrained devices. A model such as LLaMA-65B (with 65 billion parameters) requires at least 130GB of memory for inference, a number that often exceeds the capacity of a single GPU or server.

Many methods have been proposed to reduce the memory consumption of LLMs [Yuan et al., 2024]. They can be categorized into weight quantization [Dettmers et al., 2022], network pruning [Frantar and Alistarh, 2023], and low-rank factorization [Zhang et al., 2023]. Among these compression paradigms, weight quantization is particularly prominent and widely adopted for LLMs, since it preserves the original model architecture and leverages the full-precision checkpoints of well-trained LLMs, which greatly simplifies the compression process [Zhu et al., 2023]. However, state-of-the-art LLM quantization methods show a marked decline in quality beyond 4 bits [Liu et al., 2023a]. More aggressive compression methods are required to push LLM quantization into the lower-bit range. The network binarization technique stands out here, reducing the bit-width of weights to just one bit [Helwegen et al., 2019, Rusci et al., 2020, Qin et al., 2020a; 2023].
The binarized models require little storage and memory and accelerate inference through efficient bitwise operations. Compared to other aggressive compression technologies such as high-sparsity pruning, network binarization has potent topological generality, as it applies only to the parameters. Binarization is widely studied in academic research as a standalone compression technique rather than simply a 1-bit specialization of quantization, and some state-of-the-art binarization algorithms have even achieved full-precision performance on large-scale tasks, e.g., ReActNet [Liu et al., 2020a] on ImageNet classification [Deng et al., 2009]. It is therefore theoretically possible to push LLM quantization significantly lower by generalizing the idea of binarization to the weights of LLMs.

In this paper, we explore network binarization specifically for LLM quantization and propose the Partially-Binarized LLM (abbreviated PB-LLM). This methodology aims for extreme quantization to the lowest possible bit-width while maintaining the language reasoning capacity inherent in LLMs. Our exploration indicates that simple adaptations of existing binarization algorithms do not work well for LLM quantization. This realization directs attention to the salient-weight property of LLM quantization: to achieve the desired extremely low-bit quantization, salient weights must be fully exploited. We investigate salient weights in terms of their detection criteria and granularity, as well as their storage cost, and then propose the partially-binarized matrix, which stores the salient weights at higher bit-width.

After establishing the foundation of PB-LLM, the exploration extends to regaining the lost reasoning capacity of the quantized LLMs under the frameworks of post-training quantization (PTQ) and quantization-aware training (QAT). From the PTQ perspective, inspired by the concepts of GPTQ [Frantar et al., 2022], we reconstruct the PB-LLM matrix guided by the Hessian matrix and successfully recover the reasoning capacity of PB-LLM at low bit-widths. From the QAT perspective, salient weights are frozen throughout the binarization process for efficient training. In addition, from the viewpoint of quantization-error minimization, we explore how binarized LLM weights should be scaled based on the ideal scaling factor, and we scale the binarized weights according to the derived scaling strategy shown in Fig. 1a. Low-bit quantized LLMs can significantly improve their performance with such explorations. Benefiting from the explorations of PTQ and QAT, PB-LLM can efficiently obtain an extremely low-bit LLM with comparable reasoning capacity (see Fig. 1b). The methodologies applied and the insights gained in this study stand to contribute substantially to the advancement of network binarization for LLMs.

2 RELATED WORKS

2.1 NETWORK BINARIZATION.

Binarization uses the sign function to binarize weights and activations to ±1. To eliminate the vanishing-gradient issue caused by the sign function, the straight-through estimator (STE) [Bengio et al., 2013] is utilized for backpropagation. Based on this archetype, copious studies contribute to improving the performance of BNNs. Binarization techniques can be broadly classified into three categories: the enhancement of training objectives, the reduction of gradient mismatch, and the minimization of quantization errors [Qin et al., 2020b; 2023, Yuan and Agaian, 2023]. To illustrate: Gradient Mismatch: Liu et al.
[2020b] introduce double residual connections paired with full-precision downsampling layers, addressing the gradient vanishing problem that arises from binarization. Training Objectives: Martinez et al. [2020], Shang et al. [2022a;b; 2021] focus on optimizing the loss function during training; they suggest aligning the spatial attention maps derived from both binary and real-valued convolutions. Quantization Error Minimization: Rastegari et al. [2016] identify that the quantization disparity between full-precision and binarized weights can impede the representational abilities of BNNs; as a solution, they introduce a scaling factor, determined by the L1 norm, for both weights and activations. While binarization has proven successful in computer vision, its exploration in natural language processing remains limited. Existing methods [Bai et al., 2020, Qin et al., 2022, Liu et al., 2022; 2023b] primarily target smaller language models (e.g., BERT-base [Devlin et al., 2018] with 110M parameters), potentially hindering their generalization to larger ones (e.g., LLaMA-7B [Touvron et al., 2023] with 7B parameters). We investigate binarization for LLMs comprehensively in this paper and propose PB-LLM, an attempt to compress LLMs using binarization.

2.2 Large Language Model Quantization.

Quantization, a prominent method in model compression, addresses the storage and computational overhead of deep learning models. Recent research efforts have successfully applied quantization to compress Large Language Models (LLMs), via both Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ). In the domain of QAT, innovative strategies like LLM-QAT [Liu et al., 2023a] address the challenge of acquiring training data for LLMs by leveraging pre-trained models for data-free distillation. Additionally, techniques such as QLoRA [Dettmers et al., 2023a] focus on parameter-efficient fine-tuning (PEFT), expediting model compression and inference acceleration. In PTQ, approaches range from quantizing only the weights of LLMs to jointly quantizing both weights and activations. Methods like GPTQ [Frantar et al., 2022] and QuIP [Chee et al., 2023] optimize matrix multiplications and propose novel layer-wise quantization techniques achieving high compression rates. SqueezeLLM [Kim et al., 2023] and SpQR [Dettmers et al., 2023b] identify weights that lead to particularly large quantization errors and store them with higher precision to mitigate the accuracy degradation caused by weight quantization. AWQ [Lin et al., 2023] and OWQ [Lee et al., 2023] contend that, when quantizing weights, it is crucial to account for the impact of activation outliers on the weights. Norm Tweaking [Li et al., 2023] addresses the issue of activation-value deviation by training LayerNorm. For activation quantization, ZeroQuant [Yao et al., 2022] proposes a fine-grained quantization method that can be applied to both weights and activations. Methods like SmoothQuant [Xiao et al., 2022] and Outlier Suppression [Wei et al., 2022; 2023] shift the quantization challenge from activations to weights via a mathematically equivalent per-channel scaling transformation. OmniQuant [Shao et al., 2023] further enhances performance by training the quantization parameters. RPTQ [Yuan et al., 2023a] improves performance through grouped quantization after clustering similar channels.
In this paper, our primary focus lies in the binarization of weights exclusively, employing both PTQ and QAT methodologies.

3 Partially Binarizing Large Language Models (PB-LLM)

In this section, we elaborate on the methodology of partially binarizing large language models, named PB-LLM. To begin, we review the foundational framework of binarized neural networks and show its applicability and limitations for LLM quantization. Subsequently, we formulate a novel format for the quantized matrix, specifically tailored to the binarization of LLMs. Taking advantage of the proposed partially-binarized weight matrix, we then delve into its potential in the realms of post-training quantization and quantization-aware training for LLMs, to break the trade-off between bit-width and performance. Note that, due to constraints in computational resources, the methodology exploration predominantly uses OPT-1.3B [Zhang et al., 2022] for the majority of experiments. Given space constraints, this section focuses on the key aspects of the methodology; for detailed discussions, exact result values, and specific implementation details in code, readers are referred to the supplemental materials.

3.1 Preliminary: Network Binarization

To begin with, we briefly review the general concept of network binarization and binarized neural networks (BNNs) [Courbariaux et al., 2016, Hubara et al., 2016]. As most optimizable quantized structures in LLMs are linear layers (see Fig. 1a), we use a one-layer perceptron to illustrate the training and inference processes of a BNN. The one-layer network is defined as \( f(a) = W a \), where \( a \in \mathbb{R}^{d_i} \) is the input activation and \( W : \mathbb{R}^{d_i} \rightarrow \mathbb{R}^{d_o} \) is the weight matrix, with \( d_i \) and \( d_o \) the input and output sizes of the layer, respectively. The goal of network binarization is to represent the floating-point (FP) weights \( W_F \) and/or FP activations \( a_F \) with 1-bit (i.e., \( \pm 1 \)) values [Qin et al., 2020b]. Networks using this representation are referred to as BNNs. BNNs diverge from FP neural networks in their forward operations and in the approximation of backward gradients.

In the forward propagation, the sign function is used to binarize the FP weights:
\[ \text{Forward: } \text{sign}(x) = \begin{cases} +1 & x \geq 0 \\ -1 & x < 0. \end{cases} \]
Specifically, during training, the BNN maintains FP latent weights \( W_F \) for gradient updates; the updated weight matrix \( W_F \) is binarized into the binary weight matrix \( W_B \) via the binarize function \( \text{sign}(\cdot) \), i.e., \( W_B = \text{sign}(W_F) \). The intermediate (full-precision) activation map of this layer is then produced as \( A_{F,o} = W_B A_{F,i} \).

For inference efficiency, BNNs with 1-bit weights significantly reduce the memory cost of inference. Theoretically, a BNN can binarize both weights and activations to 1 bit, providing a 32x compression in memory cost and a 64x acceleration in inference speed by replacing the FP multiplications of conventional floating-point networks with XNOR-Bitcount operations. However, recent studies highlight the weights of LLMs as the main contributor to memory overhead [Kim et al., 2023], and thus we primarily aim to curtail memory costs. Therefore, in this pivotal exploration of binarized LLMs, our attention is specifically centered on weight binarization, foregoing the simultaneous binarization of weights and activations.

In the backward propagation, the main challenge is that the pervasive \( \text{sign} \) functions are theoretically non-differentiable and thus break the gradient chain during backpropagation. To address this problem, researchers widely exploit the straight-through estimator (STE) [Bengio et al., 2013] to numerically approximate the derivative of the whole BNN [Qin et al., 2020b], i.e.,
\[ \frac{\partial L}{\partial x} = \begin{cases} \frac{\partial L}{\partial \text{sign}(x)} & |x| \leq 1 \\ 0 & |x| > 1, \end{cases} \]
which makes the optimization of BNNs accessible.
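A minimal PyTorch sketch of a binarized linear layer with STE, matching Eq. (1) for the forward pass and Eq. (2) for the backward pass. The per-row L1 scaling follows the XNOR-style practice cited in Sec. 2.1 and is an illustrative choice, not PB-LLM's final scaling mechanism (which is derived later under QAT).

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: W_B = sign(W_F), Eq. (1). Backward: straight-through
    estimator, Eq. (2): pass gradients where |w| <= 1, zero elsewhere."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

class BinaryLinear(torch.nn.Module):
    """Keeps full-precision latent weights W_F and binarizes on the fly."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(d_out, d_in) * 0.02)

    def forward(self, x):
        w_b = BinarizeSTE.apply(self.weight)
        alpha = self.weight.abs().mean(dim=1, keepdim=True)  # L1-style scale
        return torch.nn.functional.linear(x, alpha * w_b)
```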
Therefore, in this pivotal exploration of binarized LLMs, our attention is specifically centered on weight binarization, foregoing the simultaneous binarization of weights and activations.

In the backward propagation, the main challenge is that the pervasive \( \text{sign} \) functions are theoretically non-differentiable and thus break the gradient chain in the backward propagation. To address this problem, researchers widely exploit the straight-through estimator (STE) [Bengio et al., 2013] to numerically approximate the derivative of the whole BNN [Qin et al., 2020b], i.e.,

\[ \frac{\partial L}{\partial x} = \begin{cases} \frac{\partial L}{\partial \text{sign}(x)} & |x| \leq 1 \\ 0 & |x| > 1, \end{cases} \] (2)

which makes the optimization of BNNs tractable.

We first investigate the possibility of applying binarization to LLM quantization. Specifically, following the binarization benchmark in BiBench [Qin et al., 2023], we generalize some representative binarization methods to the LLM quantization scenario. BNN [Hubara et al., 2016], XNOR [Rastegari et al., 2016], Bi-Real [Liu et al., 2020b], ReCU [Xu et al., 2021a] and FDA [Xu et al., 2021b] are re-implemented to quantize LLMs, particularly OPT [Zhang et al., 2022]. Training details are given in Sec. 4. The results evaluated on seven zero-shot common sense reasoning tasks are shown in Fig. 2. We can see that the LLMs binarized via the existing popular binarization algorithms perform worse than random guessing, showing that the existing binarization methods are not suitable for LLM binarization.

3.2 Partially Binarized Weight Matrix

In the low-bit quantization of Transformers, a significant challenge is managing the salient weights, as they can unnecessarily extend the quantization range [Kovaleva et al., 2021]. Several outlier-aware LLM compression methods have been explored to tackle this issue [Dettmers et al., 2022, Wei et al., 2022, Kim et al., 2023, Lin et al., 2023, Yuan et al., 2023b]. Notably, SqueezeLLM [Kim et al., 2023] provides a generalized methodology for handling outliers in weight values during 4-bit LLM post-training quantization. Concurrently, AWQ [Lin et al., 2023] demonstrates that preserving only 1% of significant weights can benefit 4-bit LLM quantization. Motivated by existing research, this study also seeks to optimize the treatment of salient weights while binarizing most of the weights. We present Partially-Binarized LLMs (PB-LLM), a method involving the selective binarization of the LLMs’ weight matrix, wherein a minor fraction of weights is kept in high bits for enhanced language capacity.

### 3.2.1 Salient Weight: Criteria, Granularity, and Cost

Beyond the most straightforward method of choosing salient weights, namely element-wise selection by magnitude, we conduct a thorough investigation into salient weight detection from two perspectives: criteria and granularity. For criteria, we compare magnitude- and Hessian-based methods, and for granularity, we explore both element-wise and column-wise approaches. In addition, we discuss the cost of storing matrix weights in a mixed-precision manner.

**Criteria: Magnitude vs. Hessian.** Beyond the identification of salient weights through magnitude, alternative criteria have also been examined.
The Hessian metric emerges as a crucial factor in LLM quantization, as elucidated in [Dong et al., 2019, Frantar et al., 2022, Frantar and Alistarh, 2023], particularly in relation to post-training quantization for LLMs (details regarding the Hessian criteria for PTQ can be found in Sec. 3.3). However, we observe that the selection of salient weights, whether by magnitude or Hessian, does not significantly impact the efficacy of LLM partial binarization, especially under the framework of QAT. Consequently, magnitude is selected as the preferred criterion for the identification of salient weights in both PTQ and QAT, primarily due to its simplicity and efficacy in distinguishing critical weight components.

**Granularity: Element-wise vs. Column-wise.** Our investigations reveal that adopting a column-wise approach for selecting salient weights has the potential to impair the performance of binarization. Visualization of the salient weights’ distribution within the matrix, as depicted in Fig. 3 (where the white dots represent the filtered salient weights), reveals a random and uniform scattering of these weights. Given the absence of any discernible column-wise pattern in the distribution of salient weights, a column-wise filtration method is deemed unsuitable. This scattered and uniform distribution necessitates an element-wise approach for effective filtration in the binarization process.

**Salient Weight Storing Cost.** The additional overhead for storing the salient weights is acceptable. The overall bit number, $N_{bit}$, must adhere to the following condition:

$$N_{bit} \leq 1 \times r_{binary} + N_{salient-bit} \times (1 - r_{binary}) + 1,$$ (3)

Here, $r_{binary}$ denotes the ratio of the binarized weights, $N_{salient-bit}$ represents the number of bits allocated for storing salient weights (e.g., 8 bits), and the additional 1 bit is allocated for the bitmap mechanism [Chan and Ioannidis, 1998] used for index saving. It is important to note that employing a bitmap for index storage is not the most efficient method and can be optimized further using sparse matrix storage formats such as Compressed Sparse Row (CSR) or Compressed Sparse Column (CSC) [Borštnik et al., 2014]; hence the use of $\leq$ instead of $=$ in Eq. 3. The relationship between the ratio of salient weights and the overall bit number is illustrated in Fig. 4, depicting that a lower ratio corresponds to a reduced overall bit number. For example, retaining 10% of weights in 8 bits and binarizing the remaining 90% equates to, at most, a 2.7-bit quantization.

Table 1: Perplexity of C4 on OPT-1.3B quantized with RTN (without GPTQ) and PB-GPTQ. Magnitude criteria or Hessian criteria is used for detecting salient weights.

| Salient Fraction | 50% | 20% | 10% | 5% |
|------------------|-------|-------|-------|-------|
| RTN Magnitude | 24.5675 | 5892.0898 | 4889.0385 | 8023.1132 |
| RTN Hessian | 20.2512 | 2109.8522 | 7508.7788 | 6173.1611 |
| PB-GPTQ Magnitude| 18.3674 | 46.4093 | 895.0322 | 2880.6157 |
| PB-GPTQ Hessian | 17.7567 | 42.1157 | 165.6767 | 528.4877 |
| PB-GPTQ Magnitude g=128 | 18.0293 | 57.2164 | 1230.8537 | 2662.7114 |
| PB-GPTQ Hessian g=128 | 17.6000 | 45.9811 | 157.8825 | 646.3616 |

### 3.3 POST-TRAINING QUANTIZATION FOR PB-LLMs

After defining the partially-binarized matrix format, the next step is to recover the performance (i.e., the reasoning capacity, in the LLM literature) of the quantized PB-LLM. In this section, we explore weight binarization with post-training quantization (PTQ) methods.
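Before turning to the PTQ pipeline, here is a small self-contained sketch of the two ingredients of Sec. 3.2.1: element-wise, magnitude-based salient-weight detection and the storage-cost bound of Eq. (3). Function names are ours and the code is illustrative only.

```python
import torch


def salient_mask(w: torch.Tensor, salient_frac: float = 0.1) -> torch.Tensor:
    """Element-wise magnitude criterion: flag the top `salient_frac` of the
    entries of w (by absolute value) as salient; the rest get binarized."""
    k = max(1, int(round(salient_frac * w.numel())))
    thresh = torch.topk(w.abs().flatten(), k).values.min()
    return w.abs() >= thresh


def avg_bits(salient_frac: float, salient_bits: int = 8) -> float:
    """Upper bound of Eq. (3): 1 bit for the binarized fraction, `salient_bits`
    for the salient fraction, plus 1 bit for the bitmap index."""
    r_binary = 1.0 - salient_frac
    return 1 * r_binary + salient_bits * (1 - r_binary) + 1


# Reproduces the example in the text: 10% salient weights stored in 8 bits,
# 90% binarized, costs at most 2.7 bits per weight on average.
print(avg_bits(0.10))  # ~2.7
```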
PTQ methods hold a prominent position in the realm of quantization techniques for LLMs due to their ease of implementation. They enable direct quantization of pre-trained LLMs without the need for a training dataset or additional training overhead. Therefore, we first explore weight binarization within the PTQ framework. GPTQ [Frantar et al., 2022] is the most efficient and effective method for weight quantization [Zhu et al., 2023], capable of quantizing LLMs to 4-bit or even 2-bit. Therefore, we generalize the idea of GPTQ to the partial-binarization setting. Specifically, GPTQ quantizes the weights in an LLM layer-by-layer to minimize the layer-wise quantization error:

$$\arg \min_{\hat{W}} ||WX - \hat{W}X||_2^2$$ (4)

GPTQ quantizes a weight $w_q$ to $\hat{w}_q$, calculates the compensation $\delta_{-q}$ for the remaining weights $w_{-q}$, and then applies the compensation to the remaining weights:

$$\delta_{-q} = \frac{w_q - \hat{w}_q}{[H^{-1}]_{qq}} \cdot (H^{-1})_{:,q}, \quad w_{-q} := w_{-q} + \delta_{-q},$$ (5)

where $H$ is the Hessian matrix of the layer-wise quantization error with respect to the weights and $w_q$ is the $q$-th value in the flattened weight matrix $W$. In GPTQ, the weights are quantized iteratively and the remaining weights are updated until all weights have been quantized.

We propose to use GPTQ to iteratively binarize the un-salient weights and quantize the salient weights to higher bits, and then apply the compensation to the remaining weights. Specifically, we first detect the salient weights $W^{sal}$ and un-salient (to-be-binarized) weights $W^{unsal}$ in the weight matrix $W = W^{sal} + W^{unsal}$. Drawing inspiration from SparseGPT [Frantar and Alistarh, 2023], we calculate the saliency metric, represented as $v_i = w_i^2 / [H^{-1}]_{ii}$, for the purpose of detecting salient weights under the Hessian criterion. The un-salient weights are binarized to $\hat{W}^{unsal}$, and the salient weights are quantized to the higher-bit $\hat{W}^{sal}$. We use asymmetric per-channel quantization for both salient and un-salient weights. For un-salient weights, we use the per-channel mean as the zero point and calculate the optimal scaling factor $\alpha$ using the method in Sec. 3.4.2. We use the MinMax metric to calibrate the scaling factor and zero point for salient weights. In the quantization process, we iteratively quantize the columns of the weight matrix $W$: for each column, we binarize the un-salient weights and quantize the salient weights, then calculate the compensation and apply it to the remaining columns. This process is repeated until all weights are quantized. The proposed method is denoted as PB-GPTQ.

We also explore a fine-grained PB-GPTQ, which quantizes the weights in a group-wise manner. Specifically, the weight matrix is split into several groups, each containing $g$ columns. In each group, we detect the salient and un-salient weights, and then calibrate the scaling factor and zero point using the weights in this group.

The results are listed in Tab. 1. PB-GPTQ is significantly better than RTN. We note that the Hessian-based PB-GPTQ exhibits superior performance compared to the magnitude-criterion PB-GPTQ. The group-wise PB-GPTQ performs sometimes better and sometimes worse than the non-group-wise PB-GPTQ, but the difference is not significant.
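To summarize the procedure just described in code form, here is a heavily simplified sketch of one PB-GPTQ column step. All names are ours, per-column statistics stand in for the per-channel calibration, the dense Hessian inverse is assumed precomputed, and the compensation follows the sign convention of Eq. (5) exactly as written in the text; this is a sketch, not the authors' implementation.

```python
import torch


def binarize_unsalient(w: torch.Tensor) -> torch.Tensor:
    """Asymmetric binarization: the mean serves as the zero point, and the
    residual is binarized with the analytic scale alpha* = ||r||_1 / n
    from Sec. 3.4.2."""
    mu = w.mean()
    r = w - mu
    alpha = r.abs().mean()
    return mu + alpha * torch.where(r >= 0, torch.ones_like(r), -torch.ones_like(r))


def quantize_salient(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """MinMax asymmetric uniform quantization for the salient weights
    (range taken over the whole column for brevity)."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo).clamp(min=1e-8) / (2 ** bits - 1)
    return torch.round((w - lo) / scale) * scale + lo


def pb_gptq_column_step(W: torch.Tensor, Hinv: torch.Tensor, q: int,
                        salient: torch.Tensor) -> None:
    """Quantize column q of W in place, then spread the quantization error to
    the not-yet-quantized columns via the compensation of Eq. (5)."""
    w_q = W[:, q]
    w_hat = torch.where(salient[:, q], quantize_salient(w_q), binarize_unsalient(w_q))
    err = (w_q - w_hat) / Hinv[q, q]
    W[:, q] = w_hat
    W[:, q + 1:] += torch.outer(err, Hinv[q, q + 1:])
```

Iterating `pb_gptq_column_step` over all columns reproduces the binarize/quantize/compensate loop; the group-wise variant simply recomputes the calibration statistics every `g` columns.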
Our analysis suggests that the disparity in scaling factors is not the primary determinant of binarization performance; hence, introducing a group-wise methodology does not yield an enhancement in binarization performance. We therefore next apply QAT to reduce the error introduced by weight binarization.

3.4 Quantization-aware Training for PB-LLMs

In order to further enhance the reasoning capacity of the Partially-Binarized Large Language Models (PB-LLM), we extend our exploration by employing Quantization-aware Training (QAT) to train the quantized models. Because LLM training is expensive, we want PB-LLM training to be as efficient as possible. To realize efficient training for PB-LLM, we propose Salient Weights Frozen and an Optimal Scaling Factor for Binary Weights, targeting the salient weights and the binarized weights, respectively.

3.4.1 Salient Weights Frozen

To leverage the value of pretrained weights, we propose freezing the salient weights, determined by weight magnitude, prior to the weight binarization process. As illustrated in Fig. 1a, we initially filter out a number of weights from a pre-trained weight matrix (e.g., 2% by magnitude) at the beginning of quantization-aware training and keep them fixed throughout the training process. Examination of training efficiency (refer to Fig. 5) suggests that these salient weights play a crucial role in LLM capacity. Maintaining the high-bit representation of certain weights aids the training of quantized LLMs and reduces their optimization difficulty.

3.4.2 Optimal Scaling Factor for Binary Weights

AWQ [Lin et al., 2023] enhances the weight-only quantization method for LLMs by optimizing scaling factors to mitigate the quantization error of quantized weights. Specifically, AWQ demonstrates that searching for empirically optimal scaling factors proves to be an effective strategy for reducing quantization errors and recovering the performance of the quantized models. Fortunately, in the context of LLM binarization, we have a better choice for scaling the binarized weights: there is no need to search for optimal scaling factors, as they can be analytically derived. Specifically, we apply a column-wise scaling factor to the binarized weights to reduce the binarization error, i.e., enforcing \( w_F \approx \alpha \bar{w}_B \). The optimal value of the scaling factor \( \alpha \) for \( \bar{w}_B \in \{-1, 1\} \) can be calculated by minimizing the L2 error:

\[ \alpha^* = \arg \min_{\alpha \in \mathbb{R}_+} J(\alpha), \text{ in which } J(\alpha) = \| w_F - \alpha \bar{w}_B \|_2^2 \] (6)

Following XNOR-Net [Rastegari et al., 2016], expanding Eq. (6) gives

\[ J(\alpha) = \alpha^2 \bar{w}_B^\top \bar{w}_B - 2\alpha w_F^\top \bar{w}_B + w_F^\top w_F \] (7)

For a vector \( w_F \in \mathbb{R}^n \), we follow the traditional method of binarizing weights [Hubara et al., 2016] by taking the sign of the real-valued weights:

\[ \bar{w}_B = \text{sign}(w_F) = \begin{cases} +1, & w_F^i \geq 0; \\ -1, & w_F^i < 0. \end{cases} \] (8)
In that case, \( \bar{w}_B^\top \bar{w}_B = n_{w_F} \), where \( n_{w_F} \) is the number of elements in \( w_F \), and \( \alpha^* \) can be solved as:

\[ \alpha^* = \frac{w_F^\top \bar{w}_B}{n_{w_F}} = \frac{\| w_F \|_1}{n_{w_F}} \] (9)

A counterintuitive outcome emerges from the incorporation of the salient-frozen and optimal-scaling mechanisms: directly deploying these two mechanisms on a pre-trained LLM, even without any retraining or fine-tuning, still results in commendable performance. For instance, applying these techniques to OPT-1.3B with 50% salient weights (see Fig. 6) reveals that the partially-binarized OPT-1.3B retains a small amount of language capacity, corroborating the importance of a small number of salient weights in LLM quantization. Consequently, implementing just these two techniques, Salient Weights Frozen and the Optimal Scaling Factor for Binary Weights, on pre-trained LLMs serves as an efficient starting point for training PB-LLM.

Figure 7: QAT training results with 30% salient weights PB−LLM (upper two lines): As fine-tuning epochs increase, quantized models swiftly regain their reasoning capacities, demonstrating the resilience and adaptability of PB−LLM in sustaining cognitive functionalities within models, despite substantial quantization. QAT training results with 5% salient weights PB−LLM (bottom two lines): Existing LLM QAT methods exhibit an absolute failure when subjected to extremely-low-bit conditions. In contrast, PB−LLM triumphs in restoring the reasoning capacities of low-bit quantized LLMs. This underlines the efficacy of PB−LLM in balancing quantization and performance, preserving the essential reasoning abilities of LLMs even under rigorous bit reduction.

Both of the above-proposed mechanisms are very effective when used during quantization-aware training of PB−LLM. The consequential outcomes are delineated in Fig. 7. Observations from the presented results elucidate that optimizing with the partially-binarized quantization format is notably more straightforward than single-bit quantization. This empirical evidence corroborates the discussion regarding the rapid convergence property found in Sec. 3.4.1, highlighting the efficacy and adaptability of our proposed methodology in optimizing LLMs within the constraints of partial binarization. From the perspective of QAT, PB−LLM emerges as more efficient in training compared to existing LLM QAT methods. For instance, while models like LLM-QAT [Liu et al., 2023a] necessitate up to 100K iterations for adequate training, PB−LLM remarkably recovers the performance of quantized LLMs in merely around 1-10K iterations.

4 EXPERIMENTS

Besides the exploration with OPT-1.3B in Sec. 3, we assess the effectiveness of PB−LLM by conducting experiments on LLaMA-7B [Touvron et al., 2023] and presenting results on various tasks.

4.1 EXPERIMENTAL SETUP

Dataset. In this study, PB−LLM is trained using the RedPajama-simple-1B dataset, as the dataset used for LLaMA training is not openly accessible. Its parent dataset, RedPajama-1T, is structured to closely resemble the dataset described in the LLaMA paper and serves as a transparent, open-source alternative LLM training dataset.

Table 2: Zero-shot performance on Common Sense Reasoning tasks within a 4-bit setting. Reported results of previous works are documented in their papers. PB-LLM 30% denotes the preservation of 30% salient weights, and PB-LLM 10% implies the preservation of 10% salient weights.
| Method | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-E | ARC-C | OBQA | Avg |
|-----------------|-------|------|-----------|------------|-------|-------|------|-----|
| FP LLaMA-7B | 76.8 | 79.3 | 76.1 | 70.0 | 73.0 | 48.0 | 57.6 | 68.7|
| RTN | 71.2 | 77.3 | 72.7 | 66.9 | 68.8 | 46.4 | 52.8 | 65.2|
| SmoothQuant | 67.7 | 76.0 | 69.4 | 66.7 | 66.9 | 43.0 | 50.6 | 63.0|
| LLM-QAT | 75.5 | 78.3 | 74.0 | 69.0 | 70.0 | 45.0 | 55.4 | 66.6|
| PB-GPTQ 10% | 62.3 | 55.9 | 27.7 | 49.3 | 29.3 | 20.1 | 10.6 | 36.5|
| PB-GPTQ 30% | 73.5 | 74.9 | 47.5 | 64.9 | 61.3 | 32.4 | 25.2 | 54.2|
| PB-LLM 10% | 68.9 | 67.8 | 68.1 | 67.4 | 58.7 | 42.9 | 50.6 | 60.6|
| PB-LLM 30% | 75.7 | 78.0 | 74.3 | 69.7 | 69.0 | 45.6 | 55.8 | 66.9|

It amalgamates data from diverse sources including CommonCrawl, C4, GitHub, Wikipedia, Gutenberg Books3, ArXiv, and StackExchange. RedPajama-simple-1B, representing a 0.1% subset of RedPajama-1T, is substantially smaller than the typical datasets used for training other LLMs, making it a convenient choice for our experiments.

Training Details. In the training process of our quantized network, we commence with a pre-trained model for initialization. The optimization of the model is facilitated through the AdamW optimizer [Loshchilov and Hutter, 2017], applied with zero weight decay. We assign a batch size of 1 to each GPU and use a learning rate of 2e-5, adhering to a cosine learning rate decay strategy. We only fine-tune our PB-LLM for 10K iterations.

Evaluated Tasks. To reduce the variance of the evaluated performance, we evaluate the binarized LLMs on seven zero-shot common sense reasoning tasks, i.e., BoolQ [Clark et al., 2019], PIQA [Bisk et al., 2020], HellaSwag [Zellers et al., 2019], WinoGrande [Sakaguchi et al., 2021], ARC-Easy, ARC-Challenge [Clark et al., 2018], and OBQA [Mihaylov et al., 2018]. We also evaluate the quantized models’ perplexity on WikiText2 [Merity et al., 2016] and C4 [Raffel et al., 2020].

4.2 Results on LLaMA

Experiments were conducted on LLaMA-7B. The results of employing PB-GPTQ and PB-LLM are illustrated in Tabs. 2 and 3. When employing PTQ, PB-GPTQ exhibited commendable performance, particularly when the salient-weight fraction exceeded 30%. Nevertheless, a noteworthy decline in the performance of the quantized network was observed when the salient-weight fraction was reduced to 10%. On the other hand, employing QAT resulted in a notable improvement in performance. A comparison within a 4-bit quantization setting between PB-LLM 30% and LLM-QAT in Tab. 2 reveals superior performance by our method. It is notable that PB-LLM is only fine-tuned for 10K iterations, whereas LLM-QAT underwent 100K iterations of training, showing its fast convergence property (refer to Sec. 3.4). The results under PB-LLM 10% represent the outcomes of PB-LLM where 10% of salient weights are preserved; this demonstrates the potential for advancing LLM quantization towards a fully 1-bit state.

Table 3: Perplexity on C4, WikiText2, and PTB for LLaMA-7B quantized with PTQ methods.
| Method | C4 | WIKI | PTB |
|-----------------|-------|-------|-------|
| FP | 7.3435| 5.6770| 41.1509|
| GPTQ 4b | 8.6977| 8.1368| 57.9951|
| SparseGPT 50% | 15.5949| 12.8295| 505.1396|
| PB-GPTQ 50% | 8.1466| 6.3089| 54.8674|
| PB-GPTQ 20% | 20.6057| 17.1929| 280.4353|
| PB-GPTQ 10% | 72.1115| 85.7838| 708.4120|
| PB-GPTQ 5% | 401.6475| 619.1054| 1687.1815|

5 Conclusion

In conclusion, this work is the first to implement network binarization for LLM quantization, introducing the novel Partially-binarized LLM (PB-LLM) methodology. This approach is meticulously designed to maintain the linguistic reasoning capabilities of LLMs, even under extremely low-bit quantization. The research unearthed the significant role of salient weights in achieving extreme quantization and proposed innovative strategies, such as optimal scaling, for effective binarization. The framework is further extended to recover the capacities of quantized LLMs, analyzed from the perspectives of post-training quantization (PTQ) and quantization-aware training (QAT). The methodology is a significant stride in the realm of network binarization for LLMs.

REFERENCES

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *Advances in Neural Information Processing Systems*, 35:27730–27744, 2022.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*, 2022.

Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. *arXiv preprint arXiv:2211.05100*, 2022.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023.

Zhihang Yuan, Yuzhang Shang, Yang Zhou, Zhen Dong, Chenhao Xue, Bingzhe Wu, Zhikai Li, Qingyi Gu, Yong Jae Lee, Yan Yan, et al. Llm inference unveiled: Survey and roofline model insights. *arXiv preprint arXiv:2402.16363*, 2024.

Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm.int8(): 8-bit matrix multiplication for transformers at scale. *arXiv preprint arXiv:2208.07339*, 2022.

Elias Frantar and Dan Alistarh. Sparsegpt: Massive language models can be accurately pruned in one-shot. *ICML*, 2023.

Mingyang Zhang, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, Bohan Zhuang, et al. Pruning meets low-rank parameter-efficient fine-tuning. *arXiv preprint arXiv:2305.18403*, 2023.

Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. A survey on model compression for large language models. *arXiv preprint arXiv:2308.07633*, 2023.

Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. Llm-qat: Data-free quantization aware training for large language models.
*arXiv preprint arXiv:2305.17888*, 2023a. Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, and Roeland Nusselder. Latent weights do not exist: Rethinking binarized neural network optimization. *Advances in neural information processing systems*, 2019. Manuele Rusci, Alessandro Capotondi, and Luca Benini. Memory-driven mixed low precision quantization for enabling deep network inference on microcontrollers. *MLSys*, 2020. Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, Fengwei Yu, and Jingkuan Song. Forward and backward information retention for accurate binary neural networks. In *CVPR*, 2020a. Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Fisher Yu, and Xianglong Liu. Bibench: Benchmarking and analyzing network binarization. *ICML*, 2023. Zechun Liu, Zhiqiang Shen, Marios Savvides, and Kwang-Ting Cheng. Reactnet: Towards precise binary neural network with generalized activation functions. In *ECCV*, 2020a. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, 2009. Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. *arXiv preprint arXiv:2210.17323*, 2022. Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv:1308.3432*, 2013. Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, and Nicu Sebe. Binary neural networks: A survey. *Pattern Recognition*, 105:107281, 2020b.
9Cu8MRmhq2
While the authors effectively illustrate the motivation in Fig. 1, the advantages of the proposed OT method over DTW require more elaboration. It is advisable to include further discussions to expound upon and clarify the claims made regarding the superiority of OT over DTW.
MULTI-GRANULARITY CORRESPONDENCE LEARNING FROM LONG-TERM NOISY VIDEOS

Yijie Lin\textsuperscript{1} Jie Zhang\textsuperscript{2} Zhenyu Huang\textsuperscript{1} Jia Liu\textsuperscript{1} Zujie Wen\textsuperscript{3} Xi Peng\textsuperscript{1,*}
\textsuperscript{1}Sichuan University \textsuperscript{2}Beijing University of Posts and Telecommunications \textsuperscript{3}Dalian University of Technology
\{linyijie.gm, pengx.gm\}@gmail.com

ABSTRACT

Existing video-language studies mainly focus on learning short video clips, leaving long-term temporal dependencies rarely explored due to the prohibitively high computational cost of modeling long videos. To address this issue, one feasible solution is learning the correspondence between video clips and captions, which, however, inevitably encounters the multi-granularity noisy correspondence (MNC) problem. To be specific, MNC refers to clip-caption misalignment (coarse-grained) and frame-word misalignment (fine-grained), hindering temporal learning and video understanding. In this paper, we propose NOise Robust Temporal Optimal traNsport (Norton), which addresses MNC in a unified optimal transport (OT) framework. In brief, Norton employs video-paragraph and clip-caption contrastive losses to capture long-term dependencies based on OT. To address coarse-grained misalignment in video-paragraph contrast, Norton filters out irrelevant clips and captions through an alignable prompt bucket and realigns asynchronous clip-caption pairs based on the transport distance. To address fine-grained misalignment, Norton incorporates a soft-maximum operator to identify crucial words and key frames. Additionally, Norton exploits potential faulty negative samples in clip-caption contrast by rectifying the alignment target with the OT assignment to ensure precise temporal modeling. Extensive experiments on video retrieval, videoQA, and action segmentation verify the effectiveness of our method. Code is available at https://lin-yijie.github.io/projects/Norton.

1 INTRODUCTION

Video-Language Pre-training (VLP) has emerged as a popular approach for video understanding (Miech et al., 2020; Bain et al., 2021; Ge et al., 2022; Wang et al., 2022c; Luo et al., 2020) in recent years. Although promising results have been achieved, the pioneering works are mainly devoted to learning short video clips while overlooking long-term temporal dependencies. In practice, it is generally acknowledged that long-term temporal dependencies play an indispensable role in understanding the relationships and transitions over time in various applications such as video-paragraph retrieval (Yang et al., 2023b; Sun et al., 2022) and action segmentation (Tang et al., 2019). To learn long-term temporal correspondence from long videos, one important challenge is the heavy demand for computational resources. For example, Han et al. (2022); Bertasius et al. (2021) employ long-form vision transformers to capture the temporal correlation, which involves computing cross-attention among all frames in a long video. As long videos are typically composed of a sequence of short video clips according to ASR timestamps (Miech et al., 2020), an alternative approach is to explore the temporal correlation among video clips and captions.
For instance, TempCLR (Yang et al., 2023b) uses Dynamic Time Warping (Müller, 2007; Cuturi & Blondel, 2017; Zhou & Torre, 2009) to measure the sequential distance between video clips and captions, and incorporates the temporal correlation across clips by contrasting the video with the paragraph. This strategy is remarkably more efficient than directly modeling the entire video, making it an attractive option for learning long-term temporal correspondence.

However, dividing long videos into short clips inevitably introduces an accompanying challenge, i.e., multi-granularity noisy correspondence (MNC). As shown in Fig. 1, MNC refers to misaligned video-text pairs at two different granularities: i) Coarse-grained misalignment (Clip-caption). Coarse-grained misalignment includes asynchronous and irrelevant misalignments, according to whether a clip/caption is alignable with the captions/clips in the long video. To be specific, asynchronous misalignment refers to temporal misalignment between subtitles and visual clips, e.g., $t_1$ in Fig. 1. It often occurs when people explain their actions before or after actually performing them, resulting in a mismatch between the order of statements and actions. On the other hand, irrelevant misalignment refers to irrelevant or meaningless captions that cannot be aligned with any available video clips (e.g., $t_2$ and $t_6$ in Fig. 1), and vice versa for video clips. According to Han et al. (2022), only 30% of clip-caption pairs are visually aligned in HowTo100M (Miech et al., 2019), with an even smaller 15% being naturally well-aligned; ii) Fine-grained misalignment (Frame-word). Within each video clip, the narration sentences may only partially correlate with the visual frames. As depicted in Fig. 1, “the sugar goes on top” in $t_5$ is strongly correlated with the visual content $v_5$, while the action “watch the glaze take off” is uncorrelated. Irrelevant words or frames can distort the identification of crucial ones and result in inaccurate similarity measurements, further contaminating the clip-caption alignment. Note that only a few methods (Han et al., 2022) consider the coarse-grained misalignment problem in temporal learning, while none of them address this fine-grained misalignment problem. Undoubtedly, MNC poses a significant obstacle to effective temporal modeling.

To this end, we propose NOise Robust Temporal Optimal traNsport (Norton), a unified optimal transport approach for addressing multi-granularity noisy correspondence in temporal learning. Specifically, Norton proposes a video-paragraph and a clip-caption contrastive loss based on optimal transport (OT) to explore the temporal correlations. In video-paragraph contrast, Norton employs OT to measure sequence distances between video clips and captions from a fine-to-coarse perspective. To handle fine-grained misalignment, Norton incorporates a token-wise soft-maximum operator to identify crucial words and key frames within each clip-caption pair. This operator improves the measurement of clip-caption similarity based on fine-grained multi-modal interactions. Building upon this clip-caption similarity, Norton establishes a flexible assignment between clips and captions by maximizing the global alignment similarity of OT. Based on the transport assignment, Norton realigns each video clip to multiple related captions, and vice versa, thereby mitigating the asynchronous misalignment.
To further address the irrelevant misalignment, Norton introduces an alignable prompt bucket, which serves as a candidate alignment target for noisy clips or captions. By discarding the ones aligned to the bucket, Norton effectively filters out meaningless content during the OT process. Note that our late interaction between clips and captions through OT alleviates the computational cost of directly modeling long videos. In clip-caption contrast, Norton tackles the faulty negative problem (Chuang et al., 2020; Yang et al., 2021b) through OT. Specifically, semantically similar clips and captions would be wrongly treated as negatives in contrastive learning (Chen et al., 2020; Lin et al., 2021; 2022; Liu et al., 2022a) and impact the clip-wise representation. Norton leverages the OT assignments of within-batch clip-caption pairs as additional supervision in the clip-caption contrastive loss, which exploits potential faulty negative samples and improves temporal learning. The main contributions of this work are summarized below:

- We reveal the multi-granularity noisy correspondence problem in temporal learning, which refers to coarse-grained asynchronous and irrelevant misalignments, as well as fine-grained misalignment.
- We achieve efficient and robust correspondence learning by incorporating several innovative components such as the soft-maximum operator, alignable prompt bucket, and faulty negative exploitation within the optimal transport framework. Extensive experiments on various tasks including video retrieval, videoQA, and action segmentation verify its effectiveness.

2 RELATED WORK

Video Temporal Learning. Temporal learning is a critical yet challenging topic in video understanding. Traditional works focus on integrating spatial-temporal operations into convolution (Feichtenhofer et al., 2019) or Transformer architectures (Bertasius et al., 2021; Wang et al., 2023; Sun et al., 2022). Inspired by image-language pre-training approaches (Radford et al., 2021; Jia et al., 2021), recent works leverage natural language to guide video temporal learning. Among these works, one scheme is “sorting the clips” (Zellers et al., 2021; Zeng et al., 2023a;b; Ma et al., 2023), which involves ranking the video clips according to their sequential sentences. While effective, this framework generally requires encoding the long video into one sequence and entails significant computational resources. Another scheme leverages Dynamic Time Warping (Yang et al., 2023b; Müller, 2007; Dvornik et al., 2021) to measure the sequence distance between video clips and captions, and achieves temporal learning by aligning the video with the corresponding paragraph. Although promising results have been achieved, existing temporal learning methods suffer from the noisy correspondence problem, where the ground-truth order of captions w.r.t. video clips does not conform to the original timestamp order. This issue can significantly impact temporal learning, leading to suboptimal results for sorting-based and DTW-based approaches. Different from these works, this paper is dedicated to solving noisy correspondence in temporal learning and accordingly proposes an MNC-robust optimal transport framework that effectively measures sequence similarity between noisy videos and paragraphs.

Noisy Correspondence Learning in Video-language Pre-training. Video-language pre-training has achieved promising progress thanks to large-scale datasets such as HowTo100M (Miech et al., 2019).
As the text description is often not well-aligned with the visual content (Han et al., 2022), noisy correspondence learning (Huang et al., 2021; Gao et al., 2021) has become a new focus in VLP. To be specific, MIL-NCE (Miech et al., 2020) first studies this problem by simply aligning each video clip with multiple adjacent sentences to mitigate the impact of noise. TAN (Han et al., 2022) proposes a co-training strategy that uses mutual agreement to filter out the noisy pairs. Different from the above on-the-fly noise-rectification methods, DeCEMBERT (Tang et al., 2021) approaches the problem from a data-collection angle, generating high-quality video descriptions using an off-the-shelf image captioning model. Our method differs from existing works in two key aspects. First, the above noisy correspondence methods only consider coarse-grained asynchrony while ignoring the frame-word misalignment problem. In contrast, we point out that fine-grained misalignment can impact temporal learning and accordingly propose a unified optimal transport approach that effectively addresses noisy correspondence at both the coarse and fine-grained levels. Second, our method is computationally efficient with a low memory cost. It operates in a bootstrapping manner without requiring additional models, e.g., dual networks (Han et al., 2022), momentum networks (Li et al., 2021; Han et al., 2022), or image captioning models (Tang et al., 2021). These advantages make our approach more practical and scalable for real-world applications.

Optimal Transport. OT was originally proposed to measure the distance between two probability distributions. Recently, OT has gained significant attention in various fields such as domain adaptation (Xu et al., 2020), clustering (Caron et al., 2020), document matching (Yu et al., 2022; Kusner et al., 2015), and sequence alignment (Su & Hua, 2017; Liu et al., 2022b). However, none of these works specifically addresses the alignment of video and text, which is the primary focus of our research. In addition to addressing traditional sequence alignment, we point out the fine-grained misalignment problem that is specific to video-text learning. Experimental results show that the proposed multi-grained alignment effectively improves temporal learning.

Figure 2: Overview of our multi-granularity correspondence learning. We perform video-paragraph contrastive learning to capture long-term temporal correlations from a fine-to-coarse perspective. Specifically, we first utilize the log-sum-exp operator on the frame-word similarity matrix to obtain the fine-grained similarity between a clip and a caption. Additionally, we append an alignable prompt bucket to the clip-caption similarity matrix to filter out irrelevant clips or captions. By applying Sinkhorn iterations on the clip-caption similarity matrix, we effectively tackle the asynchronous problem and obtain the optimal transport distance as the video-paragraph similarity.

3 METHOD

In this section, we first introduce the overall pre-training objective of Norton in Section 3.1. Subsequently, we elaborate on our multi-granularity correspondence learning in Section 3.2 and explain how to exploit faulty negative samples in clip-caption contrastive learning in Section 3.3.

3.1 PRE-TRAINING OBJECTIVE

Given an instructional video dataset \( D = \{V_i, T_i\}_{i=1}^N \), where \( V_i \) and \( T_i \) represent the video and paragraph of the \( i \)-th instance, we formulate each video/paragraph as a sequence of video clips/captions according to the ASR timestamps.
Specifically, we mark the video clips and captions in the \( i \)-th video as \( \{v_a\}_{a=1}^n \) and \( \{t_b\}_{b=1}^m \) (the video index \( i \) is omitted for brevity). Within each clip \( v_a \) and caption \( t_b \), \( \{v_a^i\}_{i=1}^f \) and \( \{t_b^j\}_{j=1}^w \) represent the frames and words, where \( f \) and \( w \) denote the lengths of the clip and caption, respectively. Based on the above definitions, we propose the following training objective:

\[ L = L_{\text{clip}} + \lambda L_{\text{video}}, \] (1)

where the video-paragraph contrastive loss \( L_{\text{video}} \) explores the temporal correlations between the long video \( V_i \) and its corresponding paragraph \( T_i \) through a novel noise robust temporal optimal transport distance. The clip-caption contrastive loss \( L_{\text{clip}} \) exploits potential faulty negative samples to improve clip representation and ensure accurate temporal modeling. We will elaborate on these two losses in the following sections.

3.2 CORRESPONDENCE LEARNING VIA ROBUST OPTIMAL TRANSPORT

As long videos are typically composed of a sequence of short video clips, we propose to use the optimal transport distance between video clips and captions as the similarity criterion for video-paragraph contrastive learning in a robust and efficient way. Let \( S \in \mathbb{R}^{n \times m} \) denote the clip-caption similarity matrix, where \( [S]_{a,b} \) measures the similarity between clip \( v_a \) and caption \( t_b \), and let \( Q \in \mathbb{R}_+^{n \times m} \) denote the corresponding transport assignment, where \( [Q]_{a,b} \) represents the probability of aligning \( v_a \) with \( t_b \). Optimal transport seeks to establish a flexible alignment between clips and captions by maximizing the global similarity \( \langle Q, S \rangle = \text{tr}(Q^\top S) \). Formally, the objective of optimal transport is defined as follows:

\[ \max_{Q \in \mathcal{Q}} \ \langle Q, S \rangle + \varepsilon H(Q) \quad \text{s.t.} \quad \mathcal{Q} = \left\{ Q \in \mathbb{R}_+^{n \times m} \mid Q 1_m = \mu, \ Q^\top 1_n = \nu \right\}, \] (2)

where \( 1_m \) represents the vector of ones in dimension \( m \), and \( \mu \in \mathbb{R}^n \) and \( \nu \in \mathbb{R}^m \) indicate the relative importance of each clip or caption. Since each clip or caption is sampled independently, we choose the uniform probability distributions \( \mu = \frac{1}{n} 1_n \) and \( \nu = \frac{1}{m} 1_m \) to assign equal weight to each instance, following Su & Hua (2017). \( H(Q) \) is an entropy regularizer derived from the optimization perspective (Cuturi, 2013), and \( \varepsilon \) controls its smoothness. As illustrated in Eq. (2), optimal transport can realign each clip or caption to multiple related captions or clips based on global similarity, thus effectively resolving the potential asynchronous misalignment between the two modalities. The optimum \( Q^* \) of Eq. (2) has a simple normalized exponential matrix solution obtained by Sinkhorn fixed-point iterations (Cuturi, 2013),

\[ Q^* = \text{Diag}(\kappa_1) \exp \left( \frac{S}{\varepsilon} \right) \text{Diag}(\kappa_2), \] (3)

with iteratively updated \( \kappa_1 \leftarrow \mu ./ \left( \exp(S/\varepsilon) \kappa_2 \right) \) and \( \kappa_2 \leftarrow \nu ./ \left( \exp(S^\top/\varepsilon) \kappa_1 \right) \), where \( \kappa_1 \in \mathbb{R}^n \) and \( \kappa_2 \in \mathbb{R}^m \) are the non-negative left and right scaling vectors.
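For reference, the Sinkhorn iterations of Eq. (3) can be sketched in a few lines of PyTorch; this is our own illustrative code, and a log-domain implementation would be preferable for numerical stability when \( \varepsilon \) is small.

```python
import torch


def sinkhorn(S: torch.Tensor, eps: float = 0.1, n_iter: int = 50) -> torch.Tensor:
    """Solve the entropy-regularized OT problem of Eq. (2) for a similarity
    matrix S (n x m) with uniform marginals mu = 1/n, nu = 1/m, returning the
    transport assignment Q* of Eq. (3)."""
    n, m = S.shape
    mu = torch.full((n,), 1.0 / n, dtype=S.dtype)
    nu = torch.full((m,), 1.0 / m, dtype=S.dtype)
    K = torch.exp(S / eps)                        # exp(S / eps); maximization form
    k1 = torch.ones_like(mu)
    k2 = torch.ones_like(nu)
    for _ in range(n_iter):
        k1 = mu / (K @ k2)                        # kappa_1 <- mu ./ (exp(S/eps) kappa_2)
        k2 = nu / (K.t() @ k1)                    # kappa_2 <- nu ./ (exp(S^T/eps) kappa_1)
    return k1.unsqueeze(1) * K * k2.unsqueeze(0)  # Diag(kappa_1) exp(S/eps) Diag(kappa_2)
```

Row sums of the returned assignment approximate \( \mu \) and column sums approximate \( \nu \), so each clip distributes its unit mass over the captions it best matches.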
By utilizing the OT distance between clips and captions as the video-paragraph similarity, our video-paragraph contrastive loss captures long-term temporal dependencies as follows,

$$L_{\text{video}} = -\sum_{i=1}^{N} \left( \log \frac{\exp (\langle Q_{ii}, S_{ii} \rangle / \tau)}{\sum_{j=1}^{N} \exp (\langle Q_{ij}, S_{ij} \rangle / \tau)} + \log \frac{\exp (\langle Q_{ii}, S_{ii} \rangle / \tau)}{\sum_{j=1}^{N} \exp (\langle Q_{ji}, S_{ji} \rangle / \tau)} \right),$$ (4)

where $S_{ij} \in \mathbb{R}^{n \times m}$ is the clip-caption similarity matrix between video $V_i$ and paragraph $T_j$, $Q_{ij}$ is the corresponding transport assignment of $S_{ij}$, and $\tau$ is a learnable temperature initialized as 0.07. Note that when calculating Eq. (4), we stop the gradient of the transport assignment $Q$ to keep our video-paragraph contrastive loss stable. To ensure the discriminative capacity of the model, we search the nearest videos as hard negative samples, following Xu et al. (2021). By using optimal transport to measure the sequence distance instead of directly modeling the long videos, our method significantly reduces the computational cost; a detailed discussion of training efficiency is provided in Appendix C.

However, the optimal transport objective in Eq. (2) still has some limitations: i) OT estimates the sequence distance based on clip-caption similarity (coarse-grained), leaving the word-frame (fine-grained) misalignment problem unexplored; ii) OT requires that each source instance exactly map to the targets, which is impractical when dealing with a large amount of meaningless text. To address these challenges, we propose a soft-maximum operator for fine-grained alignment and an alignable prompt bucket that filters out meaningless clips and captions for noise-robust distance estimation.

**Fine-grained Alignment.** Most previous works (Xu et al., 2021; Yang et al., 2023b; Han et al., 2022) typically encode frames or words into a global feature using the [CLS] token or by averaging the frame or word embeddings (e.g., AvgPool($\{v_a^i\}_{i=1}^f$)). However, such strategies neglect fine-grained interactions between modalities and do not address the problem of frame-word misalignment. To address this issue, we propose a cross-modal late interaction mechanism to identify crucial words and key frames for fine-grained alignment, inspired by Yao et al. (2022); Wang et al. (2022b). Specifically, we define the fine-grained similarity between clip $v_a$ and caption $t_b$ as follows:

$$[S]_{a,b} = \frac{1}{2} \left( \frac{1}{f} \sum_{i=1}^{f} \alpha \log \left( \sum_{j=1}^{w} \exp \left( \frac{v_a^i \cdot t_b^j}{\alpha} \right) \right) + \frac{1}{w} \sum_{i=1}^{w} \alpha \log \left( \sum_{j=1}^{f} \exp \left( \frac{t_b^i \cdot v_a^j}{\alpha} \right) \right) \right).$$ (5)

Taking the first term as an example: for each frame in the video clip, we identify the most important words through a soft-maximum operation, i.e., the log-sum-exp approximation (Beck & Teboulle, 2012), and then average the soft-maximum similarities over all frames, as shown in Fig. 2. Similarly, the second term of Eq. (5) finds, for each textual token, its related video frames. The parameter $\alpha$ magnifies the importance of the most relevant words or frames; as $\alpha$ approaches 0, the log-sum-exp approximates the maximum. This soft-maximum operation allows us to reduce the negative influence of background words or frames on clip-caption similarity estimation.
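Assuming L2-normalized frame and word embeddings, Eq. (5) can be written compactly as below; the function name is ours and this is only a sketch.

```python
import torch


def fine_grained_similarity(V: torch.Tensor, T: torch.Tensor,
                            alpha: float = 1.0) -> torch.Tensor:
    """Fine-grained clip-caption similarity of Eq. (5).

    V: (f, d) frame embeddings of one clip; T: (w, d) word embeddings of one
    caption. alpha * logsumexp(x / alpha) acts as a soft maximum over the
    other modality's tokens; as alpha -> 0 it approaches the hard maximum.
    """
    sim = V @ T.t()                                    # (f, w) frame-word similarities
    v2t = alpha * torch.logsumexp(sim / alpha, dim=1)  # soft-max over words, per frame
    t2v = alpha * torch.logsumexp(sim / alpha, dim=0)  # soft-max over frames, per word
    return 0.5 * (v2t.mean() + t2v.mean())
```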
Though inspired by Wang et al. (2022b) and Yao et al. (2022), our method differs in several aspects. Firstly, we introduce a straightforward log-sum-exp operator as a soft approximation of the maximum. This allows us to concentrate on multiple crucial words rather than a single one, making it particularly well-suited for video content as opposed to images. Experiments in Table 7 demonstrate that our design yields a substantial improvement compared to solely focusing on the most important item. Secondly, we leverage the estimated clip-caption similarity for sequence alignment, effectively enhancing temporal learning, whereas Wang et al. (2022b) exclusively concentrates on clip-caption alignment.

Alignable Prompt Bucket. Optimal transport requires every source instance to map exactly to the targets. Yet, in real-world scenarios, a significant number of captions and video clips may be noisy or irrelevant and cannot be aligned, i.e., coarse-grained irrelevant misalignments. Motivated by Sarlin et al. (2020), we propose an innovative solution that uses an alignable prompt bucket (APB) to filter out semantically irrelevant clips and captions. As shown in Fig. 2, the prompt bucket consists of one new row and one new column, filled with the same value $p$, appended to the similarity matrix $\mathbf{S}$ such that

$$[\bar{\mathbf{S}}]_{a,m+1} = [\bar{\mathbf{S}}]_{n+1,b} = [\bar{\mathbf{S}}]_{n+1,m+1} = p, \quad [\bar{\mathbf{S}}]_{a,b} = [\mathbf{S}]_{a,b}, \ \forall a \in [1,n], b \in [1,m].$$ (6)

When calculating the transport distance given $\bar{\mathbf{S}}$, each video clip can be aligned either with available captions or with the prompt bucket. Solving Eq. (2) with $\bar{\mathbf{S}}$ in place of $\mathbf{S}$, we obtain the final optimal transport assignment by dropping the last row and column of the padded assignment, i.e., $\mathbf{Q}^* = \bar{\mathbf{Q}}^*_{1:n,1:m}$. From an intuitive viewpoint, the prompt value $p$ in Eq. (6) serves as a similarity margin that distinguishes between alignable and unalignable clips and captions. If a video clip $v_a$ lacks an alignable caption, its pairwise similarities with the set of captions $\{t_b\}_{b=1}^{m}$ are generally small. Consequently, if the margin $p$ is larger than these pairwise similarity values, $v_a$ is forced to align with the prompt bucket and subsequently filtered from the transport assignment. In our implementation, we determine the value of $p$ in a data-driven manner as the bottom 30% similarity of the originally aligned clip-caption pairs.

3.3 Clip-caption alignment via faulty negative exploitation

Since self-supervised contrastive learning (He et al., 2020) relies on the random sampling of negative instances, captions that are semantically similar to the anchor clips can be treated as faulty negatives (Han et al., 2020; Zolfaghari et al., 2021), and vice versa. However, the one-hot target used in existing contrastive learning penalizes all negative predictions regardless of their correlations. To mitigate this issue, we propose to exploit the faulty negatives through optimal transport. Let $\hat{\mathbf{S}} \in \mathbb{R}^{B \times B}$ denote the within-batch clip-caption similarity matrix, where $B$ represents the number of clips/captions for all videos in the batch.
We apply optimal transport on the similarity matrix $\hat{\mathbf{S}}$, $$\max_{\hat{\mathbf{Q}} \in \hat{\mathcal{Q}}} \langle \hat{\mathbf{Q}}, \hat{\mathbf{S}} \rangle + \varepsilon H(\hat{\mathbf{Q}}) \quad \text{s.t.} \quad \hat{\mathbf{Q}} = \left\{ \hat{\mathbf{Q}} \in \mathbb{R}_+^{B \times B} \mid \hat{\mathbf{Q}} \mathbf{1}_B = \frac{1}{B} \mathbf{1}_B, \hat{\mathbf{Q}}^\top \mathbf{1}_B = \frac{1}{B} \mathbf{1}_B \right\},$$ (7) where the transport assignment $\hat{\mathbf{Q}}$ attempts to realign the clips with similar captions (i.e., faulty negatives). After implementing the Sinkhorn algorithm described in Eq. (3), we utilize the clip-wise realigned targets $\hat{\mathbf{Q}}^*$ as additional supervision for contrastive learning, $$L_{\text{clip}} = -\sum_{i=1}^{B} \sum_{j=1}^{B} [T]_{i,j} \left( \log \frac{\exp([\hat{\mathbf{S}}]_{i,j}/\tau)}{\sum_{k=1}^{B} \exp([\hat{\mathbf{S}}]_{i,k}/\tau)} + \log \frac{\exp([\hat{\mathbf{S}}]_{j,i}/\tau)}{\sum_{k=1}^{B} \exp([\hat{\mathbf{S}}]_{k,j}/\tau)} \right), \quad T = (1 - \beta) \mathbf{I}_B + \beta \hat{\mathbf{Q}}^*,$$ (8) where $\beta$ is a weighted parameter that balances the identity target $\mathbf{I}_B$ and realigned targets $\hat{\mathbf{Q}}^*$. By replacing identity matrix $\mathbf{I}_B$ with estimated soft-alignment probabilities, the model can recalibrate the attractive and repulsive forces between clips and captions. Specifically, the entire training batch is treated as a support set (Patrick et al., 2021) with a subset of relevant clips and captions. Our method enables the detection and correction of potential faulty negatives within the set. 4 Experiments We verify the effectiveness of Norton in comprehending both long and short videos across a range of downstream tasks. Additionally, we perform extensive ablation studies to analyze the impact of different design choices on the model’s performance. For comprehensive training details, training efficiency results, and additional experiments please refer to the Appendix. 4.1 Comparisons on Video-paragraph retrieval As the main contribution of this work lies in long-term temporal learning, we first evaluate our method on the video-paragraph retrieval task. The objective of this task is to accurately find the corresponding video using a set of sentence queries that describe different parts of the long video. Table 1: Video-paragraph retrieval on YouCookII (Background Removed). The best and second-best results are **bold** and underlined, respectively. | Approach | Measure | R@1 | R@5 | R@10 | |-------------------|---------|-------|-------|-------| | MIL-NCE (Miech et al., 2020) | Cap. Avg. | 43.1 | 68.6 | 79.1 | | HT100M (Miech et al., 2019) | Cap. Avg. | 46.6 | 74.3 | 83.7 | | MCN (Chen et al., 2021) | Cap. Avg. | 53.4 | 75.0 | 81.4 | | VideoCLIP (Xu et al., 2021) | Cap. Avg. | 74.5 | 94.5 | 97.9 | | TempCLR (Yang et al., 2023b) | Cap. Avg. | 74.5 | 94.6 | 97.0 | | Norton (Ours) | Cap. Avg. | **75.5** | **95.0** | **97.7** | | VideoCLIP (Xu et al., 2021) | DTW | 56.0 | 89.9 | 96.3 | | TempCLR (Yang et al., 2023b) | DTW | 83.5 | 97.2 | 99.3 | | Norton (Ours) | DTW | **88.7** | **98.8** | **99.5** | | VideoCLIP (Xu et al., 2021) | OTAM | 52.8 | 89.2 | 95.0 | | TempCLR (Yang et al., 2023b) | OTAM | 84.9 | 97.9 | 99.3 | | Norton (Ours) | OTAM | **88.9** | **98.4** | **99.5** | Table 2: Video-paragraph retrieval on YouCookII (Background Kept). 
| Approach | Measure | R@1 | R@5 | R@10 |
|-------------------|-----------|-------|-------|-------|
| VideoCLIP | Cap. Avg. | 73.6 | **94.7** | **98.4** |
| TempCLR | Cap. Avg. | 71.7 | 94.5 | 97.9 |
| Norton (Ours) | Cap. Avg. | **74.8** | **94.7** | **98.4** |
| VideoCLIP | DTW | 55.7 | 93.1 | **98.9** |
| TempCLR | DTW | 70.4 | 93.8 | 97.9 |
| Norton (Ours) | DTW | **76.1** | **95.0** | 97.7 |
| VideoCLIP | OTAM | 56.6 | 92.8 | **98.9** |
| TempCLR | OTAM | 72.2 | 94.5 | 97.7 |
| Norton (Ours) | OTAM | **73.6** | **94.7** | **97.7** |

Setup and Metric. We evaluate the zero-shot performance of our method in two different settings, namely Background Removed and Background Kept. The former discards the text-uncorrelated video clips based on the timestamps, while the latter uses the full video. As timestamps may not always be available, paragraph retrieval with background is the more realistic scenario. To provide a comprehensive evaluation, we employ three standard strategies, namely Cap. Avg. (Caption Average), DTW, and OTAM (Ordered Temporal Alignment Module (Cao et al., 2020)). Specifically, Cap. Avg. matches one clip for each caption and retrieves the video with the most matched clips. DTW and OTAM calculate the sequence distance by accumulating the clip-caption distances in chronological order. We report the recall metrics R@1, R@5, and R@10 for all setups. Specifically, R@1 indicates how often the correct prediction is the first result, which is highly desirable in many applications, while R@10 provides a wider scope and may be less critical, as users typically focus on the top few results in practical scenarios.

Datasets. We conduct the evaluation on YouCookII (Zhou et al., 2018), whose testing data consist of 436 videos with 3,350 clip-caption pairs in total. The videos existing in YouCookII have been removed from HowTo100M (Miech et al., 2019), following the same protocol as previous works (Miech et al., 2020; Xu et al., 2021; Yang et al., 2023b).

Results. i) Background Removed: As shown in Table 1, TempCLR (Yang et al., 2023b) performs remarkably better than VideoCLIP (Xu et al., 2021) in terms of DTW and OTAM, as it is trained to explore the global temporal context. However, all these methods suffer from noisy correspondence in the temporal alignment. In contrast, our proposed robust optimal transport framework explicitly overcomes multi-granularity noisy correspondence. Specifically, our method effectively improves the performance of all measurements by a large margin (+1% Cap. Avg., +5.2% DTW, and +4% OTAM in terms of R@1), indicating that our method learns better temporal information. ii) Background Kept: As shown in Table 2, compared with the Background Removed results, the recall of all methods drops, as the irrelevant information in the background can distort the video features. Nevertheless, our proposed method consistently outperforms VideoCLIP and TempCLR, even under such challenging conditions.

4.2 Evaluation on Diverse Downstream Tasks

To verify the generalization of our method, we conduct experiments on four downstream tasks with four datasets, described below.

Text-to-Video retrieval (clip level). This task aims to find the corresponding video clip given a query caption. We use YouCookII (Zhou et al., 2018) and MSR-VTT (Xu et al., 2016) to evaluate the transferability of our method. MSR-VTT (Xu et al., 2016) is a well-known retrieval benchmark containing 10,000 short videos with 20 captions each. Following Xu et al. (2021), we utilize the 1,000 clip-caption test pairs for evaluation.
For YouCookII, we use 3,350 clip-caption pairs as introduced in Section 4.1. As shown in Table 3, our method achieves remarkable improvement over state-of-the-art methods on YouCookII. On MSR-VTT (Table 5), our method shows solid improvements especially about 1.9% R@5 and 1.6% R@10 zero-shot improvement compared with VideoCLIP. After fine-tuning, our method still reaches state-of-the-art R@1. Here we include SupportSet (Patrick et al., 2021) and Frozen (Bain et al., 2021) for completeness, while they use different pre-training data such as 65 million Instagram videos (Ghadiyaram et al., 2019), 2.5 million WebVid videos (Bain et al., 2021) and 3 million Google Conceptual Captions (Sharma et al., 2018). The results in this clip-caption retrieval experiment indicate that our method not only improves the global temporal information (long video retrieval as shown in Section 4.1), but also facilitates clip-level representation learning. ### Table 3: Clip-caption retrieval on YouCookII. | Approach | Feature | R@1 | R@5 | R@10 | |-------------------|-------------|-----|-----|------| | ActBERT (Zhu & Yang, 2020) | R101+Res3D | 9.6 | 26.7 | 38.0 | | MIL-NCE (Miech et al., 2020) | S3D-G | 15.1 | 38.0 | 51.2 | | MCN (Chen et al., 2021) | R152+RX101 | 18.1 | 35.5 | 45.2 | | TACo (Yang et al., 2021a) | S3D-G | 19.9 | 43.2 | 55.7 | | VT-TWINS (Ko et al., 2022) | S3D-G | 9.7 | 27.0 | 38.8 | | MMFT (Shvetsova et al., 2022) | S3D-G | 19.8 | 42.9 | 55.1 | | TAN (Han et al., 2022) | S3D-G | 20.1 | 45.5 | 59.5 | | VideoCLIP (Xu et al., 2021) | S3D-G | 22.7 | 50.4 | 63.1 | | TempCLR (Yang et al., 2023b) | S3D-G | 23.3 | 51.0 | **64.5** | | Norton (Ours) | S3D-G | **24.2** | **51.9** | **64.1** | ### Table 4: Action segmentation on COIN. | Approach | Frame Accuracy | |-------------------|----------------| | VAVA (Liu et al., 2022b) | 47.3 | | ActBERT (Zhu & Yang, 2020) | 57.0 | | Drop-DTW (Dvornik et al., 2021) | 59.6 | | MIL-NCE (Miech et al., 2020) | 61.0 | | ClipBERT (Lei et al., 2021) | 65.4 | | TACo (Yang et al., 2021a) | 68.4 | | VideoCLIP (Xu et al., 2021) | 68.7 | | TempCLR (Yang et al., 2023b) | 68.7 | | Norton (Ours) | **69.8** | ### Table 5: Text-to-video retrieval on MSR-VTT. | Supervised | R@1 | R@5 | R@10 | |------------|-----|-----|------| | SupportSet (Patrick et al., 2021) | 30.1 | 58.5 | 69.3 | | Frozen (Bain et al., 2021) | 31.0 | 59.5 | 70.5 | | MMFT (Shvetsova et al., 2022) | 23.7 | 52.1 | 63.7 | | VideoCLIP (Xu et al., 2021) | 30.9 | 55.4 | **66.8** | | TempCLR (Yang et al., 2023b) | 30.6 | 55.1 | 65.5 | | Norton (Ours) | **31.2** | **55.7** | **66.8** | | Zero-shot | | | | | SupportSet (Patrick et al., 2021) | 8.7 | 23.0 | 31.1 | | Frozen (Bain et al., 2021) | 23.2 | 44.6 | 56.6 | | MIL-NCE (Miech et al., 2020) | 9.9 | 24.0 | 32.4 | | MMFT (Shvetsova et al., 2022) | 9.9 | 24.0 | **32.6** | | VT-TWINS (Ko et al., 2022) | 9.4 | 23.4 | 31.6 | | VideoCLIP (Xu et al., 2021) | 10.4 | 22.2 | 30.0 | | TempCLR (Yang et al., 2023b) | 10.1 | 22.2 | 29.4 | | Norton (Ours) | **10.7** | **24.1** | 31.6 | ### Table 6: VideoQA on MSR-VTT. 
| Supervised | Accuracy |
|------------|----------|
| ETTanque (Kaufman et al., 2017) | 65.5 |
| MLB (Kim et al., 2016) | 76.1 |
| JSFusion (Yu et al., 2018) | 83.4 |
| ActBERT (Zhu & Yang, 2020) | 85.7 |
| ClipBERT (Lei et al., 2021) | 88.2 |
| MERLOT (Zellers et al., 2021) | 90.9 |
| VideoCLIP (Xu et al., 2021) | 92.1 |
| TempCLR (Yang et al., 2023b) | 92.2 |
| Norton (Ours) | **92.7** |
| Zero-shot | |
| VideoCLIP (Xu et al., 2021) | 73.9 |
| TempCLR (Yang et al., 2023b) | 74.4 |
| Norton (Ours) | **77.1** |

**VideoQA.** We conduct the multiple-choice VideoQA experiment on MSR-VTT (Yu et al., 2018). Given a video query and several candidate textual answers (5 on average), the task is to find the candidate that fits the query. As shown in Table 6, our method outperforms its counterparts by +2.7% in terms of zero-shot accuracy and achieves a 0.5% improvement after fine-tuning, showing the superiority of our method.

**Action Segmentation.** This task assumes that each video is associated with various actions. The goal is to determine the specific action for each second, which requires fully exploring the temporal dependencies. We use the long video dataset COIN (Tang et al., 2019) to evaluate the action segmentation performance of our method. COIN contains 11,827 videos (476 hours) in total, where each video is labeled with 3.91 action segments on average, drawn from 778 candidate segment labels. Following Xu et al. (2021), we apply a one-layer classification head on top of the visual encoder to classify the action label. We report the frame-wise accuracy using the evaluation protocol of Xu et al. (2021); Miech et al. (2020). As shown in Table 4, our method outperforms all baselines.

### Table 7: Ablation experiments evaluated on YouCookII, where "Clip" is short for clip-caption retrieval, "Video" for video-paragraph retrieval, "B" for video backgrounds, and "FNE" for faulty negative exploitation. We report the DTW measurement for video-paragraph retrieval.

| Model | FNE | Soft-max α | APB p | Clip R@1 | Clip R@5 | Video (w/o B) R@1 | Video (w/o B) R@5 | Video (w B) R@1 | Video (w B) R@5 |
|-------|-----|------------|-------|------|------|------|------|------|------|
| VideoCLIP (Xu et al., 2021) | – | – | – | 22.7 | 50.4 | 56.0 | 89.9 | 55.7 | 93.1 |
| TempCLR (Yang et al., 2023b) | – | – | – | 23.3 | 51.0 | 83.5 | 97.2 | 70.4 | 93.8 |
| A (w/o \(L_{video}\)) | – | – | – | 22.8 | 50.1 | 56.7 | 89.0 | 56.4 | 91.8 |
| B (w/o \(L_{video}\)) | ✓ | – | – | 23.4 | 50.8 | 63.3 | 93.3 | 65.1 | 92.4 |
| C | ✓ | Mean average | – | 23.1 | 50.1 | 84.2 | 97.3 | 74.3 | 94.7 |
| D | ✓ | (Yao et al., 2022) | – | 23.5 | 50.5 | 86.9 | 98.6 | 74.1 | 94.6 |
| E | ✓ | 0.1 | – | 23.8 | 51.7 | 88.1 | 98.6 | 74.2 | 94.7 |
| F | ✓ | 0.2 | – | 24.0 | 51.8 | 88.2 | 98.6 | 74.9 | 94.4 |
| G | ✓ | 1 | – | 24.0 | 51.8 | 88.4 | 98.8 | 75.2 | 94.7 |
| H | ✓ | 1 | 10% | 24.2 | 51.8 | 88.4 | 98.8 | 75.9 | 94.9 |
| I | ✓ | 1 | 50% | 24.2 | 51.9 | 88.4 | 98.6 | 75.9 | 94.9 |
| J (Norton) | ✓ | 1 | 30% | 24.2 | 51.9 | 88.7 | 98.8 | 76.1 | 95.0 |

### 4.3 Ablation Study on the Proposed Methods

In this section, we investigate the effects of our design choices and discuss the results in Table 7.

**Effect of Faulty Negative Exploitation.** In model-\{A,B\}, we tackle the issue of faulty negatives in clip-caption contrastive learning through the correction of optimal transport (enabled in model-B but not in model-A). This strategy not only improves the performance of clip-caption retrieval but also enhances the temporal ability.
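Since several ablation variants above hinge on the optimal-transport machinery (the correction used for faulty negative exploitation, and the sequence distance of model-C), a minimal NumPy sketch of entropic optimal transport via Sinkhorn iterations (Cuturi, 2013) may help make the mechanism concrete. This is an illustrative simplification under our own naming, not the released Norton implementation:

```python
import numpy as np

def sinkhorn(sim, eps=0.1, n_iters=50):
    """Entropic optimal transport (Cuturi, 2013) on a clip-caption
    similarity matrix of shape (n_clips, n_captions), with uniform
    marginals. Returns a soft transport (alignment) plan."""
    K = np.exp(sim / eps)                            # Gibbs kernel
    a = np.full(sim.shape[0], 1.0 / sim.shape[0])    # clip marginal
    b = np.full(sim.shape[1], 1.0 / sim.shape[1])    # caption marginal
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)    # scale columns to match caption marginal
        u = a / (K @ v)      # scale rows to match clip marginal
    return u[:, None] * K * v[None, :]               # diag(u) K diag(v)

# Toy case: clip 0 is highly similar to caption 1 (a "faulty negative"
# under naive diagonal supervision); the transport plan assigns it mass
# instead of forcing it apart as a hard negative.
sim = np.array([[0.9, 0.8, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.2, 0.8]])
print(sinkhorn(sim).round(3))
```

The resulting plan softly re-assigns mass to semantically matched but unaligned clip-caption pairs, which is the intuition behind correcting faulty negatives rather than penalizing them.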
**Effect of OT in Temporal Learning.** In model-C, we utilize vanilla optimal transport to measure the distance between sequences, where the clip/caption representation is obtained by averaging the frame/word embeddings. As shown, model-C achieves comparable performance to TempCLR and even outperforms TempCLR in retrieval tasks involving backgrounds.

**Effect of Fine-grained Alignment.** In model-\{D,E,F,G\}, we investigate the effect of fine-grained alignment by varying the weight of the log-sum-exp approximation. We also compare our approach with Yao et al. (2022), which selects the most important token for fine-grained alignment. The comparison demonstrates that our strategy outperforms Yao et al. (2022), supporting our claim that focusing on the more crucial words/frames yields better fine-grained measurements in video understanding. When the weight \( \alpha \) tends towards 0, the log-sum-exp approximation approaches the maximum, resulting in the selection of only the single most relevant word/frame. The comparison between model-\{E,F,G\} shows that a larger \( \alpha \) leads to better performance, indicating that softly attending to several important tokens is preferable to selecting only the single most relevant one.

**Effect of Alignable Prompt Bucket.** In model-\{H,I,J\}, we integrate the prompt bucket into the optimal transport framework and vary the value of \( p \) to be the bottom 10%, 30%, and 50% similarity between the original aligned clips and captions. We observe that the use of APB results in a clear performance improvement for video-paragraph retrieval with background, and setting the value of \( p \) to the bottom 30% similarity is an effective choice.

### 5 Conclusion

Learning temporal correlations in long-form videos is prohibitively expensive in terms of the hardware required. To address this, we propose Norton, a noise-robust temporal optimal transport approach to estimate the sequence distance that can be easily extended and scaled to larger datasets with minimal computational cost. Notably, our unified optimal transport solution resolves the noisy correspondence problem at both the frame-word and clip-caption levels. Extensive experiments demonstrate that our method not only captures long-term temporal dependencies but also facilitates clip-level representation learning. In the future, we plan to extend our method to address noisy correspondence for more modalities, as videos typically include visual, textual, and audio content.

ACKNOWLEDGMENTS

This work was supported in part by NSFC under Grant U21B2040, 62176171; and in part by the Fundamental Research Funds for the Central Universities under Grant CJ202303.

REFERENCES

Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1728–1738, 2021.

Amir Beck and Marc Teboulle. Smoothing and first order methods: A unified framework. SIAM Journal on Optimization, 22(2):557–580, 2012.

Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In International Conference on Machine Learning, volume 2, pp. 4, 2021.

Kaidi Cao, Jingwei Ji, Zhangjie Cao, Chien-Yi Chang, and Juan Carlos Niebles. Few-shot video classification via temporal alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10618–10627, 2020.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin.
Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020.

Brian Chen, Andrew Rouditchenko, Kevin Duarte, Hilde Kuehne, Samuel Thomas, Angie Boggust, Rameswar Panda, Brian Kingsbury, Rogerio Feris, David Harwath, et al. Multimodal clustering networks for self-supervised learning from unlabeled videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8012–8021, 2021.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597–1607. PMLR, 2020.

Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. Advances in Neural Information Processing Systems, 33:8765–8775, 2020.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.

Marco Cuturi and Mathieu Blondel. Soft-dtw: a differentiable loss function for time-series. In International Conference on Machine Learning, pp. 894–903. PMLR, 2017.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Nikita Dvornik, Isma Hadji, Konstantinos G Derpanis, Animesh Garg, and Allan Jepson. Drop-dtw: Aligning common signal between sequences while dropping outliers. Advances in Neural Information Processing Systems, 34:13782–13793, 2021.

Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6202–6211, 2019.

Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, and Lili Zhao. Clip2tv: Align, match and distill for video-text retrieval. arXiv preprint arXiv:2111.05610, 2021.

Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xiaohu Qie, and Ping Luo. Bridging video-text retrieval with multiple choice questions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16167–16176, 2022.
KkrDUGIASk
The paper mentions a new-agent privacy issue. I assume that late participation will require the new agent to access the fused feature to perform the update according to the equations in Section 4.3. Will the fused feature leak private information about the old agents to the new agent?
AN EXTENSIBLE FRAMEWORK FOR OPEN HETEROGENEOUS COLLABORATIVE PERCEPTION

Yifan Lu\textsuperscript{1,4}, Yue Hu\textsuperscript{1,4}, Yiqi Zhong\textsuperscript{2}, Dequan Wang\textsuperscript{1,3}, Yanfeng Wang\textsuperscript{1,3}, Siheng Chen\textsuperscript{1,3,4}\textsuperscript{✉}, \textsuperscript{1} Shanghai Jiao Tong University, \textsuperscript{2} University of Southern California, \textsuperscript{3} Shanghai AI Lab \textsuperscript{4} Multi-Agent Governance & Intelligence Crew (MAGIC) \textsuperscript{1} \{yifan.lu, 18671129361, dequanwang, wangyanfeng, sihengc\}@sjtu.edu.cn \textsuperscript{2} yiqizhon@usc.edu

ABSTRACT

Collaborative perception aims to mitigate the limitations of single-agent perception, such as occlusions, by facilitating data exchange among multiple agents. However, most current works consider a homogeneous scenario where all agents use identical sensors and perception models. In reality, heterogeneous agent types may continually emerge and inevitably face a domain gap when collaborating with existing agents. In this paper, we introduce a new open heterogeneous problem: how to accommodate continually emerging new heterogeneous agent types into collaborative perception, while ensuring high perception performance and low integration cost? To address this problem, we propose HEterogeneous ALliance (HEAL), a novel extensible collaborative perception framework. HEAL first establishes a unified feature space with initial agents via a novel multi-scale foreground-aware Pyramid Fusion network. When heterogeneous new agents emerge with previously unseen modalities or models, we align them to the established unified space with an innovative backward alignment. This step only involves individual training on the new agent type, thus presenting extremely low training costs and high extensibility. To enrich agents' data heterogeneity, we introduce OPV2V-H, a new large-scale dataset with more diverse sensor types. Extensive experiments on the OPV2V-H and DAIR-V2X datasets show that HEAL surpasses SOTA methods in performance while reducing the training parameters by 91.5% when integrating 3 new agent types. We further implement a comprehensive codebase at: https://github.com/yifanlu0227/HEAL

1 INTRODUCTION

Multi-agent collaborative perception promotes better and more holistic perception by enabling multiple agents to share complementary perceptual information with each other (Wang et al., 2020; Xu et al., 2022c; Li et al., 2021). This task can fundamentally overcome several long-standing issues in single-agent perception, such as occlusion (Wang et al., 2020). The related methods and systems have tremendous potential in many applications, including multi-UAVs (unmanned aerial vehicles) for search and rescue (Hu et al., 2022), multi-robot automation and mapping (Carpin, 2008), and vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) collaboration. In this emerging field, most current works (Lu et al., 2023; Lei et al., 2022) make a plausible, yet oversimplified assumption: all the agents have to be homogeneous; that is, all agents' perception systems use the same sensor modality and share the same detection model. However, in the real world, the modalities and models of agents are likely to be heterogeneous, and new agent types may continuously emerge.
Due to the rapid iteration of sensor technologies and perception algorithms, coupled with the various attitudes of agent owners (like autonomous driving companies) towards collaborative perception, it is inherently challenging to definitively determine all agent types from the outset. When a heterogeneous agent, which has never appeared in the training set, wishes to join the collaboration, it inevitably encounters a domain gap with the existing agents. This gap substantially impedes its capability to fuse features with the existing collaborative agents and markedly limits the extensibility of the collaborative perception. Thus, the problem of open heterogeneous collaborative perception arises: how to accommodate continually emerging new agent types into the existing collaborative perception while ensuring high perception performance and low integration cost?

Figure 1: (a) homogeneous setting, where agents have identical modality and model. (b) heterogeneous setting, where agents' modalities and models are distinct but pre-determined. (c) Open heterogeneous setting, where new types of agents want to join collaboration with previously unseen modalities or models. (d) HEAL holds the SOTA performance while minimizing the training cost (model parameters here) when integrating a new agent type. The bullseye represents the best.

The designation open heterogeneous underscores the unpredictable nature of the incoming agent's modality and model; see Figure 1 for an illustration. To address this issue, one viable solution is late fusion. By fusing each agent's detection outputs, late fusion bypasses the heterogeneity among new agents and existing agents. However, its performance is suboptimal and has been shown to be particularly vulnerable to localization noise (Lu et al., 2023) and communication latency (Wang et al., 2020). Another potential approach is fully collective training like HM-ViT (Xiang et al., 2023), which aggregates all agent types in training to fill domain gaps. However, this approach requires retraining the entire model every time a new agent type is introduced, which becomes increasingly expensive as new agent types continuously emerge.

To address this open heterogeneous collaborative perception problem, we propose HEterogeneous ALliance (HEAL), a novel extensible framework that integrates new agent types into collaboration with ultra-low costs. The core idea is to sustain a unified feature space for multi-agent collaboration and ensure new agent types align their features to it. HEAL has two training phases: collaboration base training and new agent type training. In the first phase, HEAL sets initial agents as the collaboration base and undertakes collective end-to-end training to create a robust unified feature space for all agents. It uses the innovative Pyramid Fusion, a multi-scale and foreground-aware network, to fuse features and promote the learning of the unified space. In the next phase, when agents with a new heterogeneous type aim to join the collaboration, HEAL designs a novel backward alignment mechanism for their individual training. The inherited Pyramid Fusion module acts as new agents' detection back-end, with only new agents' front-end encoders updated. This prompts new agents to align their features with the unified feature space. Such individual training eliminates the high costs associated with collective retraining when adding new agent types, yielding extremely low model size, FLOPs, training time, and memory consumption.
Further, it preserves the new agents' model and data privacy. As the backward alignment can be conducted locally, it protects new agents' model details and allows agent owners to use their sensor data for training, significantly addressing automotive companies' data- and model-privacy concerns. Once the training is complete, all agents of the new type can join the alliance with feature-level collaboration. By repeating the second phase, the alliance can continuously incorporate new types of agents as they emerge.

To evaluate HEAL and further promote open heterogeneous collaborative perception, we propose a large-scale heterogeneous collaborative perception dataset, OPV2V-H, which supplements more sensor types based on the existing OPV2V (Xu et al., 2022c). Extensive experiments on OPV2V-H and the real-world DAIR-V2X dataset (Yu et al., 2022) show HEAL's remarkable performance. In the experiment of successively adding 3 types of heterogeneous agents, HEAL outperforms the other methods in collaborative detection performance while reducing 91.5% of the training parameters compared with SOTA. We summarize our contributions as follows:

• In considering the scenario of continually emerging new heterogeneous agents, we present HEAL, the first extensible heterogeneous collaborative perception framework. HEAL ensures extensibility by establishing a unified feature space and aligning new agents to it.

• We propose a powerful Pyramid Fusion for the collaboration base training, which utilizes multiscale and foreground-aware designs to sustain a potent unified feature space. To integrate new types of agents, we introduce a novel backward alignment mechanism to align heterogeneous agents to the unified space. This training is conducted locally on single agents, reducing training costs while also preserving model details.

• We propose a new dataset OPV2V-H to facilitate the research of heterogeneous collaborative perception. Extensive experiments on OPV2V-H and the real-world DAIR-V2X dataset demonstrate HEAL's SOTA performance and ultra-low training expenses.

2 RELATED WORKS

2.1 COLLABORATIVE PERCEPTION

The exchange of perception data among agents enables collaborative agents to achieve a more comprehensive perceptual outcome (Wang et al., 2020; Yu et al., 2022; Li et al., 2021; Hu et al., 2023; Wei et al., 2023; Liu et al., 2020a; Li et al., 2023). Early techniques transmitted either raw sensory data (known as early fusion) or perception outputs (known as late fusion). However, recent studies have explored the transmission of intermediate features to balance performance and bandwidth. To boost the research of multi-agent collaborative perception, V2X-Sim (Li et al., 2022c) and OPV2V (Xu et al., 2022c) generated high-quality simulation datasets, and DAIR-V2X (Yu et al., 2022) collects real-world data. To achieve an effective trade-off between perception performance and communication costs, Who2com (Liu et al., 2020b), When2com (Liu et al., 2020a) and Where2comm (Hu et al., 2022) select the most critical message to communicate. To resist pose errors, Vadivelu et al. (2021) and Lu et al. (2023) use learnable or mathematical methods to correct the pose errors. Collaborative perception can also directly help driving planning and control tasks (Chen & Krähenbühl, 2022; Cui et al., 2022; Zhu et al., 2023) with more accurate perceptual results. Most papers assume that agents are given the same sensor modality and model, which is deemed impractical in the real world.
A contemporaneous work addressing agents' modality heterogeneity is HM-ViT (Xiang et al., 2023), but it neglects the framework's extensibility and requires retraining the whole model when adding new agent types. In this paper, we address the issues of heterogeneity and extensibility together.

2.2 MULTI-MODALITY FUSION

In the field of 3D object detection, the fusion of LiDAR and camera data has demonstrated promising results (Chen et al., 2022a; Li et al., 2022d; Yang et al., 2022; Bai et al., 2022; Borse et al., 2023; Li et al., 2022b; Xu et al., 2022d). LiDAR-to-camera (Ma et al., 2019) methods project LiDAR points to camera planes. By using this technique, a sparse depth map can be generated and combined with image data. Camera-to-LiDAR (Vora et al., 2020; Li et al., 2022d) methods decorate LiDAR points with the color and texture information retrieved from images. Bird's eye view (BEV) (Liu et al., 2022a; Li et al., 2022a; Borse et al., 2023; Chen et al., 2022b) provides a spatially unified representation for different modalities to perform feature fusion. As stated in (Xiang et al., 2023), in single-agent multi-modality settings, sensor types/numbers and their relative poses are fixed, and most LiDAR-camera fusion algorithms (Chen et al., 2022a; Li et al., 2022d; Yang et al., 2022) are developed based on this fact. In contrast, heterogeneous multi-agent collaboration has random sensor positions and types, differing fundamentally from single-vehicle multi-modality fusion. However, the aforementioned BEV representation is a highly efficacious approach to facilitate spatial alignment among agents of diverse modalities and models.

3 OPEN HETEROGENEOUS COLLABORATIVE PERCEPTION

Multi-agent collaborative perception allows a group of agents to collectively perceive the whole environment, exchanging complementary perceptual information with each other. Within this domain, open heterogeneous collaborative perception considers the scenario where new agent types with unseen sensor modalities or perception models can be continually added to the existing collaborative system. Without loss of generality, consider $N$ homogeneous agents in the scene initially. These agents, uniformly equipped with identical sensors and perception models, have the capability to observe, communicate, and compute, thereby fostering a homogeneous collaborative network. Subsequently, some new types of agents with previously unseen sensor modalities or models are introduced into the scene sequentially to join the collaboration. Such dynamism characterizes deploying collaborative perception in the real world: agent types will not be fully determined at the beginning, and the number of types is likely to increase over time. This setting highly differs from traditional heterogeneous frameworks, where agent types are predetermined and fixed.

To tackle open heterogeneous collaborative perception, a straightforward approach is to conduct training for both existing and new agents collectively. However, this approach is computationally expensive with repetitive integrations over time. Therefore, an effective solution must balance two primary goals: i) minimizing the training overhead associated with each integration, and ii) maximizing perception performance across all agent types post-integration. The pursuit of these twin goals is fundamental to the successful implementation of open heterogeneous collaborative perception in diverse and dynamically changing real-world scenarios.
4 HETEROGENEOUS ALLIANCE (HEAL)

Figure 2: Overview of HEAL. (i) We train the initial homogeneous agents (collaboration base) with our novel Pyramid Fusion to establish a unified feature space; (ii) We leverage the well-trained Pyramid Fusion and detection head as the new agents' detection back-end. With the back-end fixed, it pushes the encoder to align its features within the unified feature space. This step is performed on the new agent type only, presenting extremely low training costs. (iii) New agents join the collaboration.

To address the open heterogeneous collaborative perception problem, we propose HEterogeneous ALliance (HEAL). It is an extensible framework to seamlessly integrate new agent types into the existing collaborative network with both minimal training overhead and optimal performance. As shown in Figure 2, HEAL includes two phases to realize a growing alliance: i) collaboration base training, which allows initial agents to collaborate at the feature level and create a unified feature space; and ii) new agent type training, which aligns new agents' features with the previously established unified feature space for collaboration. For every integration of a new agent type, only the second phase is required. We now elaborate on each training phase in the following subsections.

4.1 COLLABORATION BASE TRAINING

In this phase, we designate the initial homogeneous agents as our collaboration base and train a feature-level collaborative perception network. Let $S_{[b]}$ be the set of $N$ agents with the same base agent type $b$. For the $i$th agent in the set $S_{[b]}$, we denote $O_i$ as its observation, $f_{\text{encoder}[b]}(\cdot)$ as its perception encoder and $B_i$ as its final detection output. Then, the collaborative perception network of the $i$th agent works as follows:

$$F_i = f_{\text{encoder}[b]}(O_i), \quad i \in S_{[b]} \quad \triangleright \text{Feature Encoding} \quad (1a)$$

$$F_{j \rightarrow i} = \Gamma_{j \rightarrow i}(F_j), \quad j \in S_{[b]} \quad \triangleright \text{Message Transmission} \quad (1b)$$

$$H_i = f_{\text{pyramid\_fusion}}(\{F_{j \rightarrow i}\}_{j \in S_{[b]}}), \quad \triangleright \text{Feature Fusion} \quad (1c)$$

$$B_i = f_{\text{head}}(H_i), \quad \triangleright \text{Decoding Feature} \quad (1d)$$

where $F_i$ is the initial feature map from the encoder with BEV representation, $\Gamma_{j \rightarrow i}(\cdot)$ is an operator that transmits $j$th agent's feature to the $i$th agent and performs spatial transformation, $F_{j \rightarrow i}$ is the spatially aligned BEV feature in $i$th's coordinate (note that $F_{i \rightarrow i} = F_i$), $H_i$ is the fused feature and $B_i$ is the final detection output obtained by a detection head $f_{\text{head}}(\cdot)$.

Figure 3: Pyramid Fusion uses multiscale and foreground-aware designs to fuse features and create a robust unified feature space. Foreground estimators produce foreground possibility maps at each BEV position. These foreground maps are then normalized to weights for feature summation. Foreground maps are subject to supervision during training. Blue and green represent different agents.

To provide a well-established feature space for multi-agent collaboration, we propose a novel Pyramid Fusion $f_{\text{pyramid\_fusion}}(\cdot)$ in Step (1c), designed with a multiscale and foreground-aware structure.
Let $F^{(0)}_{j \rightarrow i} = F_{j \rightarrow i}$; we elaborate $f_{\text{pyramid\_fusion}}(\cdot)$ in detail:

$$ F^{(\ell)}_{j \rightarrow i} = R_\ell(F^{(\ell-1)}_{j \rightarrow i}), \quad j \in S_{[b]}, \text{ and } \ell = 1, 2, \cdots, L, \tag{2a} $$

$$ S^{(\ell)}_{j \rightarrow i} = H_\ell(F^{(\ell)}_{j \rightarrow i}), \quad j \in S_{[b]}, \text{ and } \ell = 1, 2, \cdots, L, \tag{2b} $$

$$ W^{(\ell)}_{j \rightarrow i} = \text{softmax}(S^{(\ell)}_{j \rightarrow i}), \quad j \in S_{[b]}, \text{ and } \ell = 1, 2, \cdots, L, \tag{2c} $$

$$ F^{(\ell)}_i = \sum\{F^{(\ell)}_{j \rightarrow i} * W^{(\ell)}_{j \rightarrow i}\}_{j \in S_{[b]}}, \tag{2d} $$

$$ H_i = \text{concat}([u_1(F^{(1)}_i), u_2(F^{(2)}_i), \cdots, u_L(F^{(L)}_i)]), \tag{2e} $$

where $\ell$ indicates the scale, $R_\ell(\cdot)$ is the $\ell$th ResNeXt (Xie et al., 2017) layer with a downsampling rate of 2, $F^{(\ell)}_{j \rightarrow i}$ is the encoded feature at the $\ell$th scale; $H_\ell(\cdot)$ is the $\ell$th foreground estimator that outputs the foreground map $S^{(\ell)}_{j \rightarrow i}$, measuring the possibility of a foreground object at each BEV position; softmax($\cdot$) normalizes the foreground possibility to the weights for multi-agent feature fusion $W^{(\ell)}_{j \rightarrow i}$; and $u_\ell(\cdot)$ is an upsampling operator for the $\ell$th scale.

The proposed Pyramid Fusion facilitates the establishment of a unified feature space for multi-agent collaboration by harnessing two key designs: a multi-scale structure and foreground awareness. First, the multi-scale ResNeXt layers create a comprehensive unified feature space by fusing features at varying BEV scales. This not only promotes feature fusion in the collaboration base, but also ensures adaptability for future new agents, allowing their alignment to this unified feature space at both coarse and fine granularity. Furthermore, fusing at higher scales mitigates the discretization errors introduced by spatial transformation, thereby enhancing the robustness of multi-agent feature fusion and future alignment. Second, the foreground-awareness design leverages foreground estimators to obtain foreground maps, which guide Pyramid Fusion to select the perceptually critical features for fusion. It also enables the model to learn how to differentiate between the foreground and the background, leading to a more robust unified space.

To train the collaborative perception model for the collaboration base, the overall loss is:

$$ L = L_{\text{det}}(B_i, Y_i) + \sum_{\ell=1}^{L} \sum_{j \in S_{[b]}} \alpha_\ell L_{\text{focal}}(S^{(\ell)}_{j \rightarrow i}, Y^{(\ell)}_{j \rightarrow i}). \tag{3} $$

The first term is detection supervision, where $L_{\text{det}}(\cdot)$ is the detection loss, including focal loss (Lin et al., 2017) for classification and Smooth-$L_1$ loss (Girshick, 2015) for regression, $Y_i$ is the ground-truth detections and $B_i$ is the detection output by our model. In addition to the supervision on the collaborative detection, we design the foreground map supervision at multiple BEV scales, where $L_{\text{focal}}$ refers to focal loss (Lin et al., 2017), $S^{(\ell)}_{j \rightarrow i}$ is the estimated foreground map from Step (2b), and \( Y_{j \rightarrow i}^{(\ell)} \) is the ground-truth BEV mask of foreground objects for each agent. The hyperparameter \( \alpha_\ell \) controls the effect of foreground supervision at various scales.
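For concreteness, the following is a minimal PyTorch sketch of the Pyramid Fusion forward pass in Eqs. (2a)-(2e), under simplifying assumptions: plain stride-2 convolutions stand in for the ResNeXt layers $R_\ell$, and all module and variable names are ours rather than from the released HEAL code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidFusion(nn.Module):
    """Sketch of Eqs. (2a)-(2e): multiscale, foreground-aware fusion of
    spatially aligned BEV features coming from several agents."""
    def __init__(self, dims=(64, 128, 256)):
        super().__init__()
        in_dims = (dims[0],) + dims[:-1]
        # R_l: stride-2 conv blocks standing in for the ResNeXt layers
        self.encoders = nn.ModuleList(
            nn.Conv2d(i, o, 3, stride=2, padding=1) for i, o in zip(in_dims, dims))
        # H_l: 1x1-conv foreground estimators, one per scale
        self.fg_estimators = nn.ModuleList(nn.Conv2d(d, 1, 1) for d in dims)

    def forward(self, feats):            # feats: list of (B, C, H, W), one per agent
        fused_scales, fg_maps = [], []
        for enc, fg in zip(self.encoders, self.fg_estimators):
            feats = [enc(f) for f in feats]                 # Eq. (2a)
            scores = [fg(f) for f in feats]                 # Eq. (2b)
            w = torch.softmax(torch.stack(scores), dim=0)   # Eq. (2c): over agents
            fused = sum(f * wi for f, wi in zip(feats, w))  # Eq. (2d)
            fused_scales.append(fused)
            fg_maps.append(scores)       # supervised by the focal loss in Eq. (3)
        size = fused_scales[0].shape[-2:]
        up = [F.interpolate(h, size=size, mode='bilinear', align_corners=False)
              for h in fused_scales]
        return torch.cat(up, dim=1), fg_maps                # Eq. (2e)

fusion = PyramidFusion()
bev = [torch.randn(1, 64, 128, 256) for _ in range(2)]      # two aligned agents
H, fg = fusion(bev)
print(H.shape)  # (1, 64+128+256, 64, 128)
```

Note how the softmax over the agent dimension makes the per-scale fusion weights depend on the estimated foreground probability, which is the foreground-aware selection described above.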
### 4.2 New Agent Type Training

Now we consider the integration of a new heterogeneous agent type, leveraging a novel backward alignment. The core idea is to utilize the Pyramid Fusion module and detection head from the previous phase as the new agents' single-agent detection back-end and only update the front-end encoder module, prompting the encoder to generate features within the pre-established unified feature space.

Specifically, we denote the new agent type as \( n_1 \) and the new agent set as \( S[n_1] \), and the full set of current agents becomes \( S = S[b] \cup S[n_1] \). For agent \( k \) in the agent set \( S[n_1] \), we define \( O_k \) as its observation and \( f_{\text{encoder}[n_1]}(\cdot) \) as its detector encoder. We keep \( f^*_{\text{pyramid\_fusion}}(\cdot) \) and \( f^*_{\text{head}}(\cdot) \) from the previous stage unchanged, where \( * \) denotes fixed, and train \( f_{\text{encoder}[n_1]}(\cdot) \) on single agents:

\[ F_k = f_{\text{encoder}[n_1]}(O_k), \quad k \in S[n_1]; \tag{4a} \]

\[ F'_k = f^*_{\text{pyramid\_fusion}}(F_k), \quad k \in S[n_1]; \tag{4b} \]

\[ B_k = f^*_{\text{head}}(F'_k), \quad k \in S[n_1]; \tag{4c} \]

where \( F_k \) is the feature encoded from the new sensor and model, \( F'_k \) is the feature encoded by the Pyramid Fusion module, and \( B_k \) denotes the corresponding detections. Note that here we perform individual training for single agents; thus the input to the Pyramid Fusion is the single-agent feature map \( F_k \) in (4b) instead of the multi-agent feature maps \( \{F_{j \rightarrow i}\}_{j \in S[b]} \) as in (1c). With the pretrained Pyramid Fusion module and the detection head established as the back-end and fixed, the training process naturally evolves into adapting the front-end encoder \( f_{\text{encoder}[n_1]}(\cdot) \) to the back-end's parameters, thereby enabling new agent types to align with the unified space.

Our backward alignment works well for two reasons. First, the BEV representation offers a shared coordinate system for varied sensors and models. Second, with the intrinsic design of Pyramid Fusion, feature domain alignment can be conducted with high efficiency: i) The alignment is performed across multiple scales, capturing and bridging the plausible feature scale differences between different modalities and models. ii) The foreground estimators are also retained, thereby preserving effective supervision on the alignment with the most important foreground features.

In addition to enabling new and existing agents to collaborate at the feature level with robust performance, our backward alignment also shows a unique advantage: training is only conducted on single agents of the new type. This significantly reduces the training costs of each integration, avoiding the collection of spatio-temporally synchronized sensor data for multi-agent collective training and expensive retraining. Further, it prevents the new agents' model details from disclosure and allows the owners of the new agents to use their own sensor data. This would remarkably address many privacy concerns that automotive companies might have when deploying V2V techniques.

To supervise the training of the new agent type with backward alignment, the loss is the same as Eq. (3), except that the detection bounding boxes \( B_i \) and ground-truth bounding boxes \( Y_i \) now belong to the single agents. The supervision on the confidence scores at different scales is also preserved.
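A hedged sketch of one backward-alignment training step of Eqs. (4a)-(4c) is given below, reusing the `PyramidFusion` interface from the previous sketch. Module and function names are placeholders, the foreground-map supervision of Eq. (3) is omitted for brevity, and the actual HEAL training script may differ:

```python
import torch

def backward_alignment_step(new_encoder, pyramid_fusion, det_head,
                            detection_loss, batch, optimizer):
    """One step of Sec. 4.2: the inherited back-end (the starred modules
    in Eqs. (4b)-(4c)) stays frozen; only the new type's encoder learns
    to emit BEV features that land in the unified feature space."""
    for module in (pyramid_fusion, det_head):       # freeze the back-end
        for p in module.parameters():
            p.requires_grad_(False)

    obs, targets = batch                 # single-agent data of the new type
    feat = new_encoder(obs)              # Eq. (4a): F_k
    fused, _ = pyramid_fusion([feat])    # Eq. (4b): single-agent input only
    preds = det_head(fused)              # Eq. (4c): B_k
    loss = detection_loss(preds, targets)   # detection part of Eq. (3)

    optimizer.zero_grad()                # optimizer is assumed to be built
    loss.backward()                      # over new_encoder.parameters() only
    optimizer.step()
    return loss.item()
```

Because no gradient reaches the frozen back-end, this loop can run locally on single agents of the new type, which is what keeps the integration cost and the privacy footprint low.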
### 4.3 HEAL during Inference

Once the new agent type \( n_1 \) has trained its encoder, all agents of new agent type \( n_1 \) can collaborate with base agents in the scene. Mathematically, for Agent \( i \) in the set \( S[b] \cup S[n_1] \), its feature after multi-agent fusion is obtained as \( H_i = f^*_{\text{pyramid\_fusion}}\left(\{\Gamma_{j \rightarrow i}(F_j)\}_{j \in S[b] \cup S[n_1]}\right) \). This is feasible because the two training phases ensure that for all \( i, j \), \( F_j \) lies in the same feature space. Following this, we can continually integrate emerging heterogeneous agents into our alliance by revisiting the steps outlined in Sec. 4.2, creating a highly extensible and expansive heterogeneous alliance. Assuming there are a total of \( T \) new agent types, once training for each of these new agent types is completed, the collaboration among all heterogeneous agents can be written as:

\[ H_i = f^*_{\text{pyramid\_fusion}}\left(\{\Gamma_{j \rightarrow i}(F_j)\}_{j \in S[b] \cup S[n_1] \cup S[n_2] \cup \cdots \cup S[n_T]}\right). \]

Then, we can decode the feature and obtain the final detections \( B_i = f^*_{\text{head}}(H_i) \).

5 EXPERIMENTAL RESULTS

5.1 DATASETS

**OPV2V-H.** We propose a simulation dataset dubbed OPV2V-H. In the OPV2V (Xu et al., 2022c) dataset, the 64-channel LiDAR shows a significant detection advantage over the camera modality. Evaluations of heterogeneous collaboration on OPV2V may not truly represent the collaboration performance between these two modalities, since LiDAR agents can provide most detections (Xiang et al., 2023). For this purpose, we collected more data to bridge the gap between the LiDAR and camera modalities, leading to the new OPV2V-H dataset. The OPV2V-H dataset has on average approximately 3 agents, with a minimum of 2 and a maximum of 7, in each frame. In addition to the one 64-channel LiDAR and four RGB cameras (resolution 800×600) of each agent from the original OPV2V dataset, OPV2V-H collects extra 16- and 32-channel LiDAR data and data from 4 depth cameras.

**DAIR-V2X.** DAIR-V2X (Yu et al., 2022) is a real-world collaborative perception dataset. The dataset has 9K frames featuring one vehicle and one roadside unit (RSU), both equipped with a LiDAR and a 1920×1080 camera. The RSU's LiDAR is 300-channel while the vehicle's is 40-channel.

5.2 OPEN HETEROGENEOUS SETTINGS

We consider the sequential integration of new heterogeneous agents into the collaborative system to evaluate the collaborative performance and training costs of HEAL. We prepared four agent types, including 2 LiDAR models and 2 camera models; see Table 1.

| Agent Type | Agent Sensor and Model Setup |
|------------|-----------------------------|
| $L_P^{(64)}$ | LiDAR of x-channel, PointPillars (Lang et al., 2019) |
| $C_E^{(384)}$ | Camera, resize img. to height $x_{px}$, Lift-Splat (Philion & Fidler, 2020) w. EfficientNet (Tan & Le, 2019) as img. encoder. |
| $L_S^{(32)}$ | LiDAR of x-channel, SECOND (Yan et al., 2018) |
| $C_R^{(336)}$ | Camera, resize img. to height $x_{px}$, Lift-Splat (Philion & Fidler, 2020) w. ResNet50 (He et al., 2016) as img. encoder. |

**Implementation details.** We first adopt $L_P^{(64)}$ agents as the collaboration base to establish the unified feature space, and then select $C_E^{(384)}$, $L_S^{(32)}$, and $C_R^{(336)}$ as the new agent types.
PointPillars (Lang et al., 2019), SECOND (Yan et al., 2018) and Lift-Splat-Shoot (Philion & Fidler, 2020) all encode input data with grid size $[0.4m, 0.4m]$, further downsample the feature map by $2\times$, and shrink the feature dimension to 64 with 3 ConvNeXt (Liu et al., 2022b) blocks for message sharing. The multi-scale feature dimension of Pyramid Fusion is $[64, 128, 256]$. The ResNeXt layers have [3,5,8] blocks each. Foreground estimators are $1 \times 1$ convolutions with channels $[64, 128, 256]$. The hyper-parameter $\alpha_\ell = \{0.4, 0.2, 0.1\}_{\ell=1,2,3}$. We incorporated depth supervision for all camera-based detectors to help convergence. We train the collaboration base and new agent types both for 25 epochs end-to-end with Adam, reducing the learning rate from 0.002 by 0.1 at epoch 15. Training costs 5 hours for the collaboration base on 2 RTX 3090 GPUs and 3 hours for each new agent's training, while HM-ViT (Xiang et al., 2023) takes more than 1 day to converge with 4 agent types trained together. We adopt average precision (AP) at different intersection-over-union (IoU) thresholds to measure the perception performance. The training range is $x \in [-102.4m, +102.4m], y \in [-51.2m, +51.2m]$, but we expand the range to $x \in [-204.8m, +204.8m], y \in [-102.4m, +102.4m]$ in evaluation for a holistic view. Late Fusion aggregates all detected boxes from single agents. Considering that most scenarios in the test set involve fewer than 4 agents, we initialized the scenario with one $L_P^{(64)}$ agent and progressively introduced $C_E^{(384)}, L_S^{(32)}, C_R^{(336)}$ agents into the scene for evaluation.

5.3 QUANTITATIVE RESULTS

**Performance and training cost.** Table 2 compares the detection performance and training cost. We see that HEAL surpasses all existing collaborative perception methods in perception performance while maintaining the lowest training cost. This is attributed to our powerful Pyramid Fusion structure and cost-efficient backward alignment design. Pyramid Fusion selects the most important features for fusion in a multiscale manner and sustains a potent unified space. Backward alignment further reduces the training costs of integrating new agents remarkably. This step does not involve collective training, thus ensuring consistent and extremely low training costs. It presents significant advantages in model size, FLOPs, training time, and memory usage. In contrast, other baseline methods necessitate retraining all models at each integration, with a computational complexity of $O(m^2)$ or even $O(m^3)$ (Xiang et al., 2023), where $m$ denotes the number of agent types. This limits their scalability, especially when there are numerous and growing types. Here, when adding the $C_R^{(336)}$ agent, HEAL outperforms the previous SOTA HM-ViT by 7.6% in AP70 with only 8.3% of its parameters.

Table 2: We evaluate the performance and training cost on the OPV2V-H dataset when incrementally adding 3 new heterogeneous agents to the scene for collaboration (starting with one $L_P^{(64)}$ agent). Every method requires retraining the whole model except for 'no fusion', 'late fusion', and HEAL. Metrics related to the training cost are all measured with batch size 1 on 1 RTX A40. Optimal values among intermediate fusion methods are bolded. Percentage comparison is made with HM-ViT.
| Method | AP50 ↑ +$C_E^{(384)}$ | +$L_S^{(32)}$ | +$C_R^{(336)}$ | AP70 ↑ +$C_E^{(384)}$ | +$L_S^{(32)}$ | +$C_R^{(336)}$ | Train Throughput (#/Sec) ↑ +$C_E^{(384)}$ | +$L_S^{(32)}$ | +$C_R^{(336)}$ |
|--------|------|------|------|------|------|------|------|------|------|
| No Fusion | 0.748 | 0.748 | 0.748 | 0.606 | 0.606 | 0.606 | / | / | / |
| Late Fusion | 0.775 | 0.833 | 0.834 | 0.599 | 0.685 | 0.685 | 3.29 | 5.75 | 4.75 |
| F-Cooper (Chen et al., 2019) | 0.778 | 0.742 | 0.761 | 0.628 | 0.517 | 0.494 | 2.41 | 2.93 | 2.54 |
| DiscoNet (Li et al., 2021) | 0.798 | 0.833 | 0.830 | 0.653 | 0.682 | 0.695 | 2.37 | 2.87 | 2.47 |
| AttFusion (Xu et al., 2022e) | 0.796 | 0.821 | 0.813 | 0.635 | 0.685 | 0.659 | 2.36 | 2.90 | 2.39 |
| V2XViT (Xu et al., 2022b) | 0.822 | 0.888 | 0.882 | 0.655 | 0.765 | 0.753 | 1.45 | 1.58 | 1.45 |
| CoBEVT (Xu et al., 2022c) | 0.829 | 0.885 | 0.885 | 0.671 | 0.742 | 0.755 | 1.49 | 2.07 | 1.99 |
| HM-ViT (Xiang et al., 2023) | 0.813 | 0.871 | 0.876 | 0.666 | 0.743 | 0.755 | 1.22 | 1.33 | 1.18 |
| HEAL | 0.826 | 0.892 | 0.894 | 0.726 | 0.812 | 0.813 | 3.27 | 5.44 | 4.59 (↑ 2.88×) |

| Method | Model Params (M) ↓ +$C_E^{(384)}$ | +$L_S^{(32)}$ | +$C_R^{(336)}$ | FLOPs (T) ↓ +$C_E^{(384)}$ | +$L_S^{(32)}$ | +$C_R^{(336)}$ | Peak Memory (GB) ↓ +$C_E^{(384)}$ | +$L_S^{(32)}$ | +$C_R^{(336)}$ |
|--------|------|------|------|------|------|------|------|------|------|
| No Fusion | / | / | / | / | / | / | / | / | / |
| Late Fusion | 20.25 | 6.34 | 7.15 | 0.149 | 0.074 | 0.091 | 3.70 | 2.11 | 2.72 |
| F-Cooper (Chen et al., 2019) | 30.70 | 39.59 | 49.22 | 0.190 | 0.263 | 0.322 | 20.27 | 14.56 | 18.36 |
| DiscoNet (Li et al., 2021) | 30.80 | 39.67 | 49.29 | 0.194 | 0.270 | 0.332 | 19.92 | 17.10 | 15.95 |
| AttFusion (Xu et al., 2022e) | 30.78 | 39.60 | 49.22 | 0.190 | 0.263 | 0.323 | 20.18 | 13.88 | 19.77 |
| V2XViT (Xu et al., 2022b) | 36.17 | 44.99 | 54.62 | 0.315 | 0.352 | 0.393 | 25.92 | 21.25 | 21.00 |
| CoBEVT (Xu et al., 2022c) | 35.84 | 42.05 | 48.80 | 0.313 | 0.340 | 0.321 | 20.94 | 20.16 | 26.29 |
| HM-ViT (Xiang et al., 2023) | 47.71 | 65.08 | 83.34 | 0.236 | 0.331 | 0.442 | 35.53 | 26.55 | 26.88 |
| HEAL | 20.25 | 6.34 | 7.15 (↓ 91.5%) | 0.149 | 0.074 | 0.091 (↓ 79.5%) | 3.71 | 2.15 | 2.76 (↓ 89.8%) |

Table 3: Heterogeneous agent types are added in the order presented from left to right in each type combination. $\sum M.\#P.$ represents the accumulation of model parameters for collaboration base training and each integration. Percentage comparison is made with HM-ViT. HEAL holds the best performance and the lowest training cost under various agent type combinations.

| Dataset | OPV2V-H (4 agents) | DAIR-V2X (2 agents) |
| Agent Types | $L_P^{(32)}$ +C$_E^{(384)}$ +L$_P^{(32)}$ +C$_R^{(336)}$ +L$_P^{(40)}$ +C$_E^{(384)}$ +L$_P^{(40)}$ +C$_R^{(336)}$ +L$_P^{(40)}$ +C$_E^{(384)}$ +L$_P^{(40)}$ +C$_R^{(336)}$ |
| Metric | AP50 ↑ | AP70 ↑ | ∑ M.\#P. ↓ | AP50 ↑ | AP70 ↑ | ∑ M.\#P. ↓ | AP50 ↑ | AP70 ↑ | ∑ M.\#P. ↓ |
| No Fusion | 0.504 | 5.47 | 0.281 | 5.47 | 0.392 | 5.47 | 0.392 | 5.47 | 0.392 |
| Late Fusion | 0.639 | 39.2 | 0.457 | 39.2 | 0.344 | 25.7 | 0.376 | 11.8 | 0.341 |
| F-Cooper | 0.430 | 127.6 | 0.299 | 127.6 | 0.626 | 38.8 | 0.545 | 24.9 | 0.611 |
| DiscoNet | 0.612 | 127.9 | 0.420 | 127.9 | 0.576 | 38.9 | 0.634 | 25.1 | 0.621 |
| AttFusion | 0.571 | 127.7 | 0.394 | 127.7 | 0.649 | 38.8 | 0.661 | 24.7 | 0.623 |
| V2XViT | 0.688 | 137.5 | 0.491 | 137.5 | 0.638 | 43.8 | 0.692 | 29.8 | 0.660 |
| CoBEVT | 0.682 | 137.5 | 0.491 | 137.5 | 0.638 | 43.8 | 0.692 | 29.8 | 0.660 |
| HM-ViT | 0.696 | 216.3 | 0.506 | 216.3 | 0.638 | 67.8 | 0.538 | 53.9 | 0.677 |
| HEAL | 0.738 | 39.2 (↓ 81.9%) | 0.578 | 39.2 (↓ 81.9%) | 0.658 | 25.7 (↓ 62.1%) | 0.770 | 11.8 (↓ 78.1%) | 0.681 | 12.6 (↓ 77.1%) |

**Agent type combination.** We present the final collaboration performance and the accumulated training parameters under different agent type combinations on the OPV2V-H and DAIR-V2X datasets in Table 3. Specifically, we configure the LiDAR agents with lower channel numbers to show the performance in degraded LiDAR scenarios. Experiments show that regardless of the agent combination, HEAL always maintains the best performance and the lowest training cost.

**Imperfect localization.** The preceding experiments assume that each agent has an accurate pose. However, in real-world scenarios, due to the presence of localization noise, features between agents might not align precisely. Consequently, we introduce a robustness experiment against pose errors, adding Gaussian noise to the accurate pose, as depicted in Figure 4. Experimental results show that HEAL retains state-of-the-art performance even under various pose error conditions.

**Feature compression.** We use an autoencoder to reduce feature channels for bandwidth saving. Using the well-trained HEAL and baseline methods, we finetune the new autoencoder. The compression ratio indicates the reduction in channels. Results in Figure 4 demonstrate that even with a 32-fold compression, we still retain exceptionally high performance, surpassing the baseline methods.

Figure 4: Robustness experiments against pose error and compression ratio. Pose noise is set to $\mathcal{N}(0, \sigma_p^2)$ on the x, y location and $\mathcal{N}(0, \sigma_r^2)$ on the yaw angle.

**Component ablation.** We carried out ablation experiments on HEAL's components and design, as shown in Table 4. Results show that all of them are highly beneficial to the collaboration performance. Through pyramid feature encoding at various scales, HEAL is capable of learning features at different resolutions and subsequently aligning new agent types via backward alignment. Further, the foreground supervision helps HEAL distinguish the foreground from the background and select the most important features. These components help to construct a robust unified feature space and realize the alignment comprehensively.

5.4 Qualitative visualizations

HEAL aligns features within the same domain via backward alignment (Figure 5) and achieves the best detection results (Figure 6). $L_1, C_1, L_2$ refer to $L_P^{(64)}, C_E^{(384)}, L_S^{(32)}$, respectively.
Figure 5: Visualization of HEAL's backward alignment from $L_2$ to $L_1$'s unified space. (a) $L_2$ feature before backward alignment; (b) $L_2$ feature after backward alignment; (c) the unified feature space from $L_1$.

Figure 6: Visualization of open heterogeneous collaborative perception results on OPV2V-H: (a) HEAL ($L_1$); (b) HEAL ($L_1+C_1$); (c) HEAL ($L_1+C_1+L_2$); (d) HM-ViT ($L_1+C_1+L_2$). In (a)(b)(c), we show the process of gradually adding new agents to HEAL. The features of the added agent are displayed in the upper left corner. Note that the point cloud of the camera agent $C_1$ is only used to indicate the agent's position. Predicted and GT boxes are drawn in different colors.

6 Conclusion

This paper proposes HEAL, a novel framework for open heterogeneous collaborative perception. HEAL boasts exceptional collaborative performance, minimal training costs, and model-detail protection. Experiments on our proposed OPV2V-H dataset and the DAIR-V2X dataset validate the effectiveness of HEAL, offering a practical solution for extensible collaborative perception deployment in real-world scenarios. A limitation of HEAL is that it requires BEV features, so extra effort is needed for some models (e.g., keypoint-based LiDAR detection) to be compatible with the framework.

7 Acknowledgments

This research is supported by the National Key R&D Program of China under Grant 2021ZD0112801, NSFC under Grant 62171276 and the Science and Technology Commission of Shanghai Municipal under Grant 21511100900, 22511106101 and 22DZ2229005.

REFERENCES

Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, and Chiew-Lan Tai. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1090–1099, 2022.

Shubhankar Borse, Marvin Klingner, Varun Ravi Kumar, Hong Cai, Abdulaziz Almuzairee, Senthil Yogamani, and Fatih Porikli. X-align: Cross-modal cross-view alignment for bird's-eye-view segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3287–3297, 2023.

Stefano Carpin. Fast and accurate map merging for multi-robot systems. Autonomous Robots, 25:305–316, 2008.

Dian Chen and Philipp Krähenbühl. Learning from all vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17222–17231, 2022.

Qi Chen, Xu Ma, Sihai Tang, Jingda Guo, Qing Yang, and Song Fu. F-cooper: Feature based cooperative perception for autonomous vehicle edge computing system using 3d point clouds. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, pp. 88–100, 2019.

Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinhong Jiang, and Feng Zhao. Autoalignv2: Deformable feature aggregation for dynamic multi-modal 3d object detection. arXiv preprint arXiv:2207.10316, 2022a.

Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinhong Jiang, and Feng Zhao. Bevdistill: Cross-modal bev distillation for multi-view 3d object detection. arXiv preprint arXiv:2211.09386, 2022b.

Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, and Yuke Zhu. Coopernaut: End-to-end driving with cooperative perception for networked vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17252–17262, 2022.

Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on Robot Learning, pp. 1–16. PMLR, 2017.

Ross Girshick. Fast r-cnn.
In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Yue Hu, Shaoheng Fang, Zixing Lei, Yiqi Zhong, and Siheng Chen. Where2comm: Communication-efficient collaborative perception via spatial confidence maps. Advances in neural information processing systems, 35:4874–4886, 2022. Yue Hu, Yifan Lu, Runsheng Xu, Weidi Xie, Siheng Chen, and Yanfeng Wang. Collaboration helps camera overtake lidar in 3d detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9243–9252, 2023. Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12697–12705, 2019. Zixing Lei, Shunli Ren, Yue Hu, Wenjun Zhang, and Siheng Chen. Latency-aware collaborative perception. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXII, pp. 316–332. Springer, 2022. Xin Li, Botian Shi, Yuenan Hou, Xingjiao Wu, Tianlong Ma, Yikang Li, and Liang He. Homogeneous multi-modal feature fusion and interaction for 3d object detection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVIII, pp. 691–707. Springer, 2022a.
RDSj6S8WJe
Hierarchical Structures in Real-world Scenarios: With the proposed dynamics aggregation framework depending heavily on the hierarchical structure of problems, how feasible is it to identify or establish such hierarchies in complex, real-world scenarios, where the state dynamics might be more intricate and less structured?
DEMYSTIFYING LINEAR MDPs AND NOVEL DYNAMICS AGGREGATION FRAMEWORK

Joongkyu Lee Graduate School of Data Science Seoul National University jklee0717@snu.ac.kr

Min-hwan Oh Graduate School of Data Science Seoul National University minoh@snu.ac.kr

ABSTRACT

In this work, we prove that, in linear MDPs, the feature dimension $d$ is lower bounded by $S/U$ in order to aptly represent transition probabilities, where $S$ is the size of the state space and $U$ is the maximum size of directly reachable states. Hence, $d$ can still scale with $S$ depending on the direct reachability of the environment. To address this limitation of linear MDPs, we propose a novel structural aggregation framework based on dynamics, named dynamics aggregation. For this newly proposed framework, we design a provably efficient hierarchical reinforcement learning algorithm with linear function approximation that leverages aggregated sub-structures. Our proposed algorithm exhibits statistical efficiency, achieving a regret of $\tilde{O}(d_{\psi}^{3/2}H^{3/2}\sqrt{NT})$, where $d_{\psi}$ represents the feature dimension of aggregated subMDPs and $N$ signifies the number of aggregated subMDPs. We establish that the condition $d_{\psi}^3 N \ll d^3$ is readily met in most real-world environments with hierarchical structures, enabling a substantial improvement in the regret bound compared to LSVI-UCB, which enjoys a regret of $\tilde{O}(d^{3/2}H^{3/2}\sqrt{T})$ (Jin et al., 2020). To the best of our knowledge, this work presents the first HRL algorithm with linear function approximation that offers provable guarantees.

1 INTRODUCTION

Recent theoretical research in reinforcement learning (RL) has seen a surge in studies focusing on function approximation. Such a research direction seeks to address the generalization problem faced in tabular Markov Decision Processes (MDPs) (Jiang et al., 2017; Yang & Wang, 2019, 2020; Jin et al., 2020; Zanette et al., 2020; Modi et al., 2020; Du et al., 2020; Cai et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Weisz et al., 2021; He et al., 2021; Zhou et al., 2021a,b; Ishfaq et al., 2021; Hu et al., 2022). The linear MDP (Bradtke & Barto, 1996; Jin et al., 2020) serves as a foundational model for function approximation, modeling the transition probability as $P(s' | s, a) = \phi(s, a)^\top \mu(s')$ with known features $\phi \in \mathbb{R}^d$ and unknown measures $\{\mu(s')\}_{s' \in S}$. Numerous prior studies have demonstrated regret bounds that do not depend on the size of the state space $S$ (or the action space size $A$), but instead on the feature dimension $d$ (Jin et al., 2020; Zanette et al., 2020; Du et al., 2020; Cai et al., 2020; Weisz et al., 2021; He et al., 2021; Zhou et al., 2021a,b; Ishfaq et al., 2021). Consequently, many of the algorithms proposed for linear MDPs are proven to achieve regret bounds independent of the size of the state space, depending only on the intrinsic complexity measure of the feature space, $d$, once the parameterization is applied. However, whether replacing the state-space dependence with a dependence on the feature dimension $d$ truly induces learning that is entirely independent of the state space for all MDPs still requires investigation. Hence, we pose a critical research question:

Q1: Does the linear MDP invariably yield regrets that are independent of the state space size $S$?
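As a warm-up for Q1, note that any tabular MDP can be written as a linear MDP using one-hot features, but only with $d = SA$. The following hypothetical NumPy sketch (all names are ours, not from the paper) constructs this canonical representation and verifies $P(s'|s,a) = \phi(s,a)^\top \mu(s')$, illustrating that linearity alone does not decouple $d$ from $S$:

```python
import numpy as np

S, A = 5, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a] is a distribution over s'

d = S * A                                       # canonical one-hot feature dim
def phi(s, a):                                  # phi(s, a) in R^d, one-hot on (s, a)
    v = np.zeros(d); v[s * A + a] = 1.0; return v

mu = P.reshape(d, S)                            # column s' stacks P(s'|s, a)
# Verify the linear-MDP identity P(s'|s, a) = phi(s, a)^T mu(s') for all triples.
for s in range(S):
    for a in range(A):
        assert np.allclose(phi(s, a) @ mu, P[s, a])
print("tabular MDP realized as a linear MDP with d = S*A =", d)
```

Whether a much smaller $d$ can suffice depends on the structure of the dynamics, which is exactly what the lower bound developed in this paper characterizes.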
In this paper, we rigorously investigate the conditions under which linear MDPs induce learning that is independent of the state space and the conditions under which they do not. Our findings, as detailed in Section 4, prove that the feature dimension $d$ is lower bounded by $S/U$ in order to aptly represent the probability space, where $U$ is the maximum size of directly reachable states (see Definition 2). Thus, if the cardinality of directly reachable states does not grow with the entirety of the state space, that is, $U = o(S)$ (a condition that holds true in most real-world situations and becomes more pronounced as $S$ expands), the feature dimension $d$ has to grow proportionally with $S$ to properly encode the probability distribution over next states. Hence, unless the size of reachable states scales with the entire state space, regret bounds under linear MDPs still implicitly have an $S$ dependence through the dependence of $d$ on $S$. To the best of our knowledge, our study presents the first comprehensive exposition of the fundamental limitations of the linear MDP, particularly its intrinsic dependence on the state space.

Our results on the limitations of linear MDPs suggest that simply because function approximation is employed, it does not necessarily enable efficient learning in which the feature dimension $d$ is independent of the state space. However, should additional structures, such as hierarchies, be present within linear MDPs (facilitating the decomposition of the MDP into smaller sub-problems), this paves the way for the development of a refined framework, possibly enabling efficient learning. Ideally, a well-constructed learning algorithm should then leverage such structures for more efficient learning. Yet, to the best of our knowledge, there is no existing model or algorithm for hierarchical reinforcement learning (HRL) with function approximation that provides regret guarantees. Therefore, the following research question arises:

**Q2:** Can we formulate a new hierarchical framework for linear MDPs that enables provably efficient learning independent of state space?

To answer this question, we first introduce the framework of **dynamics aggregation**, which clusters similar sub-structures of an MDP based on their dynamics. Notably, this concept not only includes the extensively studied notion of state aggregation (or state abstraction) (Singh et al., 1994; Van Roy, 2006; Li et al., 2006; Abel et al., 2020; Dong et al., 2019) but also integrates the equivalence mapping proposed in Wen et al. (2020). A key benefit of dynamics aggregation lies in its reusability for similar problems. This new notion of aggregation not only enables efficient learning from a technical perspective but is also very natural from a practical perspective. Then, we propose **linear transition models for aggregated subMDPs**, a generalized approach that extends both non-hierarchical linear MDPs (Jiang et al., 2017; Jin et al., 2020) and tabular MDPs with equivalent subMDPs (Wen et al., 2020). Under this newly proposed model, we design a model-based HRL algorithm that leverages the hierarchical structure of MDPs and employs optimistic planning. This algorithm is provably efficient and, to our knowledge, is the first HRL algorithm that offers provable guarantees with function approximation. In numerical experiments, our proposed method consistently outperforms existing algorithms by significant margins.
Our main contributions can be summarized as follows:

- We establish that the feature dimension $d$ is lower bounded by $S/U$, where $U$ represents the maximum size of directly reachable states (Theorem 1). We also provide examples of various environments where $U$ does not scale with $S$. Consequently, in such scenarios, the regret bound can indeed depend on the size of the state space $S$ despite function approximation. To the best of our knowledge, this is the first work to provide a rigorous proof showing how the feature dimension $d$ relates to the state space size $S$ in linear MDPs. We strongly believe that this finding provides significant implications and will be of independent interest to the broader RL community.
- To address this fundamental issue of the vanilla linear MDP framework, we introduce a new comprehensive framework of **dynamics aggregation**, encompassing both state aggregation and equivalence mapping (Wen et al., 2020). One of the key benefits of this framework lies in its inherent ability to be reused for similar sub-problems.
- Under this newly proposed framework, we present a statistically efficient algorithm that exploits the hierarchical structure of the problem, thereby reducing dependency on the size of the entire state space. We then establish a regret bound of $\tilde{O}(d_{\psi}^{3/2}H^{3/2}\sqrt{NT} + TH\epsilon_p)$ (Theorem 2), where $d_{\psi}$ represents the feature dimension of aggregated subMDPs, $N$ denotes the number of aggregated subMDPs, and $\epsilon_p$ is the aggregation error. If an MDP adheres to the conditions of Corollary 1 (a common circumstance) and exhibits a hierarchical structure, the condition $d_{\psi}^3 N \ll d^3$ can be readily fulfilled, dramatically reducing the regret upper bound compared to LSVI-UCB (Jin et al., 2020), which enjoys a regret of $\tilde{O}(d^{3/2}H^{3/2}\sqrt{T})$.
- We also conduct numerical experiments in environments with suitable hierarchical structures and show that our proposed framework enables our algorithm to leverage the structure and consistently outperform existing RL algorithms with provable guarantees.

---
1 It is important to note that our results do not contradict the previously known $S$-independent regret bounds of the algorithms for linear MDPs (Jin et al., 2020). Rather, we focus on the representation ability of linear MDPs and the potential dependence of its feature dimension $d$ on $S$.

2 RELATED WORK

Reinforcement Learning with Linear Function Approximation. In recent years, there has been a surge in research on function approximation with provable guarantees (Jiang et al., 2017; Yang & Wang, 2019, 2020; Jin et al., 2020; Zanette et al., 2020; Modi et al., 2020; Du et al., 2020; Cai et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Weisz et al., 2021; He et al., 2021; Zhou et al., 2021a,b; Ishfaq et al., 2021). All of these works assume certain linear structures of the underlying MDP and appear to handle large state spaces effectively, as their regret scales only polynomially in $d$ and not in $S$. However, it remains unclear how $d$ is related to $S$ in linear MDPs. In Theorem 1, we prove that $d$ is lower bounded by $S/U$, where $U$ represents the maximum size of directly reachable states. In Corollaries 1, 2, and 3, we establish that $d$ can be proportional to $S$ in the majority of real-world environments.

State Aggregation.
The study of state aggregation (or state abstraction) in RL has a long and rich history, dating back to early works on approximating dynamic programs and the identification of states that exhibit similar behaviors (Fox, 1973; White, 1978; Bean et al., 1987; Dean & Givan, 1997; Bertsekas et al., 1988). In a similar vein, Li et al. (2006) introduced a unified framework for state aggregation in MDPs, examining the conditions under which such aggregations can preserve optimal behavior and affect the existing convergence guarantees of well-known RL algorithms. However, unlike our proposed dynamics aggregation, which embraces a hierarchical structure, these past studies did not explicitly leverage this concept.

Hierarchical Reinforcement Learning (HRL). Several studies have explored the decomposition of an MDP into sub-problems (Dean & Lin, 1995; Singh & Cohn, 1997; Meuleau et al., 1998), which are then solved independently under weakly coupled resource constraints. The concept of HRL, which allows an agent to act and plan at various levels of temporal abstraction, was established by Sutton et al. (1999); Barto & Mahadevan (2003). However, there has been limited research quantifying the theoretical benefits of HRL. The work most closely related to ours is by Wen et al. (2020), who introduced a model-based tabular HRL algorithm designed to leverage repeating sub-structures. Nevertheless, their research focused solely on tabular MDPs when utilizing hierarchical structures, leaving the development of an efficient HRL algorithm for linear MDPs as an open question.

3 PROBLEM SETTING

3.1 NOTATIONS

We denote by $[n]$ the set $\{1, 2, \ldots, n\}$ for a positive integer $n$. For a real-valued matrix $A$, we use $\|A\|_2 := \sup_{x: \|x\|_2 = 1} \|Ax\|_2$ to denote the maximum singular value of $A$. For a positive definite matrix $\Lambda$, we denote $\|x\|_\Lambda^2 := x^\top \Lambda x$. We denote $|\cdot|$ as the cardinality of a set.

3.2 INHOMOGENEOUS, EPISODIC MDPs

We consider inhomogeneous episodic Markov decision processes (MDPs) denoted by $\mathcal{M}(S, A, H, \{\mathbb{P}_h\}_{h=1}^H, \{r_h\}_{h=1}^H)$, where $S$ is a measurable space, potentially with an infinite number of elements, with cardinality $S$, $A$ is a finite set with cardinality $A$, $H \in \mathbb{Z}_+$ is the length of each episode, $\mathbb{P}_h$ is the collection of transition probability distributions, and $r_h$ is a reward function. We assume that every state is accessible from at least one other state, i.e., $\forall s', \sum_{(s,a) \in S \times A} \mathbb{P}_h(s' \mid s, a) > 0$. If a specific state is not accessible, we can exclude it without loss of generality. In each episode, an initial state $s_1$ is picked arbitrarily by an adversary. Then, for every $h \in [H]$ in an episode, the agent takes action $a_h \in A$ in state $s_h \in S$ and receives reward $r_h(s_h, a_h) \in [0, 1]$. The next state $s_{h+1}$ is drawn from the transition probability distribution $\mathbb{P}_h(\cdot \mid s_h, a_h)$, and the agent repeats this interaction until the end of the episode. The agent aims to find a policy $\pi : S \times [H] \rightarrow A$ that maximizes its expected cumulative reward starting from every state $s$.
We define the value function of policy $\pi$, $V^\pi_h : S \rightarrow \mathbb{R}$, as the expected sum of rewards under the policy $\pi$ until the end of the episode when starting from $s_h = s$, i.e.,

$$V^\pi_h(s) := \mathbb{E}_\pi \left[ \sum_{h'=h}^{H} r_{h'}(s_{h'}, \pi(s_{h'}, h')) \mid s_h = s \right].$$

We also denote the action-value function of policy $\pi$, $Q^\pi_h : S \times A \rightarrow \mathbb{R}$, as the expected sum of rewards when following $\pi$ from step $h$ until the end of the episode after taking action $a$ in state $s$; that is,

$$Q^\pi_h(s, a) := r_h(s, a) + \mathbb{E}_\pi \left[ \sum_{h'=h+1}^{H} r_{h'}(s_{h'}, \pi(s_{h'}, h')) \mid s_h = s, a_h = a \right].$$

A policy $\pi^*$ is said to be an optimal policy if it achieves the maximal possible value at every state-step pair $(s, h) \in S \times [H]$. Then, we define the optimal value and action-value functions as $V^*_h(s) := \sup_\pi V^\pi_h(s)$ and $Q^*_h(s, a) := Q^{\pi^*}_h(s, a) = \sup_\pi Q^\pi_h(s, a)$. For simple notation, denoting $\mathbb{P}_h V_{h+1}(s, a) := \mathbb{E}_{s' \sim \mathbb{P}_h(\cdot \mid s, a)}[V_{h+1}(s')]$, both $Q^\pi$ and $Q^*$ can be conveniently written through the Bellman equations as $Q^\pi_h(s, a) = (r_h + \mathbb{P}_h V^\pi_{h+1})(s, a)$ and $Q^*_h(s, a) = (r_h + \mathbb{P}_h V^*_{h+1})(s, a)$, where, for all $s \in S$, $V^\pi_{H+1}(s) = V^*_{H+1}(s) = 0$ and $V^*_h(s) = \max_{a \in A} Q^*_h(s, a)$.

4 LIMITATIONS OF LINEAR MDPs

There exists a large amount of literature on function approximation in which linear MDPs serve as a foundational model (Yang & Wang, 2019; Jin et al., 2020; Zanette et al., 2020; Hu et al., 2022). Despite the growing body of research, the limitations associated with linear MDPs have not been adequately addressed. In this section, we provide a comprehensive analysis of the inherent limitations of linear MDPs. First, linear transition models of linear MDPs are defined as follows:

**Definition 1 (Linear transition model).** Let there exist a known feature map $\phi : S \times A \rightarrow \mathbb{R}^d$ and unknown $\mu_h : S \rightarrow \mathbb{R}^d$. Then, the transition operator $\mathbb{P}_h : S \times A \rightarrow \Delta(S)$ is defined as follows: for all $s, s' \in S, a \in A$, $\mathbb{P}_h(s' \mid s, a) = \phi(s, a)^\top \mu_h(s')$.

The linear structure of the transition probabilities offers the advantage of reducing the number of parameters that need to be estimated, subsequently decreasing the statistical and computational complexity of learning and planning algorithms. However, it is crucial to acknowledge that the set of MDPs that can be accurately represented using linear transition models with small $d$ relative to the size of the state space is notably limited. For a linear MDP, we generally expect that the transition kernel $\mathbb{P}_h(\cdot \mid \cdot, \cdot) \in \mathbb{R}^{S \times A \times S}$ has a low-dimensional structure, i.e., $d \ll S$. However, the following statements show that the feature dimension is closely related to the size of the state space, highlighting the inherent limitations associated with linear transition models.

**Definition 2 (Directly reachable states).** For each $(s, a) \in S \times A$, the "directly reachable states" of $(s, a)$ are defined to be the set of all states which can be reached by taking action $a$ in state $s$ within a single transition, $S_{s,a} := \{s' \in S : \mathbb{P}_h(s' \mid s, a) > 0\}$. Also, we denote $U := \max_{(s,a) \in S \times A} |S_{s,a}|$ to be the maximum size of directly reachable states.
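To make Definition 2 concrete, the following minimal sketch (a toy, hypothetical chain MDP with noisy moves; all names are illustrative) computes the supports $S_{s,a}$ and the resulting $U$:

```python
import numpy as np

# Toy chain MDP: each action moves left/right with probability 0.9 and
# stays put with probability 0.1; P[s, a, s'] stores P_h(s' | s, a).
S, A = 6, 2
P = np.zeros((S, A, S))
for s in range(S):
    P[s, 0, max(s - 1, 0)] += 0.9      # action 0: noisy move left
    P[s, 0, s] += 0.1
    P[s, 1, min(s + 1, S - 1)] += 0.9  # action 1: noisy move right
    P[s, 1, s] += 0.1

support_sizes = (P > 0).sum(axis=-1)   # |S_{s,a}| for every (s, a) pair
U = int(support_sizes.max())           # here U = 2
print(f"U = {U}; Theorem 1 below then gives d >= {int(np.ceil(S / U))}")
```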
**Theorem 1.** For an MDP $M$ with a finite state space, the feature dimension $d$ is lower bounded by
$$d \geq \lceil S/U \rceil,$$
where $U$ is the maximum size of directly reachable states (Definition 2).

**Corollary 1.** If the maximum size of directly reachable states $U$ does not scale with the entire state space by a constant factor, i.e., $U = \Theta(S^p) < \infty$, where $0 \leq p < 1$, then $d \geq \Omega(S^{1-p})$.

Theorem 1 and Corollary 1 imply that unless the size of directly reachable states (one-step reachable states) scales with the entire state space $S$ (a scenario rarely true in most real-world cases), the feature dimension $d$ would eventually scale polynomially with the size of the entire state space. Consequently, the learning efficiency (e.g., regret) would still depend on $S$ even when function approximation is employed. Furthermore, Theorem 1 can be generalized to an infinite (or even continuous) state space.

**Corollary 2 (Infinite $S$ & finite $U$).** For an MDP $M$ with a state space that is either countably infinite or normed, compact, and uncountably infinite, and with a finite $U$, $d$ is infinite.

**Corollary 3 (Euclidean continuous state space).** Consider an MDP $M$ with state space $S$ in the $p$-dimensional Euclidean space. Let $\text{Vol}(\cdot)$ represent the volume of a set. Denote the set of directly reachable states with the maximum volume as $U = \arg\max_{s,a} \text{Vol}(S_{s,a})$ and assume that $\text{Vol}(U) > 0$. Then, we have $d > 2^p \cdot \text{Vol}(S)/\text{Vol}(U) - 1$.

One can observe that most real-world environments, as well as many simulation environments, have a small $U$ compared to the size of the state space. This implies that the statement in Corollary 1 is widely applicable and persuasive. Identifying environments that do not meet the condition of Corollary 1 is rather challenging. The following examples, which are widely studied in the RL literature, fulfill the condition:

Example 1 (Gridworld). In Figure 1(a), the agent is allowed to move to neighboring states (left, right, up, down, or stay in the same state), resulting in $U = 5$. Thus, by Theorem 1, $d \geq \lceil S/5 \rceil$. In the special case where the transitions are deterministic ($U = 1$), we get $d = S$.

Example 2 (First-person navigation). In Figure 1(b), although the entire state space is extremely large, the agent can only move to neighboring states, resulting in a constant $U$. Thus, $d \geq \Omega(S)$.

Example 3 (Board games). Board games like Go, depicted in Figure 1(c), have an immense state space, approximately $10^{400}$, but the number of directly reachable states is relatively small, fewer than $19^2$. Hence, $d \geq 2.5 \times 10^{397}$.

Example 4 (Control problems). The state spaces in control problems, as depicted in Figure 1(d), are continuous (uncountable). The volume of the sets of directly reachable states is typically much smaller than the volume of the full state space, especially in cases with minimal stochasticity. Therefore, $d \geq \Omega(\text{Vol}(S))$.

To sum up, many existing studies that assume linear MDPs establish regret bounds that depend on the embedding dimension $d$ rather than the size of the state space $S$. However, in most practical environments, $d$ is often proportional to $S$.
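As a minimal numerical check of this dependence (a sketch assuming a deterministic chain, so $U = 1$), one can verify that the flattened transition matrix has rank $S$, and any exact linear factorization $\mathbb{P}_h(s' \mid s, a) = \phi(s, a)^\top \mu_h(s')$ must have $d$ at least that rank:

```python
import numpy as np

# Deterministic chain with S states and actions {left, right}; each row
# of P is the one-hot distribution over next states for one (s, a) pair.
S, A = 10, 2
P = np.zeros((S * A, S))
for s in range(S):
    P[s * A + 0, max(s - 1, 0)] = 1.0      # action 0: move left
    P[s * A + 1, min(s + 1, S - 1)] = 1.0  # action 1: move right

# An exact factorization P = Phi @ Mu with Phi of width d implies
# rank(P) <= d; here every state occurs as a next state, so rank(P) = S.
print(np.linalg.matrix_rank(P))  # prints 10 (= S), hence d >= S
```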
Consequently, it is crucial to take the state space size into account when employing linear MDPs in real-world applications, as the assumption of a linear transition model may (and often does) fail to yield significant improvements in computational or statistical complexity. Motivated by these findings, in the following sections, we study approaches where additional structure may alleviate the limitations of vanilla linear MDPs.

5 HIERARCHICAL STRUCTURE

In the context of MDPs, we introduce a notion of modularity (Wen et al., 2020), which divides a large problem into smaller ones. These sub-problems can be addressed separately and solved independently; their solutions can then be stitched together to solve the original problem. This approach can lead to statistically efficient learning if the sub-problems are reasonably small and recurring.

Definition 3 (Sub-problems, Wen et al., 2020). Assume that the state space $S$ is divided into $L$ disjoint subgroups $\{S^i\}_{i=1}^L$. Then, induced subMDPs $M^i(S^i \cup E^i, A, \{\mathbb{P}_h^i\}_{h=1}^H, \{r_h^i\}_{h=1}^H, E^i)$ are defined as:
- The internal state set $S^i$ is a disjoint subset of $S$, and the action space is still $A$.
- The exit state set is $E^i := \{e \in S \setminus S^i : \exists (s, a) \in S^i \times A \text{ s.t. } \mathbb{P}_h(e \mid s, a) > 0\}$.
- The state space of $M^i$ is $S^i \cup E^i$.
- The supports of $\mathbb{P}^i_h$ and $r^i_h$ are restricted to $S^i \times A$.
- The subMDP $M^i$ terminates once the agent reaches an exit state, i.e., $s \in E^i$.

Given a partition of $M$, we examine the collection of induced subMDPs, represented as $\{M^i\}_{i=1}^L$. If these sub-problems exhibit similar or identical characteristics, it is possible to solve a single instance and apply the derived solution to other equivalent or analogous cases.

5.1 Hierarchical Structure via Dynamics Aggregation

To formalize the hierarchical structure, we employ the concept of state aggregation (or abstraction), which groups states exhibiting "behavioral equivalence" (Singh et al., 1994; Li et al., 2006; Wen & Van Roy, 2017; Dong et al., 2019). Employing state aggregation leads to a reduction in state space size or complexity, thereby accelerating the learning process. Inspired by this concept, we propose a new concept called dynamics aggregation, which groups subMDPs based on the similarity of their dynamics. This approach involves dividing the set of states into $N$ aggregated subMDPs, denoted by $M^{(n)}(S^{(n)} \cup E^{(n)}, A, \{\mathbb{P}^{(n)}_h\}_{h=1}^H, \{r^{(n)}_h\}_{h=1}^H, E^{(n)})$ for $n \in [N]$. By employing dynamics aggregation, we can efficiently learn and generalize across different sub-problems with similar dynamics, leading to more effective and faster learning. Formally, we can define an approximate dynamics aggregation as follows:

**Definition 4 (Approximate dynamics aggregation).** For all $h \in [H]$ and $i, j \in [L]$, let $\psi^{i \rightarrow (n)}_h : S^i \cup E^i \rightarrow S^{(n)} \cup E^{(n)}$ be a mapping that maps the state space of the $i$-th subMDP, $S^i$, to its corresponding aggregated state space $S^{(n)}$, where $n \in [N]$. Let $\psi^{i \rightarrow (n)}_h$ and $\psi^{j \rightarrow (n)}_h$ exist.
Then, for all states $s_1 \in S^i, s_2 \in S^j$ where $\psi^{i \rightarrow (n)}_h(s_1) = \psi^{j \rightarrow (n)}_h(s_2)$ and all $a \in A$, the following conditions hold:
\[
| r^i_h(s_1, a) - r^j_h(s_2, a) | \leq \epsilon_r, \quad \| \mathbb{P}^i_h \Psi^{i \rightarrow (n)}_h(\cdot \mid s_1, a) - \mathbb{P}^j_h \Psi^{j \rightarrow (n)}_h(\cdot \mid s_2, a) \|_1 \leq \epsilon_p,
\]
where $\epsilon_r, \epsilon_p \in \mathbb{R}^+ \cup \{0\}$, and $\Psi^{i \rightarrow (n)}_h \in \mathbb{R}^{S \times \bar{S}}$, with $S = \sum_{i \in [L]} |S^i| = |S|$ and $\bar{S} = \sum_{n \in [N]} |S^{(n)}|$, is a kernel satisfying:
\[
\Psi^{i \rightarrow (n)}_h(s', \bar{s}') = I \left( s' \in S^i \cup E^i, \bar{s}' \in S^{(n)} \cup E^{(n)}, \psi^{i \rightarrow (n)}_h(s') = \bar{s}' \right),
\]
where $I(\cdot)$ is an indicator function that maps to 1 when the condition is true, and 0 otherwise. Note that $\mathbb{P}^i_h \Psi^{i \rightarrow (n)}_h(\cdot \mid s, a)$ collapses the transition distribution over $S^i \cup E^i$ into $S^{(n)} \cup E^{(n)}$, and if the dynamics aggregation mapping is exact, then $\epsilon_r = 0$ and $\epsilon_p = 0$.

Dynamics aggregation partitions the original MDP into subMDPs and projects these subMDPs into aggregated subMDPs, while explicitly considering repeating structures (see Figure 2). This concept encompasses both state aggregation (see Definition 3 in Li et al. (2006)) and the equivalence mapping introduced by Wen et al. (2020). It not only aggregates similar (usually neighboring) states like state aggregation but also aggregates subMDPs that have similar dynamics, akin to equivalence mapping. This methodology enables a significant simplification of the representation compared to the other two frameworks. For a more in-depth comparison with other existing frameworks, please refer to Section D in the Appendix.

Intuitively, if all subMDPs are unique, i.e., there are no duplicate sub-structures, then $N = L$. If some subMDPs have similar dynamics to each other, i.e., some sub-structures are repeated, then $N < L$. Thus, we can expect dramatic improvements over standard algorithms when the MDP $M$ has a hierarchical structure such that $M \cdot N \ll S$, where $M = \max_n |S^{(n)} \cup E^{(n)}|$. If $M$ is small, the sizes of all aggregated subMDPs are small, making each subMDP relatively easy to solve. If $N$ is small, a solution to one aggregated subMDP can be reused in other aggregated subMDPs.

5.2 Linear Transition Model under the Hierarchical Structure

We assume that the transition probabilities of each aggregated subMDP $M^{(n)}$ are linear.

**Assumption 1** (Linear transition models for aggregated subMDPs). Denote $d_\psi$ as the feature dimension of aggregated subMDPs. For each $(\bar{s}, a) \in S^{(n)} \times A$, let a known feature vector $\phi(\bar{s}, a) \in \mathbb{R}^{d_\psi}$ be given as a prior. Then, for all $n \in [N]$, there exist $\bar{S}$ unknown $d_\psi$-dimensional measures $\mu_h^{(n)} = (\mu_h^{(n)}(1), \ldots, \mu_h^{(n)}(\bar{S})) \in \mathbb{R}^{d_\psi \times \bar{S}}$, where $\bar{S} = \sum_{n \in [N]} |S^{(n)}|$, such that
\[
\mathbb{P}_h^{(n)}(\cdot \mid \bar{s}, a) = \phi(\bar{s}, a)^\top \mu_h^{(n)}(\cdot), \quad \forall \bar{s} \in S^{(n)},
\]
where each column of $\mu_h^{(n)}$ corresponds to an unknown vector $\mu_h^{(n)}(\bar{s}') \in \mathbb{R}^{d_\psi}$ for all $\bar{s}' \in S^{(n)} \cup E^{(n)}$ and to $0 \in \mathbb{R}^{d_\psi}$ for all $\bar{s}' \notin S^{(n)} \cup E^{(n)}$.
We make the following boundedness assumptions, similar to the existing literature (Yang & Wang, 2019; Jin et al., 2020): for all $h \in [H]$, (i) $\sup_{s,a} \| \phi(\psi^{i \rightarrow (n)}_h(s), a) \|_2 \leq C_\phi$, and (ii) $\| \mu_h^{(n)} v \|_2 \leq C_\mu \cdot \sqrt{d_\psi}$ for any vector $v \in \mathbb{R}^{\bar{S}}$ such that $\| v \|_\infty \leq 1$. We further assume that the reward function $r$ is known for simplicity.$^3$

Since we consider low-rank linear subMDPs, the dimension of the feature space $d_\psi$ is upper bounded by the cardinality of the image of the (linear) transition mapping, i.e., $d_\psi \leq \max_n |S^{(n)} \cup E^{(n)}| = M$. When $N = 1$ and the aggregated state space is simply the original state space $S$, this model reduces to classical non-hierarchical linear MDPs (Jiang et al., 2017; Jin et al., 2020). Furthermore, when the feature representation is a one-hot encoding, i.e., $d_\psi = SA$, this model corresponds to tabular MDPs with equivalence mappings between subMDPs, as introduced by Wen et al. (2020). Thus, this model generalizes tabular MDPs with the hierarchical structure as well as non-hierarchical linear MDPs. Thanks to dynamics aggregation, we only need to learn $\{\mu_h^{(n)}\}_{n=1}^N$ and can reuse them to solve similar sub-problems, highlighting reusability as a key advantage of this approach.

6 Algorithm: UC-HRL

The purpose of the algorithm is to learn the transitions for each aggregated subMDP, denoted by $M^{(n)}(S^{(n)} \cup E^{(n)}, A, \{\mathbb{P}_h^{(n)}\}_{h=1}^H, \{r_h^{(n)}\}_{h=1}^H, E^{(n)})$. Let $\psi^{i \rightarrow (n)}_h : S^i \cup E^i \rightarrow S^{(n)} \cup E^{(n)}$ be known dynamics aggregation mappings. To simplify the presentation, we denote $\bar{s} = \psi^{i \rightarrow (n)}_h(s)$. The indices $i$ and $n$ can be omitted, as they are determined by the state $s$. Specifically, $i$ represents the index of the current subMDP to which the state $s$ belongs, while $n$ denotes the index of the aggregated subMDP that the current subMDP $i$ is mapped to via an aggregation mapping.

We can learn each transition $\mathbb{P}_h^{(n)}(\cdot \mid \bar{s}, a) = \phi(\bar{s}, a)^\top \mu_h^{(n)}$ by approximating $\mu_h^{(n)}$ using the data collected so far. Denote $\delta(\bar{s}) \in \mathbb{R}^{\bar{S}}$ as a one-hot vector that is zero everywhere except that the entry corresponding to $\bar{s}$ is one. For episode $k \leq K$ and horizon $h \leq H$, let $e_k^{(n)} := \mathbb{P}_h^{(n)}(\cdot \mid \bar{s}_{k,h}, a_{k,h})^\top - \delta(\bar{s}_{k,h+1})$. Then, conditioned on the history $H_{k,h}$, i.e., all information from the beginning of the learning process up to and including $(\bar{s}_{k,h}, a_{k,h})$, we have $\mathbb{E}[e_k^{(n)} \mid H_{k,h}] = 0$ for $n \in [N]$. This implies that $\delta(\bar{s}_{k,h+1})$ is an unbiased estimate of $\mathbb{P}_h^{(n)}(\cdot \mid \bar{s}_{k,h}, a_{k,h})^\top$ conditioned on $(\bar{s}_{k,h}, a_{k,h})$.
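Before formalizing the estimator, the following minimal numpy sketch (synthetic data and hypothetical dimensions) illustrates the resulting ridge regression with one-hot next-state targets; it mirrors the closed-form updates in lines 10-11 of Algorithm 1 below:

```python
import numpy as np

# Hypothetical sizes: feature dimension of aggregated subMDPs, number of
# aggregated states, and the ridge parameter lambda.
d_psi, S_bar, lam = 8, 20, 1.0
rng = np.random.default_rng(0)

# Synthetic samples for one aggregated subMDP: features phi(s_bar, a)
# and observed next states s_bar', encoded as one-hot targets delta.
Phi = rng.normal(size=(100, d_psi))
next_states = rng.integers(0, S_bar, size=100)
Delta = np.eye(S_bar)[next_states]

Lambda = lam * np.eye(d_psi) + Phi.T @ Phi       # regularized Gram matrix
mu_hat = np.linalg.solve(Lambda, Phi.T @ Delta)  # (d_psi x S_bar) estimate

# Estimated transition row and the elliptical bonus ||phi||_{Lambda^{-1}}.
phi = Phi[0]
p_hat = phi @ mu_hat
bonus = np.sqrt(phi @ np.linalg.solve(Lambda, phi))
```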
Define the collection of $(\bar{s}, a, \bar{s}')$ triplets from interactions with any aggregated subMDP $M^{(n)}$ until the end of episode $k-1$ as
\[
D_{k,h}^{(n)} := \{(\bar{s}_{k',h}, a_{k',h}, \bar{s}_{k',h+1}) : s_{k',h} \in S^i, \bar{s}_{k',h} = \psi^{i \rightarrow (n)}_h(s_{k',h}) \}_{k'=1}^{k-1}. \tag{1}
\]
Then, for all $n \in [N]$, it is reasonable to learn $\mu_h^{(n)}$ via the following ridge linear regression:
\[
\hat{\mu}_h^{(n)} = \arg\min_\mu \sum_{(\bar{s}, a, \bar{s}') \in D_{k,h}^{(n)}} \| \phi(\bar{s}, a)^\top \mu - \delta(\bar{s}')^\top \|_2^2 + \lambda \| \mu \|_F^2.
\]

$^3$Note that we do not lose generality, since learning $r$ is much easier than learning $\mathbb{P}$. This assumption regarding $r$ is typical in the literature on model-based RL (Yang & Wang, 2019, 2020; Ayoub et al., 2020; Zhou et al., 2021a).

Algorithm 1 Upper Confidence Hierarchical RL with Transition-Targeted Regression (UC-HRL)
1: Inputs: $M, K, \phi, N, \psi^{i \rightarrow (n)}_h, \beta, \lambda$
2: Initialize: $\Lambda^{(n)}_{1,h} = \lambda I \in \mathbb{R}^{d_\psi \times d_\psi}$, $\hat{\mu}^{(n)}_{1,h} = 0 \in \mathbb{R}^{d_\psi \times \bar{S}}$, $D^{(n)}_{1,h} = \emptyset$.
3: for episode $k = 1, 2, \cdots, K$ do
4:   Set $\{\hat{Q}^{(i)}_{k,h}\}_{h=1}^H$ as described in Eq. 2 using $\hat{\mu}^{(n)}_{k,h}$.
5:   for horizon $h = 1, 2, \cdots, H$ do
6:     $a_{k,h} \leftarrow \arg\max_{a \in A} \hat{Q}^{(i)}_{k,h}(\psi^{i \rightarrow (n)}_h(s_{k,h}), a)$, where $s_{k,h} \in S^i$ and $\exists \psi^{i \rightarrow (n)}_h(s_{k,h})$.
7:     Play action $a_{k,h}$ and observe $s_{k,h+1}$.
8:   end for
9:   Update $D^{(n)}_{k+1,h}$ by Eq. 1.
10:  $\Lambda^{(n)}_{k+1,h} \leftarrow \lambda I + \sum_{(\bar{s},a,\bar{s}') \in D^{(n)}_{k+1,h}} \phi(\bar{s}, a)\phi(\bar{s}, a)^\top$.
11:  $\hat{\mu}^{(n)}_{k+1,h} \leftarrow (\Lambda^{(n)}_{k+1,h})^{-1} \sum_{(\bar{s},a,\bar{s}') \in D^{(n)}_{k+1,h}} \phi(\bar{s}, a)\delta(\bar{s}')^\top$.
12: end for

By convention, if $D^{(n)}_{k,h} = \emptyset$, the summation over $D^{(n)}_{k,h}$ is zero. The full algorithm is summarized in Algorithm 1. In every episode $k$, we form a UCB bonus term $\beta \| \phi(\bar{s}, a) \|_{(\Lambda^{(n)}_{k,h})^{-1}}$. With that, for $s \in S, a \in A$ and $h \in [H]$, we construct the optimistic aggregated Q-value functions.

Definition 5 (Optimistic aggregated Q-values). For any $(s, a) \in S \times A$ and $h \in [H]$, let $s \in S^i$, $\exists \psi^{i \rightarrow (n)}_h$, and $\bar{s} = \psi^{i \rightarrow (n)}_h(s)$. Then, for all $i \in [L]$, the optimistic aggregated Q-values are defined as:
\[
\hat{Q}^{(i)}_{k,h}(\bar{s}, a) := \min \left\{ r_h(\bar{s}, a) + \phi(\bar{s}, a)^\top \hat{\mu}^{(n)}_{k,h} \hat{V}^{(i)}_{k,h+1} + \beta \| \phi(\bar{s}, a) \|_{(\Lambda^{(n)}_{k,h})^{-1}}, H \right\}, \tag{2}
\]
where $\hat{V}^{(i)}_{k,h+1} \in \mathbb{R}^{\bar{S}}$ takes the value $\hat{V}^{(i)}_{k,h+1}(\psi^{i \rightarrow (n)}_h(s'))$ for $s' \in S^i$, $\hat{V}^{(j)}_{k,h+1}(\psi^{j \rightarrow (n)}_h(s'))$ for $s' \in E^i \cap S^j$, and 0 otherwise. Note that $\hat{V}^{(i)}_{k,H+1}(s) := 0$, since the agent obtains no reward after the $H$-th step. We also point out that for any states from different subMDPs $s_1 \in S^i, s_2 \in S^j$ where $\psi^{i \rightarrow (n)}_h(s_1) = \psi^{j \rightarrow (n)}_h(s_2) = \bar{s}$, the Q-value estimates can have different values, i.e., $\hat{Q}^{(i)}_{k,h}(\bar{s}, a) \neq \hat{Q}^{(j)}_{k,h}(\bar{s}, a)$.
Thus, the estimated Q-values in the original state space $S$ are defined as $\hat{Q}_{k,h}(s, a) := \hat{Q}^{(i)}_{k,h}(\psi^{i \rightarrow (n)}_h(s), a)$ for all $(s, a) \in (S^i \cup E^i) \times A$. By choosing a proper value for $\beta$, we can prove that, with high probability, the Q-value estimates are always optimistic estimates of the actual Q-values. Then, at each $(h, k) \in [H] \times [K]$, the agent selects an action that maximizes these Q-value estimates $\hat{Q}^{(i)}_{k,h}$.

7 REGRET ANALYSIS

Theorem 2 (Regret upper bound). Let $\pi = \{\pi_k\}_{k=1}^K$ be a collection of policies over $K$ episodes and $s_{k,1}$ be the initial state at episode $k$. Denote $d_\psi$ as the maximum rank of the transition kernels for aggregated subMDPs, and $N$ as the number of aggregated subMDPs. Then, under Assumption 1, there exists an absolute constant $C > 0$ such that, for any fixed $\delta \in (0, 1)$, if we set $\beta = C \cdot d_\psi H \ln(2d_\psi T/\delta)$, then with probability at least $1 - \delta$, the regret of the UC-HRL policy $\pi$ is bounded by
\[
\sum_{k=1}^K (V^*_1 - V^{\pi_k}_1)(s_{k,1}) = \tilde{O}(d_\psi^{3/2} H^{3/2} \sqrt{NT} + TH \epsilon_p).
\]

Discussion of Theorem 2. Theorem 2 implies that if $\epsilon_p$ is sufficiently small, that is, if the aggregation mapping is precise enough, our algorithm enjoys favorable provable guarantees on regret. For example, if $\epsilon_p = \tilde{O}(1/\sqrt{T})$, the regret is still bounded by $\tilde{O}(d_\psi^{3/2} H^{3/2} \sqrt{NT})$. We show that the hierarchical structure can enable statistically more efficient learning compared to preceding algorithms that do not utilize the hierarchical structure. Specifically, if $d_\psi^3 N \ll d^3$, the regret bound can be significantly improved compared to LSVI-UCB (Jin et al., 2020), which has a regret bound of $\tilde{O}(d^{3/2}H^{3/2}\sqrt{T})$, where $d$ represents the dimension of the feature vector in the original MDP $\mathcal{M}$. Recall Theorem 1 and Corollary 1, which posit that in the majority of real-world environments, particularly those where the maximum number of directly reachable states $U$ is not proportional to the state space size $S$, the dimension of the feature vector $d$ is lower bounded by $S/U$. It is always the case that $d_\psi \leq M$, as we consider low-rank linear subMDPs. Hence, when $MN \ll S$ (indicative of a hierarchical structure) and $M^2 \ll S^2/U^3$ (signifying a small number of directly reachable states compared to $S$, a common scenario), the inequality $d_\psi^3 N \ll d^3$ can be easily satisfied. We can show this by the following chain of inequalities: $d_\psi^3 N \leq M^3 N \ll SM^2 \ll S^3/U^3 \leq d^3$.

8 Numerical Experiments

We run our numerical experiments on Block-RiverSwim, a variant of RiverSwim (Strehl & Littman, 2008), which repeats a sub-structure called a "Block" (see Appendix H for detailed descriptions). Thus, if the agent can make use of the repeated sub-structures by reusing the learned solution across blocks, it can learn the optimal policy efficiently.

Baselines. We compare our algorithm to other provably efficient RL algorithms with linear function approximation: model-based algorithms such as UC-MatrixRL (Yang & Wang, 2020) and UCRL-VTR (Ayoub et al., 2020), and model-free algorithms such as LSVI-UCB (Jin et al., 2020) and LSVI-PHE (Ishfaq et al., 2021).
We also include the results of UC-HRL (N=L), a variant of UC-HRL that naively learns the transition probabilities without the aggregation mappings, in order to directly verify the effect of leveraging the hierarchical structure.

Results. For a fair comparison, we sweep over the hyper-parameters for each algorithm over certain ranges. Figure 3 depicts learning curves over varying state sizes (and numbers of blocks) for UC-HRL and the other baseline algorithms. When the size of the state space is small and few sub-structures are repeated (e.g., $L = 4, R = 2, S = 8$), our algorithm, as well as the other model-based algorithms, performs relatively well. However, as the sub-structures repeat more ($R$ increases), our algorithm learns the optimal policy far more quickly than the other algorithms. The results demonstrate that our proposed algorithm is not only provably but also experimentally efficient when a hierarchical structure is present in the environment.

9 Conclusion

In this work, we first show that in the majority of real-world environments, the regret can depend on the size of the state space $S$, by showing that the feature dimension $d$ can be proportional to $S$. To mitigate this issue, we formalize a hierarchical decomposition in an aggregated state space and propose UC-HRL, which can significantly improve the regret bound if repeated sub-structures are present. However, utilizing a known hierarchical structure is not the sole solution. We leave the exploration of other, milder methods as a direction for future research. We anticipate that our research will serve as a pioneering study in rigorously highlighting the limitations of linear models and in enhancing the understanding of provably efficient hierarchical RL with function approximation.

---
For the Euclidean continuous state space, $d \geq \text{Vol}(S)/\text{Vol}(U)$.

ACKNOWLEDGEMENTS

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1C1C1006859 and RS-2023-00222663) and by the Creative-Pioneering Researchers Program and AI-Bio Research Grant through Seoul National University.

REFERENCES

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. *Advances in Neural Information Processing Systems*, 24:2312–2320, 2011.

David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, and Michael Littman. Value preserving state-action abstractions. In *International Conference on Artificial Intelligence and Statistics*, pp. 1639–1650. PMLR, 2020.

Alex Ayoub, Zeyu Jia, Csaba Szepesvari, Mengdi Wang, and Lin Yang. Model-based reinforcement learning with value-targeted regression. In *International Conference on Machine Learning*, pp. 463–474. PMLR, 2020.

Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. *Discrete Event Dynamic Systems*, 13(1):41–77, 2003.

James C Bean, John R Birge, and Robert L Smith. Aggregation in dynamic programming. *Operations Research*, 35(2):215–220, 1987.

Dimitri P Bertsekas, David A Castanon, et al. Adaptive aggregation methods for infinite horizon dynamic programming. 1988.

Steven J Bradtke and Andrew G Barto. Linear least-squares algorithms for temporal difference learning. *Machine Learning*, 22(1):33–57, 1996.

Qi Cai, Zhuoran Yang, Chi Jin, and Zhaoran Wang. Provably efficient exploration in policy optimization. In *International Conference on Machine Learning*, pp. 1283–1294. PMLR, 2020.
Thomas Dean and Robert Givan. Model minimization in Markov decision processes. In *AAAI/IAAI*, pp. 106–111, 1997.

Thomas Dean and Shieh-Hong Lin. Decomposition techniques for planning in stochastic domains. In *IJCAI*, volume 2, pp. 3. Citeseer, 1995.

Shi Dong, Benjamin Van Roy, and Zhengyuan Zhou. Provably efficient reinforcement learning with aggregated states. *arXiv preprint arXiv:1912.06366*, 2019.

Simon S. Du, Sham M. Kakade, Ruosong Wang, and Lin F. Yang. Is a good representation sufficient for sample efficient reinforcement learning? In *8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020*, 2020.

Bennett L Fox. Discretizing dynamic programs. *Journal of Optimization Theory and Applications*, 11:228–234, 1973.

Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, and Emma Brunskill. Regret minimization in MDPs with options without prior knowledge. *Advances in Neural Information Processing Systems*, 30, 2017.

Jiafan He, Dongruo Zhou, and Quanquan Gu. Logarithmic regret for reinforcement learning with linear function approximation. In *International Conference on Machine Learning*, pp. 4171–4180. PMLR, 2021.

Pihe Hu, Yu Chen, and Longbo Huang. Nearly minimax optimal reinforcement learning with linear function approximation. In *International Conference on Machine Learning*, pp. 8971–9019. PMLR, 2022.

Audrey Huang, Jinglin Chen, and Nan Jiang. Reinforcement learning in low-rank MDPs with density features. In *International Conference on Machine Learning*, pp. 13710–13752. PMLR, 2023.
PuCno7nwgH
When building the hyperedges for the proposed model, the authors used second-order interactions between categorical features. What is the time and storage complexity of the proposed algorithm? Will it explode the system if there are many categorical features available for the users and items?
Categorical Entity Features in Recommendation Systems Using Graph Neural Networks

Anonymous authors
Paper under double-blind review

Abstract

Graph neural networks are widely used in recommender engines and are commonly applied to user-item graphs augmented by various side information, including categorical entity features. It is established that a user's selection process involves a complex framework of preferences and the importance of presented alternatives. For example, a user's preferences might change depending on product category and/or brand. Thus, comprehending and modeling them effectively is essential in the context of recommender engines. Despite the significant influence of such categorical features on the user decision-making process, these have been incorporated in graph models in various ways without a clear indication of which method is most suitable. We investigate the capabilities of graph neural networks to extract and model categorical attribute-specific preferences effectively by systematically comparing existing techniques and graph models. These include one-hot encoding-based node features, category-value nodes, and categories as hyperedges. In addition, we introduce a novel hyperedge-based method designed to leverage categorical features more effectively than current approaches. The proposed model, which has a simple architecture and combines neighborhood aggregation with hyperedge aggregation, outperforms many complex and sophisticated methods. In extensive experiments using three real-world datasets, we compare existing methods and demonstrate the advantage of our approach in terms of commonly used quality metrics for recommender engines.

1 Introduction

E-commerce website users encounter the daunting challenge of sifting through an overwhelming number of products to find the right item. To address this issue, recommender system (RS) algorithms have been designed to understand user intentions and predict which items to shortlist. The central objective of these RS algorithms is to learn and extract user preferences effectively, enabling them to anticipate the next likely item of interest. This task poses significant difficulties, as users' decision-making process involves quantifying preferences and the importance of presented alternatives (Dyer & Sarin, 1979). For instance, user price preferences are highly influenced by the brand or product category. The process of clicking on the next item is driven by a complex interplay of product attributes and user preferences. Thus, categorical entity features are pivotal in effectively learning and modeling user preferences.

Given that user-item interactions can be naturally represented as graph data, where nodes represent users/items and edges correspond to interactions like clicks or purchases, many authors have successfully used graph neural networks (GNNs) for recommender engines (He et al., 2020; van den Berg et al., 2017; Li et al., 2023; Sun et al., 2020; Guo et al., 2021; Zheng et al., 2023; Liu et al., 2022; Hu et al., 2020; Li et al., 2021). The advantage of GNN-based user-item recommender systems is claimed to lie in their ability to incorporate information beyond user-item relations, including edges among users/items and diverse user and item features.
Although GNNs have been adopted for RS, there is limited research dedicated to understanding how best to incorporate categorical features and whether GNNs can effectively extract user preferences from such characteristics (e.g., price preferences, brand preferences, or the interaction of the two). In this paper, we investigate the role of categorical features in user-item recommender engines based on graph neural networks. We explore various techniques that are used to integrate categorical features of entities. Many papers include such information as binary-encoded node features (Sun et al., 2020; Guo et al., 2021) or add category-value nodes to the graph (Zheng et al., 2023; Liu et al., 2022; Hu et al., 2020; Li et al., 2021). However, authors usually do not explore or clarify why they selected a specific method. There are no definitive guidelines or studies on which approach is most suitable for integration with a particular GNN architecture and whether there are other ways worth considering. Therefore, we examine existing practices from the literature and propose a new method, category values as hyperedges, that utilizes categorical features more effectively than current methods. Using hyperedges in recommender engines is not novel and has already been studied (Zhang et al., 2022; Wang et al., 2020; Xia et al., 2021). However, most of this research focuses on session-based recommender engines, where hyperedges are created by combining different attributes (for example, all prices within a session form a hyperedge). It is to be noted that our examination focuses on user-item recommender systems and does not extend to session-based recommender systems. In addition, we concentrate on how entities' categorical features, i.e., users' and/or items' categorical features, can be effectively utilized, and do not study context features, i.e., categorical features of interactions.

The main contributions of this paper are as follows:
- Examination of categorical feature integration: We review the literature and examine how categorical features are integrated into the models. Furthermore, we extensively compare different techniques to find out how different methodologies impact model performance.
- New architecture: We introduce a new approach where categorical features of entities are used directly as hyperedges in GNN-based user-item recommender engines. We demonstrate that even though our approach has a simple architecture, it surpasses the performance of more sophisticated methodologies.
- Empirical comparison and validation: We conduct extensive experiments on three real-world datasets and show that the hyperedge approach outperforms other methodologies (e.g., category-value nodes and binary-encoded features). In addition, we benchmark our approach against state-of-the-art models. The findings suggest that hyperedges can effectively be used to extract user preferences that improve model accuracy.

2 RELATED WORK

We discuss the related work on categorical features in recommender engines in general and specifically for GNN-based methods.

2.1 RECOMMENDER ENGINES USING CATEGORICAL FEATURES

Early recommender systems used only user-item interaction data to generate new recommendations. In this context, categorical features were often considered in the pre- and post-processing stages of recommendation generation (Mei et al., 2018; Sun et al., 2019).
Several studies implemented item/user categories as pre- and post-filters (Hwang et al., 2012; Panniello et al., 2009; Davidson et al., 2010; Baltrunas & Ricci, 2009; Wadhwa et al., 2020). For instance, Davidson et al. (2010) used categories as a post-processing step to further narrow down a subset of items for presentation to the users. Baltrunas & Ricci (2009) utilized contextual item information as a pre-processing step. Pre- and post-filters were the first attempts to include additional information in recommender systems. Advancements in modeling recommendation engines have enabled the integration of categorical features in the learning process. In the context of user-item recommender engines, categorical features are either entity (user/item) specific or user-item interaction specific (Chen et al., 2019). User/item-specific attributes are called side information, for example, user age/gender or item category/brand. On the other hand, user-item interaction-specific features are called context (Meng et al., 2023; Adomavicius & Tuzhilin, 2015). Early studies have explored both context-aware and side information-aware recommender engines and suggested different methods to employ categorical features in the learning process.

Figure 1: Illustration of three graph models incorporating categorical entity features. In the first graph, categories are considered as features of items by creating binary vectors encoding the categorical value. The second graph represents each categorical value as an extra node. The graph on the right shows categories as hyperedges.

In the context of entity categorical features, early latent factor models utilized them as auxiliary information, serving as sparse features to create a user/item side-information matrix (Singh & Gordon, 2008; Veloso et al., 2019; Pasricha & McAuley, 2018). Representation learning models also leverage user and item features to predict user-item connections (Maeng et al., 2022; Cheng et al., 2016; Covington et al., 2016). These methodologies construct an input feature matrix using dense and sparse user/item features. For example, Dong et al. (2017) constructed user and item feature matrices for the MovieLens datasets, where item features contain 18 movie genre categories encoded as binary vectors; similarly, the user's age, gender, and occupation are utilized.

2.2 Categorical Features in Graph Neural Networks

In the absence of rich, distinctive input features for items and users, it is well established to use the identity matrix as the input feature matrix, i.e., each node is described by a one-hot encoding vector that is unique among all nodes (He et al., 2020; van den Berg et al., 2017; Li et al., 2023). However, when relevant entity features exist, authors rely primarily on two methods. The first commonly used technique is constructing binary-encoded vectors to represent categorical values. These binary vectors are then used directly as input features, or they are concatenated with the identity matrix (Sun et al., 2020; Guo et al., 2021). The latter is usually used when entities have insufficient unique features to differentiate users/items. The second method is category values as nodes. Several studies have adopted this technique (Zheng et al., 2023; Liu et al., 2022; Hu et al., 2020; Li et al., 2021). For example, Liu et al. (2022) created a user-item-attribute graph. Items were connected to attribute nodes, and user-attribute interest was extracted by an attribute-aware attention mechanism.
Similarly, Zheng et al. (2023) included item categorical features (price and categories) as extra nodes on the graph. They designed a two-branch factorization machine to extract price preferences (Sun et al., 2019). Li et al. (2021) utilized item attributes such as categories and location as nodes.

The relative effectiveness of these methods is not obvious. For example, some authors have pointed out the limitations of the binary-encoded category method (Zhang et al., 2022; Liu et al., 2022). When categories are included as one-hot encoded features, the input becomes very sparse, with only a few non-zero entries, which can lead to learning unreliable parameters (Liu et al., 2022). Similarly, creating category-value nodes and connecting them with item nodes might not directly extract user category preferences and dependencies (Zhang et al., 2022). Furthermore, there is a complex interdependence between the graph model used and the GNN architecture realizing the recommender engine. Various aggregation mechanisms for graphs and hypergraphs have been proposed. Moreover, approaches not only differ in their graph model for categorical features but also use various techniques, such as attention mechanisms, making it difficult to assess the impact of the representation of categorical features, although this is a crucial design decision.

Table 1: Summary of different methods: $|V|$ is the number of nodes, $|E|$ is the number of edges, M is the initial feature vector size, K is the number of all categorical values, $C_u$ is the number of user category features, $C_i$ is the number of item category features, $|V_u|$ is the number of user nodes, and $|V_i|$ is the number of item nodes.

| Method | Order of Graph | Size of Graph | Features |
|-------------------------------|----------------|----------------------------------------------------|---------------------------|
| Without categorical features | $|V|$ | $|E|$ | $|V| \times M$ |
| Categories as binary features | $|V|$ | $|E|$ | $|V| \times (M + K)$ |
| Category value nodes | $|V| + K$ | $|E| + (|V_u| \times C_u + |V_i| \times C_i)$ | $(|V| + K) \times M$ |
| Categories as Hyperedges | $|V|$ | $|E| + (|V_u| \times C_u + |V_i| \times C_i)$ | $|V| \times M$ |

We briefly mention that some authors used categorical features as edge features, mostly in context-aware recommender engines (Wu et al., 2022). Other research papers (Guo et al., 2021) built dual graphs to incorporate attribute information, one for user-item interactions and one for the attributes. Another way categorical features can be utilized on user-item graphs, which we propose here, is to use them directly as hyperedges. In graph theory, hyperedges are edges that connect any number of nodes simultaneously (Yadati et al., 2019; Huang & Yang, 2021). For example, two items can be linked via a hyperedge because they share the same brand and price level. The concept of hyperedges is not new, and many studies have used hypergraphs and hyperedges to model recommender engines (Zhang et al., 2022; Wang et al., 2020; Xia et al., 2021). However, most studies are limited to session-based recommendation engines, and most importantly, those studies create hyperedges based on combinations of item IDs and/or attributes, i.e., they introduce category-value nodes into the graph. For example, Zhang et al. (2022) proposed session-based recommender engines, where the nodes are prices, categories, and items. Hyperedges then connect some combination of those nodes, e.g., all price nodes within a session. The main advantage of hyperedges is that they can naturally model high-order interactions, which are common in real-world scenarios, and can thus be utilized to overcome the above-mentioned limitations.
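Before moving on, a minimal sketch (hypothetical counts; the variable names mirror Table 1 above) makes the growth summarized there concrete:

```python
# Hypothetical counts for a user-item graph with item-side categories.
V_u, V_i = 1_000, 5_000        # user and item nodes
E, M = 20_000, 64              # interaction edges, base feature size
C_u, C_i, K = 0, 2, 45         # category features per entity, total values

V = V_u + V_i
extra = V_u * C_u + V_i * C_i  # category links / hyperedge memberships

rows = {
    "no categories":   (V,     E,         (V, M)),
    "binary features": (V,     E,         (V, M + K)),
    "category nodes":  (V + K, E + extra, (V + K, M)),
    "hyperedges":      (V,     E + extra, (V, M)),
}
for name, (order, size, feats) in rows.items():
    print(f"{name:15s} order={order:5d} size={size:6d} features={feats}")
```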
3 Preliminaries

3.1 Graph and input formulation for different techniques

Figure 1 depicts the three discussed approaches for incorporating categorical features into GNN-based user-item recommender engines, i.e., binary encoding of categorical features, category-value nodes, and categories as hyperedges. Below, we describe how the graph changes when adopting the different methods. We define an undirected bipartite graph $G = (V, E)$ with $V$ consisting of user and item nodes, $V = V_u \cup V_i$. The edge set $E$ contains interaction edges between user-item node pairs $(u, i)$. Users and items have associated non-categorical feature vectors of size M. Let us assume that users and items have $C_u$ and $C_i$ categorical features, respectively. Finally, K is the number of all category values for both users and items. Table 1 summarizes how the order of the graph, the size of the graph, and the feature matrix transform with the different methods. The order is defined as the number of nodes and the size as the number of edges (Harris et al., 2008). We can observe that in the hyperedge method, the size of the graph increases by the number of nodes times the number of category features, without increasing the number of nodes or the feature matrix. In general, the size of the graph increases by the number of hyperedges. In the case of binary-encoded categorical values, the input features grow by the number of all category values. For category values as nodes, both the graph's size and the graph's order increase, as does the feature matrix.

Figure 2: An example of incorporating price level and product category features as hyperedges. As input, we have a bipartite graph with two types of nodes (users, items), and items have two categorical attributes. For example, $i_3$ has price level 1 and category value 1. On the bipartite graph, we have two types of aggregation. A simple GCN layer aggregates neighboring information. The second is hyperedge aggregation. Finally, they are combined to make a final prediction.

4 METHODOLOGY

In the previous sections, we discussed existing methods and motivated our new approach and the concept underlying it. Here, we discuss its concrete realization and present a unified framework to compare the different methodologies. We adopt the categories-as-hyperedges concept to study price and product category dependencies for an e-commerce recommender engine. Figure 2 illustrates the proposed model architecture. Here, we have a standard undirected bipartite graph $G = (V, E)$ with $V$ consisting of user and item nodes, $u \in U, i \in I$. Items have two categorical features: $p \in P$ and $c \in C$ ($p$ stands for the price level and $c$ for the product category). The edge set $E$ contains interaction edges $(u, i)$ and hyperedges for each category value, $h_c, h_p, h_{cp}$. Hyperedge construction is as follows: for every category value, one hyperedge is created, and all items that share the same category value are connected by it. Similarly, we create a hyperedge for all users who interacted with items of the same category value. In addition, interaction hyperedges $h_{cp}$ are constructed (e.g., price level = 1 and product category = 'tablets' forms one hyperedge).
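A minimal sketch of this construction (hypothetical toy data; keeping user and item hyperedges separate here is a choice of the sketch, not prescribed by the model):

```python
from collections import defaultdict

# Hypothetical toy data: item attributes and user-item interactions.
item_attrs = {"i1": (1, "tablets"), "i2": (1, "phones"), "i3": (2, "tablets")}
interactions = [("u1", "i1"), ("u1", "i3"), ("u2", "i2")]

hyperedges = defaultdict(set)  # hyperedge id -> set of member nodes
for item, (p, c) in item_attrs.items():
    hyperedges[("item_price", p)].add(item)         # h_p: shared price level
    hyperedges[("item_cat", c)].add(item)           # h_c: shared category
    hyperedges[("item_cat_price", c, p)].add(item)  # h_cp: interaction

for user, item in interactions:
    p, c = item_attrs[item]
    hyperedges[("user_price", p)].add(user)         # users who bought price p
    hyperedges[("user_cat", c)].add(user)           # users who bought cat c
    hyperedges[("user_cat_price", c, p)].add(user)

print(dict(hyperedges))
```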
During the learning process, we have two types of aggregation on the graph. One is a standard graph convolutional layer (Kipf & Welling, 2017) to capture neighborhoods, and the second is a hyperedge aggregation. Finally, we combine these two aggregations and use them for the prediction. The pseudo-code is shown in Algorithm 1.

4.1 ENCODER

Below is the exact formulation of the encoding part of the model. As mentioned above, for neighborhood aggregation, we use the GCN layer. For the hyperedge aggregation, we adapt the UniSAGE aggregation (Huang & Yang, 2021), extending GraphSAGE (Hamilton et al., 2017) to hypergraphs. The exact node-level formulation for a node $v$ is:
\[
h_{v}^{l+1} = \sigma \left( W_n^l \sum_{u \in N(v) \cup \{v\}} \frac{1}{\sqrt{d_u d_v}} h_u^l \right) \parallel \left( W_h^l \left( h_v^l + \sum_{e \in E_v} h_e^l \right) \right), \tag{1}
\]
where the left term corresponds to the node-level formulation of GCN (Kipf & Welling, 2017), $E_v$ is the set of hyperedges containing $v$, $h_e^l$ is the embedding of the hyperedge $e$, obtained as $h_e^l = \frac{1}{|e|} \sum_{u \in e} h_u^l$, $W_n^l, W_h^l$ are learnable parameters for the neighborhood and hyperedge aggregation, respectively, $\sigma$ is a nonlinear activation function, and $\parallel$ denotes concatenation.

4.2 Decoder and Loss Function

To predict user preferences, we use the inner product of the final user and item representations. We combine this with the Bayesian Personalized Ranking (BPR) loss function (Rendle et al., 2009) to train the model. The combination of the inner product and BPR loss is a well-established framework for training recommender engines (Yue et al., 2023; He et al., 2020; Liu et al., 2022; Wang et al., 2019; Li et al., 2021; Lin et al., 2022). The exact formulation of the decoder is as follows:
\[
y_{ui} = z_u^\top z_i,
\]
where $z_u, z_i$ are the final user and item representations. This approach implies that the similarity of a user to an item is proportional to the dot product of their representations (Hamilton, 2020). BPR loss is a widely used method since it considers positive and negative user-item pairs. BPR encourages models to rank positive user-item interactions higher than negative user-item interactions. The precise formulation of the loss function is as follows:
\[
L = \sum_{(u,i,j) \in O} -\ln \sigma\big(s(u,i) - s(u,j)\big) + \lambda\|\Theta\|^2,
\]
where $O$ denotes the set of positive-negative sample pairs, representing user $u$ with a positive item $i$ and a negative item $j$, $\sigma$ denotes the sigmoid function, which maps the predicted scores to probabilities between 0 and 1, $s(u,i), s(u,j)$ are the predicted scores for positive and negative items, respectively, and $\Theta$ represents the model parameters, where $\lambda$ controls the L2 regularization.
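The following minimal PyTorch sketch (hypothetical tensor shapes; `adj_norm` denotes the symmetrically normalized adjacency and `incidence` the node-hyperedge incidence matrix, both assumed precomputed) illustrates one layer of Equation 1 together with the BPR objective:

```python
import torch
import torch.nn.functional as F

def encoder_layer(h, adj_norm, incidence, W_n, W_h):
    """One layer of Eq. 1: a GCN term concatenated with a hyperedge term."""
    gcn = torch.relu(adj_norm @ (h @ W_n))            # neighborhood part
    edge_sizes = incidence.sum(dim=0).clamp(min=1.0)  # nodes per hyperedge
    h_e = (incidence.t() @ h) / edge_sizes[:, None]   # mean over members
    hyper = (h + incidence @ h_e) @ W_h               # UniSAGE-style part
    return torch.cat([gcn, hyper], dim=1)

def bpr_loss(z_u, z_pos, z_neg, params, lam=1e-5):
    """BPR loss: rank positive items above sampled negative items."""
    s_pos = (z_u * z_pos).sum(dim=1)  # inner-product scores s(u, i)
    s_neg = (z_u * z_neg).sum(dim=1)  # inner-product scores s(u, j)
    reg = sum(p.pow(2).sum() for p in params)
    return -F.logsigmoid(s_pos - s_neg).sum() + lam * reg
```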
5 Experimentation

5.1 Experimental Settings

**Research Questions:** In our study, we performed extensive experimentation to evaluate the various approaches and answer the following research questions:
- **RQ1** Do existing GNN-based user-item recommendation systems benefit from categorical entity (user/item) features?
- **RQ2** What is the best way to incorporate categorical features in a graph model?
- **RQ3** Can we develop GNN-based user-item recommender engines that effectively use categorical features to improve their prediction accuracy?

**Datasets:** To examine model performance, we use three real-world datasets: the Yelp2018, Amazon Tools 5-core, and Amazon Grocery 5-core datasets. Table 2 depicts a summary of the datasets.

Table 2: Statistics of the datasets

| Datasets | #users | #items | #interaction | #price level | #category |
|----------------|--------|--------|--------------|--------------|-----------|
| Amazon Grocery | 8535 | 12906 | 145755 | 21 | 24 |
| Amazon Tools | 17642 | 26087 | 291361 | 21 | 13 |
| Yelp | 19301 | 17587 | 452931 | 4 | 83 |

- The Yelp2018 dataset is widely used for recommender engines. Here, restaurants are considered as items for which users have reviews. Price categories, e.g., how expensive the restaurant is, and restaurant subcategories are extracted. We follow the same approach as the PUP paper and use a 10-core setting, only keeping users and items with at least ten interactions. [https://www.yelp.com/dataset/]
- The Amazon Tools 5-core dataset is adapted. Subcategories and prices are used to create categorical features. Price buckets are created by grouping values within an interval of 5. Furthermore, subcategories are used to create category features. As above, the 10-core setting is applied.
- The Amazon Grocery 5-core dataset: similar to the Amazon Tools dataset, we use subcategories and prices. Price categories are created by grouping prices into 5-euro buckets. The first-level subcategories are used as categories. As above, the 10-core setting is applied.

For each dataset, we rank the interactions by timestamps. We then split them consecutively 60/20/20 into training, validation, and testing datasets. We use 1:1 negative sampling, i.e., for every positive training edge, we create one negative sample. An item is considered negative if the user did not interact with it.

**Evaluation Metrics:** To evaluate the model performances, we adopted two widely used evaluation metrics, Recall at K and Normalized Discounted Cumulative Gain (NDCG) at K (He et al., 2015). Recall@K measures how many relevant items are in the top-K recommended items, while NDCG@K focuses on the quality of the ranking: it takes into account the position at which an item is recommended. We use the top-K ranks 50 and 100. The reported results are average values over the number of users. Furthermore, we run each method 10 times, and mean values are reported in the tables.
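A minimal sketch (hypothetical inputs) of the per-user metrics used above:

```python
import numpy as np

def recall_at_k(ranked, positives, k):
    """Fraction of held-out positives appearing in the top-K list."""
    return len(set(ranked[:k]) & set(positives)) / len(positives)

def ndcg_at_k(ranked, positives, k):
    """Position-aware gain, normalized by the ideal ranking."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in positives)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(positives), k)))
    return dcg / idcg

print(recall_at_k([3, 7, 1], positives=[1, 9], k=3))  # 0.5
print(ndcg_at_k([3, 7, 1], positives=[1, 9], k=3))    # ~0.31
```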
To test RQ3, we compare our hyperedge approach with state-of-the-art models. The competitive models we picked are BPR-MF, A2-GCN, PUP, and CatGCN. All except BPR-MF incorporate categorical features into the model learning process.

- **BPR-MF** [Koren et al., 2009] is a classical matrix factorization method combined with Bayesian personalized ranking loss for optimization. It is based only on user-item interactions and ignores side information.
- **A2-GCN** [Liu et al., 2022] is an attribute-aware recommender engine that incorporates categorical attributes as extra nodes in the graph. It uses an attention mechanism to model user preferences.
- **PUP** [Zheng et al., 2023] is a price-aware recommender engine. This method considers categories as nodes and deploys a custom decoder to capture the global and local influence of prices and categories.
- **CatGCN** (Chen et al., 2023) uses item categorical side information to enrich the initial user feature representation. CatGCN was implemented for user node classification tasks; we adapt it for link prediction as follows: we use item categorical features to enrich the users' initial representations and adopt the identity matrix for the item features. We then combine user and item features and pass them into GCN layers. The training process for link prediction is identical to our hyperedge approach, i.e., we use the same decoder mechanism.

Table 3: Performance comparison with different approaches to include categorical features at K=50

| Dataset | Model | Price Recall@50 | Price nDCG@50 | Category Recall@50 | Category nDCG@50 | Price And Category Recall@50 | Price And Category nDCG@50 |
|---------------|-----------|-----------------|---------------|--------------------|------------------|-------------------------------|----------------------------|
| Amazon Grocery| GCN_w | 0.0745 | 0.0342 | 0.0745 | 0.0342 | 0.0745 | 0.0342 |
| | GCN_n | 0.0769 | 0.0352 | 0.0751 | 0.0342 | 0.0782 | 0.0357 |
| | GCN_f | 0.0728 | 0.0328 | 0.0720 | 0.0328 | 0.0700 | 0.0317 |
| | GCN_h | **0.0802** | **0.0370** | **0.0813** | **0.0377** | **0.0822** | **0.0377** |
| Amazon Tools | GCN_w | 0.0321 | 0.0139 | 0.0321 | 0.0139 | 0.0321 | 0.0139 |
| | GCN_n | 0.0346 | 0.0150 | 0.0320 | 0.0139 | 0.0342 | 0.0149 |
| | GCN_f | 0.0307 | 0.0132 | 0.0306 | 0.0131 | 0.0289 | 0.0124 |
| | GCN_h | **0.0383** | **0.0164** | **0.0379** | **0.0165** | **0.0383** | **0.0166** |
| Yelp | GCN_w | 0.2137 | 0.0984 | 0.2137 | 0.0984 | 0.2137 | 0.0984 |
| | GCN_n | 0.2157 | **0.1001** | 0.2138 | 0.0983 | 0.2158 | **0.1003** |
| | GCN_f | 0.2137 | 0.0983 | 0.2133 | 0.0979 | 0.2136 | 0.0980 |
| | GCN_h | **0.2150** | **0.1001** | **0.2172** | **0.1011** | **0.2204** | **0.1024** |

**Implementation Details:** For all baselines, we used the publicly available original implementations with their default parameters. We set the maximum number of training epochs to 200.
For our hyperedge model, we performed a hyperparameter search over the learning rate in (0.1, 0.01, 0.001, 0.0001) and the L2 regularization weight in (1e-10, 1e-8, 1e-5, 1e-4) using the BPR loss function. The embedding size is fixed at 64. The Adam optimizer is used for optimization, and training is done in full-batch mode. We use a one-layer model and report average values over ten runs.

5.2 Performance Comparison RQ1 and RQ2

Table 3 shows model performance at the top-K=50 position; the best performances are highlighted in bold. There are several interesting observations. First, adding categorical features to the model is not always beneficial: in all datasets, GCN_w is better than GCN_f. This is contrary to the expectation that more input features lead to better performance. It does not necessarily mean that the features are meaningless; rather, the model may be unable to learn reliable parameters for sparse input features. Including categorical values as extra nodes is usually better than not including them at all: in 7 out of 9 scenarios, GCN_n is better than GCN_w. Furthermore, the results show that in almost all cases, including categorical features as nodes is superior to the binary-encoded method.

The second research question focuses on identifying the best way to include categorical features. Our results suggest that including category features as hyperedges is always better than not including them at all, and by and large, the hyperedge method outperforms the other methods in almost all scenarios; in only one case does GCN_n achieve better results than GCN_h. Furthermore, performance varies across datasets, indicating that the efficacy of model selection is influenced by dataset structure. The results for top-K=100 can be found in the Appendix, where we make similar observations as for top-K=50. These findings suggest that models do not automatically benefit from categorical features, and deciding how to integrate them should be part of model selection.

Table 4: Performance comparison with competitive baselines

| Datasets | Model | Recall@50 | Recall@100 | nDCG@50 | nDCG@100 |
|----------------|---------|-----------|------------|---------|----------|
| Amazon Grocery | BPR-MF | 0.0569 | 0.0834 | 0.0276 | 0.0337 |
| | CatGCN | 0.0349 | 0.0607 | 0.0139 | 0.0199 |
| | A2-GCN | 0.0510 | 0.0853 | 0.0212 | 0.0291 |
| | PUP | 0.0745 | 0.1106 | 0.0340 | 0.0424 |
| | GCN_h | **0.0822**| **0.1209** | **0.0377**| **0.0467**|
| Amazon Tools | BPR-MF | 0.0282 | 0.0443 | 0.0123 | 0.0160 |
| | CatGCN | 0.0123 | 0.0232 | 0.0047 | 0.0073 |
| | A2-GCN | 0.0236 | 0.0404 | 0.0097 | 0.0135 |
| | PUP | 0.0321 | 0.0511 | 0.0140 | 0.0184 |
| | GCN_h | **0.0383**| **0.0609** | **0.0166**| **0.0218**|
| Yelp | BPR-MF | 0.2123 | 0.3280 | 0.0999 | 0.1291 |
| | CatGCN | 0.1054 | 0.1831 | 0.0462 | 0.0663 |
| | A2-GCN | 0.1883 | 0.2979 | 0.0889 | 0.1167 |
| | PUP | **0.2221**| **0.3417** | **0.1024**| **0.1326**|
| | GCN_h | 0.2204 | 0.3384 | **0.1024**| **0.1322**|

5.3 Performance Comparison RQ3

In RQ1 and RQ2, we were solely interested in understanding whether it matters how categorical features are included in GNNs; hence, we used standard GCN approaches to compare the various techniques. To answer RQ3, we further compare the hyperedge model with current state-of-the-art models. Table 4 summarizes the experimental results and shows that our approach is, by and large, the most effective way to model categorical features, with PUP achieving competitive results.
The performance of $GCN_h$ is particularly strong on the Amazon Grocery and Amazon Tools datasets: on Amazon Grocery, $GCN_h$ outperforms the second-best result by 10 percent, and on Amazon Tools the improvement is almost 18 percent over the second-best result. On the Yelp dataset, our model shows competitive performance. It is notable that in some cases even simple BPR-MF outperforms competitive baselines such as A2-GCN and CatGCN. The hyperedge model has the simplest architecture compared to A2-GCN, PUP, and CatGCN, which rely on attention mechanisms, customized decoders, or local and global embedding learning. Still, our approach outperforms those methods, in some cases by a significant margin.

6 Conclusions and Future Work

This research paper examined different methods of incorporating categorical entity features into GNN-based user-item recommender engines. Extensive experimentation was conducted to compare traditional approaches, such as category-value nodes and binary-encoded category features, to category-value hyperedges, as well as to using no categorical features at all. We tested on three datasets with three different scenarios (i.e., including only the product category, only the price level, or both together). Our findings suggest that the hyperedge approach outperforms the other techniques in almost all cases. Another interesting observation is that including binary-encoded categorical features almost always makes the model worse than not including them at all. Furthermore, we compared the hyperedge approach to competitive baselines such as PUP, A2-GCN, and CatGCN, which studied categorical features in GNN-based user-item recommender engines. By and large, the findings demonstrate the superiority of the hyperedge approach. For future work, further investigation is needed into how model architecture influences the most effective method for incorporating categorical features. Moreover, we hope that our study will motivate other researchers to dive deeper into GNNs' ability to extract complex user preferences as well as category dependencies.

REFERENCES

Gediminas Adomavicius and Alexander Tuzhilin. Context-aware recommender systems. In Francesco Ricci, Lior Rokach, and Bracha Shapira (eds.), Recommender Systems Handbook, pp. 191–226. Springer, 2015. doi: 10.1007/978-1-4899-7637-6_6. URL https://doi.org/10.1007/978-1-4899-7637-6_6

Linas Baltrunas and Francesco Ricci. Context-based splitting of item ratings in collaborative filtering. In Lawrence D. Bergman, Alexander Tuzhilin, Robin D. Burke, Alexander Felfernig, and Lars Schmidt-Thieme (eds.), Proceedings of the 2009 ACM Conference on Recommender Systems, RecSys 2009, New York, NY, USA, October 23-25, 2009, pp. 245–248. ACM, 2009. doi: 10.1145/1639714.1639759. URL https://doi.org/10.1145/1639714.1639759

Weijian Chen, Fuli Feng, Qifan Wang, Xiangnan He, Chonggang Song, Guohui Ling, and Yongdong Zhang. Catgcn: Graph convolutional networks with categorical node features. IEEE Trans. Knowl. Data Eng., 35(4):3500–3511, 2023. doi: 10.1109/TKDE.2021.3133013. URL https://doi.org/10.1109/TKDE.2021.3133013

Wen-Hao Chen, Chin-Chi Hsu, Yi-An Lai, Vincent Liu, Mi-Yen Yeh, and Shou-De Lin. Attribute-aware recommender system based on collaborative filtering: Survey and classification. Frontiers Big Data, 2:49, 2019. doi: 10.3389/fdata.2019.00049.
URL https://doi.org/10.3389/fdata.2019.00049 Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. Wide & deep learning for recommender systems. In Alexandros Karatzoglou, Balázs Hidasi, Domonkos Tikk, Oren Sar Shalom, Haggai Roitman, Bracha Shapira, and Lior Rokach (eds.), Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, DLRS@RecSys 2016, Boston, MA, USA, September 15, 2016, pp. 7–10. ACM, 2016. doi: 10.1145/2988450.2988454. URL https://doi.org/10.1145/2988450.2988454 Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys ’16, pp. 191–198, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450340359. doi: 10.1145/2959100.2959190. URL https://doi.org/10.1145/2959100.2959190 James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van Vleet, Ullas Gargi, Sujoy Gupta, Yu He, Mike Lambert, Blake Livingston, and Dasarathi Sampath. The youtube video recommendation system. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys ’10, pp. 293–296, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605589060. doi: 10.1145/1864708.1864770. URL https://doi.org/10.1145/1864708.1864770 Xin Dong, Lei Yu, Zhonghuo Wu, Yuxia Sun, Lingfeng Yuan, and Fangxi Zhang. A hybrid collaborative filtering model with deep structure for recommender systems. In Satinder Singh and Shaul Markovitch (eds.), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pp. 1309–1315. AAAI Press, 2017. URL http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14676 James S Dyer and Rakesh K Sarin. Measurable multiattribute value functions. Operations research, 27(4):810–822, 1979. Wei Guo, Rong Su, Renhao Tan, Huifeng Guo, Yingxue Zhang, Zhirong Liu, Ruiming Tang, and Xiaojiang He. Dual graph enhanced embedding neural network for CTR prediction. In Feida Zhu, Beng Chin Ooi, and Chunyan Miao (eds.), KDD ’21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, pp. 496–504. ACM, 2021. doi: 10.1145/3447548.3467384. URL https://doi.org/10.1145/3447548.3467384 William L. Hamilton. Graph Representation Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2020. doi:
3NXhwkZGjz
The proposed pseudo label consolidation method seems similar to HCL Huang et al. (2021) which also aggregates predictions from multiple models/hypotheses to regularize/generate the final pseudo labels. Please discuss the differences, advantages and disadvantages of HCL and the proposed method.
Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale

Anonymous authors
Paper under double-blind review

Abstract

Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data. The primary difficulty in this task is that the model's predictions may be inaccurate, and using these inaccurate predictions for model adaptation can lead to misleading results. To address this issue, this paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis. By consolidating these hypothesis rationales, we identify the most likely correct hypotheses, which we then use as a pseudo-labeled set to support a semi-supervised learning procedure for model adaptation. To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning. Extensive experimental results demonstrate that our approach achieves state-of-the-art performance in the SFUDA task and can be easily integrated into existing approaches to improve their performance.

1 Introduction

The success of deep learning models in visual tasks largely depends on whether the training and testing data share similar distributions [He et al., 2016; Liang et al., 2020b]. However, when the distribution of the testing data differs significantly from that of the training data, a situation known as domain shift, the performance of these models can decrease substantially [Tzeng et al., 2017; Peng et al., 2019]. To mitigate the effects of domain shift and reduce the need for data annotations, Unsupervised Domain Adaptation (UDA) techniques have been developed to transfer knowledge from annotated source domains to new but related target domains without requiring annotations in the target domain [Hoffman et al., 2018; Long et al., 2018; Dai et al., 2020; Feng et al., 2021; Mei et al., 2020]. However, most UDA-based methods rely on access to labeled source domain data during adaptation, and such access may not always be feasible due to privacy concerns. As a result, Source-Free Unsupervised Domain Adaptation (SFUDA) [Liang et al., 2020a; Yang et al., 2021b,a; Chen et al., 2022; Yang et al., 2022; Zhang et al., 2022; Karim et al., 2023] has gained much attention recently, as it only requires a pre-trained model from the source domain and unlabeled data from the target domain. The main challenge in SFUDA research is how to generate supervision solely from unlabeled data. Current approaches in SFUDA research primarily focus on either generating pseudo-labels [Liang et al., 2020a; Yang et al., 2021b,a; Litrico et al., 2023] or conducting unsupervised feature learning [Huang et al., 2021; Chen et al., 2022; Zhang et al., 2022; Karim et al., 2023; Litrico et al., 2023] to address this issue. To generate reliable pseudo-labels, existing methods [Liang et al., 2020a; Yang et al., 2021b,a] often utilize the distribution of the target domain data to refine the initial predictions from the source domain, e.g., via clustering [Liang et al., 2020a] or using the predictions of neighboring samples [Yang et al., 2021a; Litrico et al., 2023].
On the other hand, unsupervised feature learning, such as contrastive learning, is often employed as an auxiliary task to encourage the features to adapt to the target domain [Huang et al., 2021; Chen et al., 2022; Zhang et al., 2022; Karim et al., 2023; Litrico et al., 2023].

In our study, we propose a novel approach to tackle the challenge of SFUDA. Our strategy involves deferring the use of label predictions for model updates in the early stages and carefully selecting the most reliable predictions to construct a pseudo-labeled set. The key innovation of our approach lies in considering multiple prediction hypotheses for each sample, accommodating the possibility of multiple potential labels for each data point. We treat each label assignment as a hypothesis and delve into the rationale and supporting evidence behind each prediction. We utilize a representation derived from GradCAM \cite{Selvaraju2017} to encode the rationale for predicting an instance as a hypothetical label. Our methodology is inspired by the belief that the correctness of a prediction can be assessed more reliably by analyzing the reasoning behind it, rather than relying solely on prediction probabilities. Subsequently, we develop a consolidation method to determine the most trustworthy hypotheses and utilize them as the labeled dataset in a semi-supervised learning framework. By employing this technique, we effectively transform the SFUDA problem into a conventional semi-supervised learning problem.

Concretely, our approach consists of three key steps: model pre-adaptation, hypothesis consolidation, and semi-supervised learning. We have empirically observed that pre-adapting the model can enhance the effectiveness of the second step. To accomplish this, we introduce a straightforward objective that encourages prediction smoothness from the network. In the final step, we leverage the widely-used FixMatch \cite{Sohn2020} algorithm as our chosen semi-supervised learning method. Through extensive experimentation, we demonstrate the clear advantages of our approach over existing methods in the SFUDA domain and show that the proposed method can be easily integrated into existing approaches to bring improvements.

2 RELATED WORK

UDA. Unsupervised domain adaptation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain. Various approaches have been proposed to address this task, including discrepancy minimization \cite{Tzeng2014,Ganin2015,Long2015}, adversarial learning \cite{Hoffman2018,Long2018,Tzeng2017,Vu2019}, and contrastive learning \cite{Dai2020,Kang2019}. Recently, self-training using labeled source data and pseudo-labeled target data has emerged as a prominent approach in UDA research \cite{Feng2021,Mei2020,Xie2020,Yu2021,Zou2018}. However, these methods typically rely on access to the source data, making them inapplicable when source data is unavailable.

SFUDA. Source-free unsupervised domain adaptation involves adapting a pre-trained model from a source domain to a target domain without access to the source data, source labels, or target labels. Existing SFUDA methods can be broadly categorized into two classes: i) Label Refinement: methods such as SHOT \cite{Liang2020a}, G-SFDA \cite{Yang2021b}, NRC \cite{Yang2021a}, and GPL \cite{Litrico2023} focus on refining pseudo labels. SHOT generates pseudo labels using centroids obtained in an unsupervised manner.
G-SFDA, NRC, and GPL refine pseudo labels through consistent predictions and nearest-neighbor knowledge aggregation from local neighboring samples. ii) Contrastive Feature Learning: approaches such as HCL \cite{Huang2021}, C-SFDA \cite{Karim2023}, AdaContrast \cite{Chen2022}, GPL \cite{Litrico2023}, and DaC \cite{Zhang2022}. HCL and C-SFDA use a contrastive loss similar to MoCo \cite{He2016}, where positive pairs consist of augmented query samples and negatives are other samples. AdaContrast and GPL exclude same-class negative pairs based on pseudo labels. DaC divides the target data into source-like and target-specific samples, computes source-like class centroids, and generates negative pairs using these centroids. These methods tackle SFUDA by refining pseudo labels or leveraging contrastive feature learning, demonstrating the potential of different strategies for adapting models without access to labeled source data or target labels.

3 METHOD

In the source-free unsupervised domain adaptation (SFUDA) setting, only a pretrained source model and unlabeled data in the target domain are given. The task is to adapt the model to the target domain using unlabeled target data only. Our approach sequentially applies three steps, described in Sec. 3.1, Sec. 3.2 and Sec. 3.3.

Figure 1: The visualizations illustrate the GradCAM \cite{Selvaraju2017} for predicting the image to a specific class. In the right-half section, it can be observed that even though the prediction is incorrect, the obtained rationale (region highlighted in the GradCAM) based on the correct label remains reasonable and resembles the rationale of the corresponding class depicted in the left-half section.

3.1 Model Pre-adaptation via Encouraging Smooth Prediction

The first step of our approach is to make an initial adaptation to reduce the domain gap; we empirically find that such a step benefits the following steps. We develop a pre-adaptation strategy that encourages a smooth prediction on the data manifold.\footnote{Other pre-adaptation approaches may also work, such as the methods in \cite{Liang2020,Yang2022}; please refer to Sec. 4.4 for more experimental evidence.} Specifically, we create a memory $Q \in \mathbb{R}^{N_q \times d}$ to store $N_q$ randomly sampled image features and update it after each training batch (we set $N_q$ equal to the number of target samples in the dataset). Then, for each target sample $x_i$, we find its $z$ nearest neighbors $\mathcal{NN}(x_i)$ and the $z$ samples $\mathcal{FN}(x_i)$ furthest from $x_i$, based on the Euclidean distance between the image feature of $x_i$ and the features in $Q$ (we choose $z = 3$ in our implementation). We then optimize the following objective:

$$\mathcal{L}_{PA} = \mathcal{L}_{SM} + \lambda \mathcal{L}_{FAR} = \sum_{i=1}^{N_B} \sum_{x'_j \in \mathcal{NN}(x_i)} \mathrm{KL}\big(p(x_i), p(x'_j)\big) + \lambda \sum_{i=1}^{N_B} \sum_{x'_j \in \mathcal{FN}(x_i)} p(x_i)^\top p(x'_j),$$

where KL denotes the Kullback-Leibler divergence and $p$ denotes the posterior probability predicted by the source model. $N_B$ is the number of samples within a mini-batch. The first term ensures that similar samples have similar predictions. However, using the first term alone may lead to a trivial solution that assigns identical predictions to every instance.
Thus we use the second term to counteract it, as it ensures that the least similar samples have divergent posterior probabilities, i.e., the inner product between their posteriors should be close to zero.

3.2 Hypothesis Consolidation from Prediction Rationale

After pre-adaptation, the model generally exhibits improved adaptation to the target domain. However, there may still be instances where the model produces incorrect predictions, making it challenging to rectify misclassifications solely based on predicted posterior probabilities. Therefore, in the second step, we explore a more robust methodology for analyzing predictions. We begin by considering multiple prediction hypotheses for each individual instance. Specifically, for each instance, we consider the top $\tilde{k}$ classes with the highest posterior probabilities as potential prediction hypotheses, denoted as $(x_i, y^h_{ik})$, $k \in$ top $\tilde{k}$. In other words, we acknowledge that the correct class label could exist within one of these top $\tilde{k}$ classes, even though we do not know which one. To further analyze each hypothesis $(x_i, y^h_{ik})$, we calculate the GradCAM \cite{Selvaraju2017} to identify the regions that contribute to supporting the prediction of $y^h_{ik}$, resulting in a representation...

Figure 2: In our method, we generate multiple prediction hypotheses based on the posterior probability of the current model. An image \( I \) and its hypothetical label form a hypothesis, for example, \((I, y = \text{clock})\). For each hypothesis, GradCAM is calculated based on the hypothetical label, resulting in the corresponding rationale representation \( a \). Subsequently, we calculate the centroid of the rationale representations for each class.

called the rationale representation \( a_{ik} \). This rationale representation encodes the evidence supporting the corresponding hypothesis. Drawing inspiration from prior work [Shu et al., 2022, 2023], we formally calculate \( a_{ik} \) as

\[
a_{ik} = \frac{1}{HW} \sum_{m=1}^{H} \sum_{n=1}^{W} \left[ \left( \frac{\partial\, \mathrm{logit}(y^h_{ik})}{\partial [\phi(x_i)]_{m,n}} \right)^{\!\top} [\phi(x_i)]_{m,n} \right]_+ [\phi(x_i)]_{m,n} \in \mathbb{R}^{d'},
\]

where \( \phi(x_i) \in \mathbb{R}^{H \times W \times d'} \) is the feature map of the last convolutional layer of the network, with height \( H \), width \( W \), and \( d' \) channels, \( [\phi(x_i)]_{m,n} \in \mathbb{R}^{d'} \) is the feature vector located at the \((m, n)\)-th grid, \( \mathrm{logit}(y^h_{ik}) \) is the logit for class \( y^h_{ik} \), and \([\cdot]_+ = \max(\cdot, 0)\). The term \( \left[ \left( \frac{\partial\, \mathrm{logit}(y^h_{ik})}{\partial [\phi(x_i)]_{m,n}} \right)^{\!\top} [\phi(x_i)]_{m,n} \right]_+ \) is the GradCAM value at the \((m, n)\)-th grid. Essentially, the calculation of \( a_{ik} \) performs weighted average pooling over \( \phi(x_i) \) according to the GradCAM. Figure 1 shows the GradCAM calculated from different hypotheses for the same image. Upon observation, we notice that even if the ground-truth class is not ranked as the top prediction by the model, its associated rationale remains reasonable and similar to the common rationale patterns of the corresponding class. This inspires us to leverage this observation to analyze the model’s current predictions. For example, if an instance has a prediction hypothesis that exhibits a rationale similar to the corresponding class’s common rationale but is not ranked as the top prediction, then the top prediction may not be correct.
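The following is a minimal PyTorch sketch of how the rationale representations above can be computed for the top-$\tilde{k}$ hypotheses of one image. The split of the network into `features`/`classify` and all names are illustrative assumptions, not our actual implementation.

```python
import torch

def rationale_representations(model, x, k_tilde=4):
    """For each top-k hypothesis, pool the last conv feature map with its
    GradCAM weights, yielding one rationale vector a_ik per hypothesis."""
    feat = model.features(x.unsqueeze(0))      # assumed: (1, d', H, W)
    feat.retain_grad()
    logits = model.classify(feat)              # assumed: (1, num_classes)
    hypotheses = logits.topk(k_tilde, dim=-1).indices[0]
    rationales = []
    for cls in hypotheses:
        model.zero_grad()
        feat.grad = None
        logits[0, cls].backward(retain_graph=True)
        grad = feat.grad[0]                                     # (d', H, W)
        # GradCAM value per grid cell: [grad^T feature]_+
        cam = torch.relu((grad * feat[0].detach()).sum(dim=0))  # (H, W)
        # weighted average pooling over the feature map
        a = (cam.unsqueeze(0) * feat[0].detach()).mean(dim=(1, 2))
        rationales.append(a)
    return hypotheses, torch.stack(rationales)  # shapes (k,), (k, d')
```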
Formally, we calculate the class-wise rationale centroid as the average rationale representation over all hypotheses with a given hypothetical class, representing the common rationale for that class:

\[
\bar{a}_c = \frac{\sum_{ik} \mathbb{1}(y^h_{ik} = c)\, a_{ik}}{\sum_{ik} \mathbb{1}(y^h_{ik} = c)},
\]

where \( c \) denotes a class and \( \mathbb{1}(y^h_{ik} = c) \) equals 1 if the hypothetical label \( y^h_{ik} \) is \( c \) and 0 otherwise. The idea of using multiple hypotheses with the rationale representation is illustrated in Figure 2. Next, we generate a ranking index \( r_{ik} \) for each prediction hypothesis \((x_i, a_{ik}, y^h_{ik})\) by ranking the Euclidean distance between \( a_{ik} \) and its corresponding rationale centroid \( \bar{a}_{y^h_{ik}} \), i.e., the centroid of class \( y^h_{ik} \), in ascending order. For each instance \( x_i \), we obtain \( \tilde{k} \) ranking indices \( r_{ik} \), one for each hypothesis. Then, a hypothesis \((x_i, y^h_{ik'})\) is considered reliable if it satisfies the following two conditions: (1) \( r_{ik'} < \tau_1 \), indicating that the rationale of \((x_i, y^h_{ik'})\) is typical, as its rationale representation is close to the rationale centroid; (2) \( r_{ij} > \tau_2 \; \forall j \neq k' \), where \( \tau_2 > \tau_1 \).

Figure 3: These examples demonstrate the generation of reliable hypotheses. In Case 1, the rank ID of the second hypothesis derived from the image is lower than $\tau_1$, while all other hypotheses from the same image have ranks larger than $\tau_2$. Consequently, the second hypothesis of $I_1$ is selected as a reliable hypothesis. In Case 2, no hypothesis is selected because there are two hypotheses with rank IDs less than $\tau_2$, indicating a conflict between those hypotheses. Similarly, Case 3 is not selected because none of its hypotheses has a rank ID lower than $\tau_1$.

Here, $\tau_1$ and $\tau_2$ are two predefined ranking thresholds. The second condition ensures that there are no conflicting hypotheses, i.e., that no other hypothesis is likely to be true for the same instance, as their rationales appear to be unusual. With these criteria, we collect a set of reliable hypotheses $P$ as samples with their corresponding hypothetical labels. Representative examples of this procedure are depicted in Figure 3. It is important to note that in the second step, we aim to select the most reliable hypotheses rather than to correct hypotheses. This is because we believe that the task of correcting predictions or hypotheses can be better accomplished through semi-supervised learning, which allows for the gradual propagation of pseudo-labels. By focusing on identifying the most reliable hypotheses based on the proximity of the rationale representation to the rationale centroid and the absence of conflicting rationales, we can create a high-quality set of pseudo-labeled samples (see Appendix C and D). These pseudo-labels can then be used in a semi-supervised learning framework to refine the model’s predictions and gradually improve its performance.

### 3.3 Semi-Supervised Learning

After completing the second step of hypothesis consolidation, we obtain a reliable pseudo-label set $P$, while the remaining samples are treated as the unlabeled set $U$. At this stage, we are ready to apply a semi-supervised algorithm to perform the final step of adaptation. For this purpose, we utilize one of the state-of-the-art semi-supervised methods, FixMatch [Sohn et al., 2020], which combines consistency regularization and pseudo-labeling to address this task.
Specifically, we start by sampling a labeled mini-batch $B_l$ from the reliable pseudo-label set $P$ and an unlabeled batch $B_u$ from the unlabeled set $U$. We then optimize the following objective function using these batches:

$$L_{FM} = \sum_{x_b \in B_l} \mathrm{CE}\big(\hat{y}_b, p(A_w(x_b))\big) + \sum_{x_u \in B_u} \mathbb{1}\big(\max(p(A_w(x_u))) \geq \tau\big)\, \mathrm{CE}\big(\hat{y}_u, p(A_s(x_u))\big),$$

where $\hat{y}_b$ is the pseudo-label of $x_b$ from $P$ and $\hat{y}_u = \arg\max_c p(y = c|A_w(x_u))$. $A_w(\cdot)$ and $A_s(\cdot)$ are the weakly-augmented and strongly-augmented operations, respectively, $\tau$ is the confidence threshold defined in FixMatch to identify reliable pseudo-labels (we use the same value as FixMatch, 0.95), and $\mathrm{CE}$ is the cross-entropy between two probability distributions.
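A minimal sketch of this objective is shown below, assuming `weak_aug` and `strong_aug` are the weak/strong augmentation callables; it follows the equation above rather than any particular FixMatch codebase.

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_l, y_l, x_u, weak_aug, strong_aug, tau=0.95):
    """L_FM: supervised CE on the reliable pseudo-labeled batch plus
    confidence-masked consistency on the unlabeled batch."""
    # supervised term on the weakly augmented labeled batch B_l
    sup = F.cross_entropy(model(weak_aug(x_l)), y_l)
    # pseudo-label the unlabeled batch B_u from its weak view
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_u)), dim=-1)
        conf, y_hat = probs.max(dim=-1)
        mask = (conf >= tau).float()           # indicator 1(max p >= tau)
    # the strong view must match the confident pseudo-label
    ce = F.cross_entropy(model(strong_aug(x_u)), y_hat, reduction="none")
    return sup + (mask * ce).mean()
```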
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Datasets. Office-Home [Venkateswara et al., 2017] consists of 15,500 images categorized into 65 classes. It includes four distinct domains: Real-world (Rw), Clipart (Cl), Art (Ar), and Product (Pr). To evaluate the proposed method, we perform 12 transfer tasks on this dataset, adapting models across the four domains. The evaluation reports the Top-1 accuracy for each domain shift as well as the average Top-1 accuracy. The original DomainNet dataset [Peng et al., 2019] consists of over 500,000 images across six domains and 345 classes. For our evaluation, we follow the approach described in [Saito et al., 2019] and focus on four domains: Real World (Rw), Sketch (Sk), Clipart (Cl), and Painting (Pt), resulting in DomainNet-126. We assess our proposed method on seven domain shifts within these four domains. VisDA-C [Peng et al., 2017] contains 152,000 synthetic images in the source domain and 55,000 real object images in the target domain. It consists of 12 object classes, and there is a significant synthetic-to-real domain gap between the two domains. Our evaluation reports per-class Top-1 accuracies, as well as the average Top-1 accuracy on this dataset. The implementation details of our method can be found in Appendix A.

4.2 COMPARISON WITH STATE-OF-THE-ARTS

Table 1: Accuracy (%) on medium-sized Office-Home dataset (ResNet-50). “SF” denotes source-free.

| Method | SF | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg. |
|-----------------|----|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------|
| ResNet-50 [He et al., 2016] | × | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1 |
| GSDA [Hu et al., 2020] | × | 61.3 | 76.1 | 79.4 | 65.4 | 73.3 | 74.3 | 65.0 | 53.0 | 80.0 | 72.2 | 60.6 | 83.1 | 70.3 |
| RSDA [Gu et al., 2020] | × | 53.3 | 77.7 | 81.3 | 66.4 | 74.0 | 76.5 | 67.9 | 53.0 | 82.0 | 75.8 | 57.8 | 85.4 | 70.9 |
| SFD [Zhang et al., 2022] | × | 59.1 | 76.4 | 81.0 | 65.5 | 76.2 | 78.0 | 68.0 | 57.5 | 81.4 | 76.4 | 57.3 | 85.1 | 71.3 |
| FixBi [Na et al., 2021] | × | 58.1 | 77.3 | 80.4 | 67.7 | 79.5 | 78.1 | 65.8 | 57.9 | 81.7 | 76.4 | 62.9 | 86.7 | 72.7 |
| G-SFDA [Yang et al., 2021b] | ✓ | 57.9 | 78.6 | 81.0 | 66.7 | 77.2 | 77.2 | 65.6 | 56.0 | 82.2 | 72.0 | 57.8 | 83.4 | 71.3 |
| SHOT [Liang et al., 2020a] | ✓ | 56.9 | 78.1 | 81.0 | 67.9 | 78.4 | 78.1 | 67.0 | 54.6 | 81.8 | 73.4 | 58.1 | 84.5 | 71.6 |
| SHOT++ [Liang et al., 2021] | ✓ | 57.9 | 79.7 | 82.5 | 68.5 | 79.9 | 79.3 | 68.5 | 57.0 | 83.0 | 73.7 | 60.3 | 84.9 | 73.0 |
| NRC [Yang et al., 2021a] | ✓ | 59.3 | 80.3 | 82.0 | 69.1 | 80.0 | 80.0 | 69.1 | 56.0 | 83.0 | 71.0 | 58.6 | 85.2 | 72.2 |
| CoWA [Lee et al., 2022] | ✓ | 56.9 | 78.4 | 81.0 | 69.1 | 80.0 | 79.9 | 67.7 | 57.2 | 82.4 | 72.8 | 60.5 | 84.5 | 72.5 |
| HCL [Huang et al., 2021] | ✓ | 64.0 | 78.6 | 82.4 | 64.5 | 73.1 | 80.1 | 64.8 | 59.8 | 75.5 | 78.1 | 69.3 | 81.5 | 72.6 |
| MMD [Zhang et al., 2022] | ✓ | 59.7 | 79.7 | 81.4 | 69.1 | 79.9 | 79.3 | 69.1 | 56.0 | 82.4 | 74.6 | 61.6 | 84.5 | 72.8 |
| YMF [Cui et al., 2022] | ✓ | 57.9 | 77.6 | 82.5 | 68.6 | 79.4 | 80.6 | 68.4 | 55.6 | 83.1 | 75.2 | 59.6 | 84.7 | 72.8 |
| SFD [Zhang et al., 2022] | ✓ | 59.7 | 79.5 | 82.4 | 69.7 | 78.6 | 79.2 | 66.1 | 57.2 | 82.6 | 73.9 | 60.8 | 85.2 | 72.9 |
| C-SFDA [Karim et al., 2023] | ✓ | 60.3 | 80.2 | 82.9 | 69.3 | 80.1 | 78.8 | 67.3 | 58.1 | 83.4 | 73.6 | 61.3 | 86.3 | 73.5 |
| Ours | ✓ | 59.9 | 79.6 | 82.7 | 70.3 | 81.8 | 80.4 | 68.5 | 57.8 | 83.5 | 72.5 | 59.8 | 86.0 | 73.6 |

Table 2: Accuracy (%) on large-scale DomainNet-126 dataset (ResNet-50). “SF” denotes source-free.

| Method | SF | Rw→Cl | Rw→Pt | Pt→Cl | Cl→Sk | Sk→Pt | Rw→Sk | Pt→Rw | Avg. |
|-----------------|----|-------|-------|-------|-------|-------|-------|-------|------|
| ResNet-50 [He et al., 2016] | × | 58.8 | 62.2 | 57.7 | 50.3 | 52.6 | 47.3 | 73.2 | 57.4 |
| MCC [Jin et al., 2020] | × | 44.8 | 65.7 | 41.9 | 34.9 | 47.3 | 35.3 | 72.4 | 48.9 |
| CDAN [Long et al., 2018] | × | 65.0 | 64.9 | 63.7 | 53.1 | 63.4 | 54.5 | 73.2 | 62.5 |
| GVB [Cui et al., 2020] | × | 68.2 | 69.0 | 63.2 | 56.6 | 63.1 | 62.2 | 78.3 | 65.2 |
| MME [Saito et al., 2019] | × | 70.0 | 67.7 | 69.0 | 56.3 | 64.8 | 61.0 | 76.0 | 66.4 |
| TENT [Wang et al., 2020] | ✓ | 58.5 | 65.7 | 57.9 | 48.5 | 52.4 | 54.0 | 67.0 | 57.7 |
| G-SFDA [Yang et al., 2021b] | ✓ | 63.4 | 67.5 | 62.5 | 55.3 | 60.8 | 58.3 | 75.2 | 63.3 |
| NRC [Yang et al., 2021a] | ✓ | 67.5 | 68.0 | 67.8 | 57.6 | 59.3 | 58.7 | 74.3 | 64.7 |
| SHOT [Liang et al., 2020a] | ✓ | 67.7 | 68.4 | 66.9 | 60.1 | 66.1 | 59.9 | 80.8 | 67.1 |
| AdaContrast [Chen et al., 2022] | ✓ | 70.2 | 69.8 | 68.6 | 58.0 | 65.9 | 61.5 | 80.5 | 67.8 |
| DaC [Zhang et al., 2022] | ✓ | 70.0 | 68.8 | 70.9 | 62.4 | 66.8 | 60.3 | 78.6 | 68.3 |
| C-SFDA [Karim et al., 2023] | ✓ | 70.8 | 71.1 | 68.5 | 62.1 | 67.4 | 62.7 | 80.4 | 69.0 |
| GPL [Litrico et al., 2023] | ✓ | 74.2 | 70.4 | 68.8 | 64.0 | 67.5 | 65.7 | 76.5 | 69.6 |
| Ours | ✓ | 76.9 | 71.8 | 75.4 | 65.5 | 69.9 | 64.6 | 83.2 | 72.5 |

* This work uses ResNet-34 as backbone.

We compare our proposed method against popular source-present and source-free methods on three benchmark datasets: Office-Home, DomainNet-126, and VisDA-C. We report the Top-1 accuracy, and the results are presented in Table 1 to Table 3. On the Office-Home dataset, as shown in Table 1, our proposed method achieves the best Top-1 average accuracy, comparable to the most recent source-free method C-SFDA. Additionally, our method achieves the highest accuracy in 3 sub-transfer tasks (bold in Table 1), versus only one sub-transfer task for C-SFDA. For the DomainNet-126 dataset, as demonstrated in Table 2, our proposed method exhibits

Table 3: Accuracy (%) on large-scale VisDA-C dataset (ResNet-101). “SF” denotes source-free.

| Method | SF | plane | bcycle | bus | car | horse | knife | mcyle | person | plant | sktbrd | train | truck | Avg. |
|-----------------|----|-------|--------|-----|-----|-------|-------|-------|--------|-------|--------|-------|-------|------|
| ResNet-101 [He et al., 2016] | × | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4 |
| MCC [Jin et al., 2020] | × | 88.7 | 80.3 | 80.5 | 71.5 | 90.1 | 93.2 | 85.0 | 71.6 | 89.4 | 73.8 | 85.0 | 36.9 | 78.8 |
| STAR [Zhu et al., 2020] | × | 95.0 | 84.0 | 84.6 | 73.0 | 91.6 | 91.8 | 85.9 | 78.4 | 94.4 | 84.7 | 87.0 | 42.2 | 82.7 |
| RWO [Xu et al., 2020] | × | 95.1 | 87.4 | 85.2 | 58.6 | 96.2 | 95.7 | 90.6 | 80.0 | 94.8 | 90.8 | 88.4 | 47.9 | 84.3 |
| CAN [Kang et al., 2019] | × | 97.0 | 87.2 | 82.5 | 74.3 | 97.8 | 96.2 | 90.8 | 80.7 | 96.6 | 96.3 | 87.5 | 59.9 | 87.2 |
| SHOT [Liang et al., 2020a] | ✓ | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 90.5 | 89.1 | 86.3 | 58.2 | 82.9 |
| DIPE [Wang et al., 2022] | ✓ | 95.2 | 87.6 | 78.8 | 55.9 | 93.9 | 95.0 | 84.1 | 81.7 | 92.1 | 88.9 | 85.4 | 58.9 | 83.1 |
| HCL [Huang et al., 2021] | ✓ | 93.3 | 85.4 | 80.7 | 68.5 | 91.0 | 88.1 | 86.0 | 78.6 | 86.6 | 88.8 | 80.0 | 74.7 | 83.5 |
| A-FNet [Zhang et al., 2021] | ✓ | 94.0 | 87.8 | 82.4 | 66.8 | 93.1 | 92.5 | 85.8 | 81.2 | 91.6 | 88.2 | 86.0 | 56.0 | 84.3 |
| G-SFDA [Yang et al., 2021b] | ✓ | 96.1 | 88.3 | 85.5 | 72.4 | 97.1 | 95.4 | 89.5 | 79.9 | 95.2 | 92.9 | 90.1 | 42.6 | 85.4 |
| NRC [Yang et al., 2021a] | ✓ | 96.8 | 91.3 | 82.4 | 62.4 | 96.2 | 95.9 | 86.1 | 80.6 | 94.8 | 94.1 | 90.4 | 59.7 | 85.9 |
| SFDA-DE [Ding et al., 2022] | ✓ | 95.3 | 91.2 | 77.5 | 72.1 | 95.7 | 97.8 | 85.5 | 86.1 | 95.5 | 93.0 | 86.3 | 61.6 | 86.5 |
| AdaContrast [Chen et al., 2022] | ✓ | 97.0 | 84.7 | 84.0 | 77.3 | 96.7 | 93.8 | 91.9 | 84.8 | 94.3 | 93.1 | 94.1 | 49.7 | 86.8 |
| CoWA [Lee et al., 2022] | ✓ | 96.2 | 89.7 | 83.9 | 73.8 | 96.4 | 97.4 | 89.3 | 86.8 | 94.6 | 92.1 | 88.7 | 53.8 | 86.9 |
| DaC [Zhang et al., 2022] | ✓ | 96.6 | 86.8 | 86.4 | 78.4 | 96.4 | 96.2 | 93.6 | 83.8 | 86.8 | 95.1 | 89.6 | 50.0 | 87.3 |
| BD1 [Karim et al., 2023] | ✓ | - | - | - | - | - | - | - | - | - | - | - | - | - |
| C-SFDA [Karim et al., 2023] | ✓ | 97.6 | 88.8 | 86.1 | 72.2 | 97.2 | 94.4 | 92.1 | 84.7 | 93.0 | 90.7 | 93.1 | 63.5 | 87.8 |
| Ours | ✓ | 98.0 | 88.0 | 86.4 | 82.3 | 97.8 | 96.2 | 92.1 | 85.0 | 95.5 | 91.7 | 93.8 | 56.2 | 88.6 |

Table 4: Ablation study of the proposed components, measured by average accuracy (%) on the Office-Home (O-H), DomainNet-126 (DN-126) and VisDA-C datasets. PA stands for model pre-adaptation (Sec. 3.1), HCPR stands for hypothesis consolidation from prediction rationale (Sec. 3.2), and FM stands for the FixMatch technique (Sec. 3.3).

| # | PA | HCPR | FM | O-H | DN-126 | VisDA-C |
|---|----|------|----|-----|--------|---------|
| 0 | × | × | × | 60.2 | 55.6 | 46.6 |
| 1 | × | × | ✓ | 64.2 | 60.6 | 62.3 |
| 2 | × | ✓ | ✓ | 68.6 | 70.6 | 85.2 |
| 3 | ✓ | × | × | 72.1 | 67.4 | 86.2 |
| 4 | ✓ | ✓ | × | 72.7 | 69.6 | 87.5 |
| 5 | ✓ | × | ✓ | 72.2 | 67.5 | 86.2 |
| 6 | ✓ | ✓ | ✓ | 73.6 | 72.5 | 88.6 |

Table 5: DomainNet-126 (Pt→Cl) Top-1 accuracy (%) of the proposed method with different numbers of prediction hypotheses \( \tilde{k} \). We find \( \tilde{k} = 4 \) yields the optimal results.

| \( \tilde{k} \) | Accuracy |
|----------------|----------|
| 2 | 73.7 |
| 3 | 74.2 |
| 4 | 75.4 |
| 5 | 75.3 |
| 6 | 74.8 |
| 10 | 71.8 |
| 20 | 66.9 |

significant improvements over all baselines. With an average Top-1 accuracy of 72.5%, our method outperforms the best source-free baseline by nearly 3% and surpasses the best source-present baseline by 6.1%.
Moreover, our method achieves the best performance in almost all domain shifts. On the VisDA-C dataset, presented in Table 3, our proposed method outperforms the state-of-the-art method C-SFDA [Karim et al., 2023] by 0.8%. Furthermore, our method achieves the best performance in specific classes such as “plane”, “bus”, “car”, and “horse”. These results clearly demonstrate the superiority of our proposed method across the evaluated datasets, showcasing its effectiveness in source-free domain adaptation scenarios.

4.3 Ablation Studies

Component-wise analysis. In this section, we conduct ablation studies to analyze the contribution of each component of our method on three benchmark datasets: Office-Home, DomainNet-126, and VisDA-C. The results are summarized in Table 4. Each component helps enhance performance, with the HCPR (Hypothesis Consolidation from Prediction Rationale) component contributing the most to accuracy. Specifically, compared to using FixMatch alone, combining FixMatch and HCPR significantly improves accuracy by 4.4%, 10.0%, and 22.9% on the respective datasets. Additionally, when combining both PA (Pre-Adaptation) and HCPR, we execute PA again following HCPR to integrate the consolidation outcomes from HCPR; this yields substantial accuracy improvements of 0.6%, 2.2%, and 1.3% on the respective datasets compared to employing PA alone. Last but not least, removing HCPR from the full method leads to performance drops of 1.4, 5, and 2 points on Office-Home, DomainNet-126, and VisDA-C, respectively.

Impact of $\tilde{k}$, the number of prediction hypotheses per instance. In our method, we choose labels from the top $\tilde{k}$ highest posterior probabilities as prediction hypotheses. In this section, we investigate the impact of the value of $\tilde{k}$. Table 5 shows the accuracy achieved with different $\tilde{k}$. From the results, we can see that using 2 hypotheses already leads to good performance, while for very large $\tilde{k}$ the performance drops significantly. As a result, it is recommended to set $\tilde{k}$ in a small range; specifically, choosing 3-6 hypotheses leads to optimal performance.

Impact of the two ranking thresholds $\tau_1$ and $\tau_2$. To assess the influence of the ranking thresholds in our method, we examined the values of $\tau_1$ and $\tau_2$ as percentages of the total number of samples and analyzed their impact on the Top-1 average accuracy on the VisDA-C dataset. As illustrated in Figure 4, the proposed method is robust to the specific values of $\tau_1$ and $\tau_2$.

The benefit of using rationale representations. To further understand the benefit of using the rationale representation from multiple hypotheses, we explore an alternative method that replaces the proposed second step by using feature centroids rather than rationale centroids. Since the feature is invariant to the prediction hypothesis, only the top predicted class is considered. More specifically, we first generate a pseudo-label for each instance and calculate the feature centroids similarly to our approach. Then we rank instances based on the Euclidean distances between their features and the corresponding class centroid. The top $\tau_1$ features closest to the class centroid are assigned reliable pseudo labels, while the remaining samples are left for step 3.
We refer to this method as “near-centroid selection”. Table 6 presents the comparison results on the Office-Home and DomainNet-126 datasets. As seen, while such an approach still leads to improvement over using step 1 and step 3 alone (by cross-referencing Table 4), it is still inferior to the use of HCPR. This clearly demonstrates the benefits of the latter.

Investigation of recursively applying HCPR. One may wonder if recursively applying HCPR will lead to additional improvement. To this end, we create a variant of our method by alternately applying step 2 and step 3, hoping that they may mutually enhance each other. We conducted experiments on the Office-Home (Cl$\rightarrow$Pr) dataset. The results are depicted in Figure 5, where the red curve represents our method using the second step only once, i.e., the hypothesis consolidation occurs between model pre-adaptation (0-9 epochs) and

Table 6: Average accuracy (%) of near-centroid selection versus our method on Office-Home (O-H) and DomainNet-126 (DN-126).

| Method | O-H | DN-126 |
|----------------------|-----|--------|
| near-centroid selection | 72.6 | 69.6 |
| Ours | **73.6** | **72.5** |

Table 7: Accuracy (%) of our method combined with existing SHOT and AaD methods on the Office-Home, VisDA-C and DomainNet-126 datasets.

| Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg. |
|-----------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------|
| SHOT Liang et al. (2020a) | 56.9 | 78.1 | 81.0 | 67.9 | 78.4 | 78.1 | 67.0 | 54.6 | 81.8 | 73.4 | 58.1 | 84.5 | 71.6 |
| SHOT+Ours | 58.7 | 79.5 | 82.1 | 69.6 | 80.7 | 80.0 | 69.1 | 56.9 | 82.3 | 74.5 | 59.2 | 85.3 | 73.2 |
| AaD Yang et al. (2022) | 59.3 | 79.3 | 82.1 | 68.9 | 79.8 | 79.5 | 67.2 | 57.4 | 83.1 | 72.1 | 58.5 | 85.4 | 72.7 |
| AaD+Ours | 59.8 | 79.4 | 82.7 | 70.0 | 81.6 | 80.0 | 68.5 | 57.6 | 83.2 | 72.7 | 59.4 | 86.1 | 73.4 |

| Method | plane | bcycle | bus | car | horse | knife | mcyle | person | plant | sktbrd | train | truck | Avg. |
|-----------------|-------|--------|-----|-----|-------|-------|-------|--------|-------|--------|-------|-------|------|
| SHOT Liang et al. (2020a) | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 90.5 | 89.1 | 86.3 | 58.2 | 82.9 |
| SHOT+Ours | 97.5 | 84.6 | 83.0 | 74.2 | 96.5 | 93.7 | 92.8 | 86.7 | 93.5 | 92.6 | 89.7 | 56.9 | 86.8 |
| AaD Yang et al. (2022) | 97.4 | 90.5 | 80.8 | 76.2 | 97.3 | 96.1 | 89.8 | 82.9 | 95.5 | 93.0 | 92.0 | 64.0 | 88.0 |
| AaD+Ours | 97.8 | 87.6 | 86.7 | 83.4 | 97.7 | 95.4 | 94.2 | 83.8 | 94.6 | 91.2 | 92.8 | 55.6 | 88.4 |

| Method | Rw→Cl | Rw→Pt | Pt→Cl | Cl→Sk | Sk→Pt | Rw→Sk | Pt→Rw | Avg. |
|-----------------|-------|-------|-------|-------|-------|-------|-------|------|
| SHOT Liang et al. (2020a) | 67.7 | 68.4 | 66.9 | 60.1 | 66.1 | 59.9 | 80.8 | 67.1 |
| SHOT+Ours | 70.5 | 70.6 | 72.5 | 63.6 | 68.0 | 61.1 | 82.8 | 69.9 |
| AaD Yang et al. (2022) | 70.6 | 69.8 | 69.3 | 58.5 | 66.2 | 60.2 | 80.2 | 67.8 |
| AaD+Ours | 75.4 | 71.3 | 75.2 | 64.2 | 68.4 | 63.3 | 82.8 | 71.5 |

semi-supervised learning (10-40 epochs). The blue curve represents our method with the second step updated at the 15th, 20th, and 25th epochs. From the results, we observed that recursively applying HCPR does not lead to the improvement one may expect. We also conduct experiments with HCPR applied recursively to only PA or FixMatch, which can be found in Appendix E.
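To make the consolidation rule of Sec. 3.2 concrete (and its difference from near-centroid selection, which ranks plain features for the top prediction only), here is a minimal sketch. The tensor layout, the per-class ranking, and the function name are assumptions for illustration, not the exact implementation.

```python
import torch

def select_reliable_hypotheses(rationales, labels, sample_ids,
                               centroids, tau1, tau2):
    """Keep hypothesis (x_i, y) iff its distance rank to the rationale
    centroid of y is < tau1 and all other hypotheses of x_i rank > tau2."""
    # distance of every hypothesis to its hypothetical class centroid
    dists = (rationales - centroids[labels]).norm(dim=-1)
    ranks = torch.empty_like(dists, dtype=torch.long)
    for c in labels.unique():                  # rank within each class
        idx = (labels == c).nonzero(as_tuple=True)[0]
        order = dists[idx].argsort()           # ascending distance
        ranks[idx[order]] = torch.arange(len(idx))
    reliable = []
    for i in sample_ids.unique():              # check conditions per sample
        idx = (sample_ids == i).nonzero(as_tuple=True)[0]
        good = idx[ranks[idx] < tau1]          # condition (1)
        others = idx[ranks[idx] >= tau1]
        # condition (2): every remaining hypothesis must rank above tau2
        if len(good) == 1 and (ranks[others] > tau2).all():
            reliable.append((int(i), int(labels[good])))
    return reliable
```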
4.4 Incorporating the Proposed Method into Existing Approaches

The proposed method can be seamlessly integrated into existing approaches, such as SHOT (Liang et al., 2020a) and AaD (Yang et al., 2022). Specifically, we replace the pre-adaptation phase in our first step with SHOT or AaD, resulting in the combined approaches referred to as “SHOT+Ours” and “AaD+Ours”. The experimental results, shown in Table 7, demonstrate the benefit of integrating the proposed method with the SHOT and AaD objectives. Across the Office-Home (Avg. ↑ 1.6% and ↑ 0.7%), VisDA-C (Avg. ↑ 3.9% and ↑ 0.4%), and DomainNet-126 (Avg. ↑ 2.8% and ↑ 3.7%) datasets, the integrated approach consistently outperforms the SHOT and AaD baselines. This indicates that our method complements existing SFUDA baselines and consistently improves their performance when incorporated as a replacement for the model pre-adaptation phase.

5 Limitation and Future Work

The current approach relies on having access to the entire target training set to perform crucial steps like pre-adaptation and identifying the reliable pseudo-labeled set. However, in real-world applications, online adaptation is often more desirable, as it does not require holding a large number of target examples. As part of our future work, we aim to extend the key idea of this research to the online streaming setting. By doing so, we can develop a methodology that adapts in real time to incoming data, allowing for more efficient and effective adaptation in dynamic environments. This extension will enhance the applicability and practicality of the proposed approach in various domains.

6 Conclusion

In conclusion, this paper introduces a novel approach for Source-Free Unsupervised Domain Adaptation (SFUDA), where a model needs to adapt to a new domain without access to target domain labels or source domain data. By considering multiple prediction hypotheses and analyzing their rationales, the proposed method identifies the most likely correct hypotheses, which are then used as pseudo-labeled data for a semi-supervised learning procedure. The three-step adaptation process, including model pre-adaptation, hypothesis consolidation, and semi-supervised learning, ensures optimal performance. Experimental results demonstrate that the proposed approach achieves state-of-the-art performance in the SFUDA task and can be seamlessly integrated into existing methods to enhance their performance.

REFERENCES

Dian Chen, Dequan Wang, Trevor Darrell, and Sayna Ebrahimi. Contrastive test-time adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 295–305, 2022.

Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian. Gradually vanishing bridge for adversarial domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12455–12464, 2020.

Shuyang Dai, Yu Cheng, Yizhe Zhang, Zhe Gan, Jingjing Liu, and Lawrence Carin. Contrastively smoothed class alignment for unsupervised domain adaptation. In Proceedings of the Asian Conference on Computer Vision, 2020.

Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, and Dacheng Tao. Source-free domain adaptation via distribution estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7212–7222, 2022.

Hao Feng, Minghao Chen, Jinming Hu, Dong Shen, Haifeng Liu, and Deng Cai. Complementary pseudo labels for unsupervised domain adaptation on person re-identification.
IEEE Transactions on Image Processing, 30:2898–2907, 2021.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pp. 1180–1189. PMLR, 2015.

Xiang Gu, Jian Sun, and Zongben Xu. Spherical space domain adaptation with robust pseudo-label loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9101–9110, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International Conference on Machine Learning, pp. 1989–1998. PMLR, 2018.

Lanqing Hu, Meina Kan, Shiguang Shan, and Xilin Chen. Unsupervised domain adaptation with hierarchical gradient synchronization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4043–4052, 2020.

Jiaxing Huang, Dayan Guan, Aoran Xiao, and Shijian Lu. Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data. Advances in Neural Information Processing Systems, 34:3635–3649, 2021.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR, 2015.

Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16, pp. 464–480. Springer, 2020.

Mengmeng Jing, Xiantong Zhen, Jingjing Li, and Cees Snoek. Variational model perturbation for source-free domain adaptation. Advances in Neural Information Processing Systems, 35: 17173–17187, 2022.

Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4893–4902, 2019.

Nazmul Karim, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, Han-pang Chiu, Supun Samarasekera, and Nazanin Rahnavard. C-sfda: A curriculum learning aided self-training framework for efficient source free domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24120–24131, 2023.
QHzzAU7Qf9
The presented method is a weighted average of parameters. How does such a method regularise against particularly known failure modes of MoEs which often require explicit regularisation such as through load, importance, entropy, or Mutual Information?
SOFT MERGING OF EXPERTS WITH ADAPTIVE ROUTING

Anonymous authors
Paper under double-blind review

ABSTRACT

Neural networks that learn to route their inputs through different “expert” subnetworks provide a form of modularity that standard dense models lack. Despite their possible benefits, modular models with learned routing often underperform their parameter-matched dense counterparts as well as models that use non-learned heuristic routing strategies. In this paper, we hypothesize that these shortcomings stem from the gradient estimation techniques used to train modular models that use non-differentiable discrete routing decisions. To address this issue, we introduce Soft Merging of Experts with Adaptive Routing (SMEAR), which avoids discrete routing by using a single “merged” expert constructed via a weighted average of all of the experts’ parameters. By routing activations through a single merged expert, SMEAR does not incur a significant increase in computational costs and enables standard gradient-based training. We empirically validate that models using SMEAR outperform models that route based on metadata or learn routing through gradient estimation. Furthermore, we provide qualitative analysis demonstrating that the experts learned via SMEAR exhibit a significant amount of specialization.

1 INTRODUCTION

Neural networks typically use all of their parameters to process a given input. As such, the capabilities of a model are distributed across its parameters in a self-organizing way (Zeiler & Fergus, 2014; De Cao et al., 2021; Csordás et al., 2021; Bau et al., 2020; Wang et al., 2022a). Explicitly specializing different parts of a model to different capabilities can provide various benefits, including reduced interference across downstream tasks (Sanh et al., 2021; Wei et al., 2021; Zamir et al., 2018; Bao et al., 2021) or languages (Pires et al., 2019; Liu et al., 2020; Xue et al., 2020). Furthermore, dedicating specific parameters to specific capabilities enables a form of modularity where a capability can be added, removed, or modified by adding, removing, or modifying the corresponding parameters (Pfeiffer et al., 2023). Activating only a subset of the model’s parameters for a given input also decouples the computational cost of a model from the number of parameters it has (Shazeer et al., 2017; Fedus et al., 2021), though we do not focus on this benefit in this paper.

Conditional computation techniques provide a way to build models that adaptively choose a subset of their parameters to apply to a given input. A common way to use conditional computation in this setting is to introduce specialized subnetworks called experts that are controlled by routers that decide which experts should be active. When the model is trained on diverse data, this form of conditional computation can enable modular learning by allowing experts to specialize to different types of inputs and flexibly share knowledge (Ma et al., 2019). However, because routing involves making a discrete decision as to which expert to use, the loss on the model’s prediction cannot back-propagate through the routing decision to update the router. Consequently, models with conditional computation often require gradient estimation techniques for training (Clark et al., 2022; Fedus et al., 2021; Bengio et al., 2013).

In practice, past work has shown that models with conditional computation do not always learn effective routing strategies. For example, Mittal et al.
(2022) investigate models with a continuous router in a controlled setting and find that the models do not route examples from the same group to the same experts and perform poorly compared to models with oracle routing. However, models with task- or domain-specific subnetworks (Gururangan et al., 2021; Kudugunta et al., 2021) provide evidence that it is possible to train performant models with specialized experts. As an extreme example, Roller et al. (2021) achieves results comparable to learned routing with a fixed random routing. Relatedly, Fedus et al. (2021) find that the gain from scaling up parameters by $30\times$ with a sparsely activated model is smaller than the gain from scaling up both parameters and FLOPs by $3\times$ in a dense model. As a possible explanation, Clark et al. (2022) study how models with conditional computation improve with scale and find a detrimental term that scales with the product of the log number of experts and active parameters. In this work, we hypothesize that issues with conditional computation stem from issues with gradient estimation. Specifically, we focus on experimental settings where we can compare learned routing to a performant hand-designed heuristic routing scheme. We find that the gradient estimation techniques we consider often produce models that underperform heuristic routing, despite the fact that they could in principle learn a better routing strategy. To address this shortcoming, we introduce **Soft Merging of Experts with Adaptive Routing** (SMEAR), a method for training modular models with specialized experts and learned routing. SMEAR works by using the router’s distribution over experts to compute a weighted average of the parameters of the individual experts. Activations are then sent through the merged expert, which results in a similar computational cost to discrete routing with a single expert. However, the fact that all components of SMEAR are fully differentiable enables standard gradient-based training. Empirically, we show that SMEAR attains a significantly more favorable performance/cost tradeoff than 1) discrete routing solutions found via gradient estimation, 2) heuristic routing schemes, and 3) state-of-the-art baselines for learning modular models. We also qualitatively validate that the experts learned by SMEAR specialize to different types of inputs and share parameters across related tasks. Put together, our results show that SMEAR provides an effective alternative for modular models that use adaptive routing among expert subnetworks. ### 2 BACKGROUND To provide the necessary background for our work, we first explain how sparsely activated neural networks use conditional computation, then discuss gradient estimators that enable learning discrete routing strategies. In addition, we discuss different ways to hand-design “heuristic” routing strategies as well as preexisting techniques for learning modular models that we use as baselines. In models that use discrete routing among experts (i.e. subnetworks), experts are organized into blocks that are incorporated as an intermediate layer in a neural network. An expert routing block $B$ comprises a set of $N$ experts $\{f_1, f_2, \ldots, f_N\}$ and a router $R$. Experts in the same block accept inputs and produce outputs of the same dimensionality. Given a hidden-state representation $u$, the output of the $i$-th expert with parameters $\theta_i$ is $f_i(u, \theta_i)$.
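To make this structure concrete, the following is a minimal sketch of an expert routing block in PyTorch. The two-layer MLP expert architecture, the module names, and the plain linear routing network are illustrative assumptions rather than details specified by the paper:

```python
import torch
import torch.nn as nn

class Expert(nn.Module):
    """One expert f_i: a two-layer MLP mapping d -> m -> d (an assumed architecture)."""
    def __init__(self, d: int, m: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, m), nn.ReLU(), nn.Linear(m, d))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return self.net(u)

class ExpertRoutingBlock(nn.Module):
    """A block B with N experts {f_1, ..., f_N} and a router R.

    The router maps a hidden state v to a distribution R(v) over experts;
    subclasses decide how that distribution is turned into an output.
    """
    def __init__(self, d: int, m: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList([Expert(d, m) for _ in range(num_experts)])
        self.router = nn.Linear(d, num_experts)  # lightweight routing network

    def routing_probs(self, v: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.router(v), dim=-1)  # R(v), shape (..., N)
```

The sketches later in this section reuse this block and only change how the routing distribution is consumed.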
In sparsely activated models that involve discrete adaptive routing, it is not possible to train the router’s parameters with standard gradient-based learning. Fortunately, gradient estimators can provide approximate gradients to the router parameters. There are a few common designs shared by models that use gradient estimators to train routers. Their router $R$ often applies a lightweight network to some intermediate hidden states $v$ in the model. The output of the lightweight routing network $R(v)$ parameterizes a discrete probability distribution over the $N$ experts. Different gradient estimators vary in how they make the routing decision from $R(v)$ and how they construct the output from the chosen expert. **REINFORCE** Gradients can be estimated through discrete operations using reinforcement learning techniques (Schulman et al., 2015; Bengio et al., 2013). In reinforcement learning, a policy loss is used to train an agent to learn optimal actions in an environment. In this paper, we experiment with the REINFORCE algorithm, which computes the policy loss as $\log(\pi)r$ where $r$ denotes the received reward for taking an action whose assigned probability is $\pi$. When applied to models that use discrete routing among experts, the goal is to train the model to choose the optimal expert to process a given input. Here, the router $R$ acts as an agent that samples an expert to use according to the routing probabilities. In order to train such a router, the router’s assigned probability to the sampled expert is used as $\pi$ and the negative of the model’s loss is used as the reward $r$. The router is therefore trained to pick experts that maximize the reward which, in turn, minimizes the loss. The REINFORCE estimator often suffers from high variance because of the sampling operation. This motivates the use of baselines, which reduce variance without changing the optimal solution. In our work, we follow Clark et al. (2022) and use a baseline \( b \) that is generated by a small neural network with a single hidden layer that takes as input \( v \) and is trained with the Huber loss. The overall loss function is then \[ L = -\mathbb{E}_{i \sim R(v)} \left[ \alpha \log R(v)_i (r - b) \right] - \beta \sum_i R(v)_i \log R(v)_i + \gamma L_{\text{Huber}}(r, b), \] where \( \alpha, \beta, \) and \( \gamma \) are hyperparameters that correspond to the policy gradient weight, policy entropy weight, and value loss weight. In practice, we approximate the expectation with a single sample. During inference, the output of the block \( B \) is just \( f_i(u, \theta_i) \) where \( i = \arg \max R(v) \). **Straight Through Gumbel-Softmax (ST-Gumbel)** The Gumbel-Softmax trick (Jang et al., 2016; Maddison et al., 2016) provides a continuous differentiable approximation to sampling from a categorical distribution like the one parameterized by a router. Specifically, Gumbel noise is added to the logits of the distribution and a temperature scale is applied in the softmax operation, yielding relaxed routing probabilities \( \hat{R}(v) \). The expert \( f_i \) with the highest assigned probability is chosen by applying an arg max operation. In order to approximate gradients through the arg max operation, we use the Straight-Through estimator, which replaces \( f_i(u, \theta_i) \) with \[ \left(1 - \text{sg}[\hat{R}(v)_i] + \hat{R}(v)_i\right) f_i(u, \theta_i), \] where sg stands for the stop-gradient operator. During the forward pass, the multiplier for \( f_i(u, \theta_i) \) evaluates to 1, while in the backward pass gradients reach the router through the term \( \hat{R}(v)_i \).
In practice, the temperature \( \tau \) is gradually annealed from a high to low value so that the approximated samples are more and more similar to discrete samples. During inference, we choose an expert according to \( \arg \max R(v) \). **Top-\( k \)** Shazeer et al. (2017) propose a gradient estimation scheme where the router sends the input through the \( k \) experts that are assigned the highest probability. Fedus et al. (2021) later found that this router could be used effectively when \( k = 1 \). Specifically, the estimator selects the subnetwork with the highest probability and scales its output using its corresponding routing probability. The output of the block is therefore \( R(v)_i f_i(u, \theta_i) \), where \( i = \arg \max R(v) \). **DSelect-\( k \)** Hazimeh et al. (2021) proposed a differentiable approximation for the discrete Top-\( k \) operation. They parameterize each selection among \( N \) experts using \( m \) binary variables \( z_1, z_2, \ldots, z_m \), produced using a learnable weight \( W \) as \( z = W(v) \), where \( m = \log_2(N) \). The selector function \( r \) takes these variables and computes \[ r(z)_i = \prod_{j \in B(i-1)} z_j \prod_{j \in \{1, \ldots, m\} \setminus B(i-1)} (1 - z_j), \] where \( i \in \{1, \ldots, N\} \) and \( B(l) \) returns the non-zero indices in the binary representation of the integer \( l \). A differentiable step function \( S \) based on a cubic polynomial is applied to the \( z \) variables. Finally, the selector operation is repeated \( k \) times to obtain a Top-\( k \) selection, and an additional learnable parameter \( G \) provides a probability distribution over these \( k \) selections using the softmax function. In addition, entropy regularization is applied to the output of the selector function \( r \), ensuring that it results in a one-hot selection during inference. In our work, we consider \( k = 1 \) to maintain a computational cost similar to other baselines. As a point of comparison for techniques that learn adaptive routing, we experiment with three baseline routing strategies that do not require a trained router. **Tag Routing** If we have prior knowledge about the data that a model will be applied to, we can hand-design a heuristic routing strategy for choosing which expert to use for a given example based on data properties. Tag routing takes advantage of “tags” associated with a given example (such as its domain or task) and associates each expert in an expert routing block with a particular tag. In this work, we assume each example has a single tag and route each example to its tag’s expert. **Hash Routing** Roller et al. (2021) propose hash routing, which uses a fixed hashing function to determine which expert to use for a given example. Specifically, each example is assigned a random expert choice in each expert routing block which is used consistently over the course of training and inference. This approach disregards any shared characteristics across examples. **Single-Expert** As an additional baseline, we consider models where all inputs are routed to a single expert in each routing block. To provide a fair comparison to models with \( N \) experts per block on the basis of both computational cost and parameter count, we consider models with a single expert that has either the same number of parameters as one expert (compute-matched, referred to as “\( 1 \times \) compute”) or \( N \times \) as many (parameter-matched, referred to as “\( 1 \times \) parameters”).
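As an illustration of how one of these estimators is wired up, here is a minimal sketch of the ST-Gumbel routing described above, building on the `ExpertRoutingBlock` sketch from earlier; routing a single example at a time and the default temperature are simplifying assumptions:

```python
import torch
import torch.nn.functional as F

class STGumbelRouting(ExpertRoutingBlock):
    """ST-Gumbel routing for a single example (a sketch building on the block above)."""
    def forward(self, u: torch.Tensor, v: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.router(v)  # routing logits, shape (N,)
        if self.training:
            r_hat = F.gumbel_softmax(logits, tau=tau)  # relaxed sample R_hat(v), shape (N,)
            i = int(r_hat.argmax())
            # Straight-through multiplier: equals 1 in the forward pass, while the
            # r_hat[i] term lets gradients reach the router in the backward pass.
            scale = (1.0 - r_hat[i].detach()) + r_hat[i]
            return scale * self.experts[i](u)
        i = int(logits.argmax())  # arg max R(v) at inference
        return self.experts[i](u)
```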
Beyond the simple baselines discussed above, we consider three recently proposed methods that aim to learn modular models. **Adamix** Adamix (Wang et al., 2022b) uses random routing for each example during training and adds a consistency loss to encourage experts to share information and discourage divergence. During inference, the parameters of all experts are averaged together to form a single expert and no adaptive routing is used. **Latent Skills** Latent Skills (Ponti et al., 2022) assumes that the task for each example is known and trains a task-skill matrix that specifies which experts are active for a given task. The binary task-skill matrix is learned via the Gumbel-Sigmoid trick (Maddison et al., 2016) and kept fixed thereafter. During inference, a merged expert is formed for each task by averaging the parameters of the skill experts weighted according to the task-skill matrix. **Soft MoE** Puigcerver et al. (2023) recently proposed Soft MoE, which assigns “slots” to each expert and passes a weighted average of input tokens into each slot. All operations in the Soft MoE method are differentiable, avoiding the need for gradient estimation. We consider Soft MoE with a single slot per expert to ensure a fair comparison by having a computational cost equivalent to the other discrete routing baselines. ### 3 SOFT MERGING OF EXPERTS WITH ADAPTIVE ROUTING As we will later show in section 4, the gradient estimation techniques used to train models with discrete routing often fail to produce performant routing strategies. Our goal in this work is therefore to explore whether it is possible to train models with adaptive routing among experts without resorting to gradient estimation. Specifically, we aim to achieve better performance by designing an expert and router architecture that facilitates standard end-to-end gradient-based training but does not increase computational costs. **Ensemble Routing** One simple idea would be to pass the input of a given expert routing block through every expert, and then compute an average of the experts’ outputs weighted according to the router’s distribution, i.e. exactly computing $\mathbb{E}_{i \sim R(v)} f_i(u, \theta_i)$. We refer to this approach as an ensemble routing strategy since it corresponds to using the ensemble prediction of the experts. Since the operations involved in computing the average are all differentiable, using an ensemble routing strategy would allow for exact computation of gradients and end-to-end learning. Unfortunately, such an approach would incur a significant increase in computational costs because it requires computing the output of every expert rather than a single expert. **Merging Experts** To explore an alternative fully-differentiable expert routing block, we take inspiration from recent work on merging models (Matena & Raffel, 2021; Wortsman et al., 2022b,c; Choshen et al., 2022b; Don-Yehiya et al., 2022; McMahan et al., 2017). These works have shown that averaging the parameters of models that share a common architecture can often produce an aggregate model that shares the capabilities of the individual models. Notably, Wortsman et al. (2022b); Matena & Raffel (2021) found that averaging the weights of multiple fine-tuned models produced a single model that performs comparably to an ensemble of the models. In addition, both Adamix (Wang et al., 2022b) and Latent Skills (Ponti et al., 2022) include steps that involve averaging expert parameters, though neither of these methods learns an adaptive per-example routing strategy.
Motivated by these findings, we propose **Soft Merging of Experts with Adaptive Routing (SMEAR)**, which constructs a single merged expert whose parameters are computed as the weighted average of the experts within a routing block. Each expert’s weight is set according to the corresponding routing probability generated by the router. In SMEAR, the input to the routing block is fed into the merged expert and the merged expert’s output is used as the output of the block. By averaging parameters, SMEAR implicitly assumes that all experts in the routing block share an identical architecture (thereby inducing a natural one-to-one mapping between parameters in each expert). To the best of our knowledge, all past works focused on routing among experts use experts with a common architecture, so we do not see this assumption as a major limitation. More explicitly, we define SMEAR as computing the output of an expert routing block using a merged expert $\bar{f}$ computed as $\bar{f}(u) = f(u, \sum_i R(v)_i \, \theta_i)$. The merged expert shares the same architecture as the individual experts $f_i$. Notably, the input of the routing block is only ever processed by $\bar{f}$; activations are never fed to any of the individual experts. To break symmetry, all experts are randomly initialized with different parameter values. Importantly, all operations in SMEAR are fully differentiable, enabling standard gradient-based end-to-end learning. In addition, SMEAR retains the ability to learn an adaptive routing strategy that can route different examples to different experts without relying on hand-specified tags (as in Latent Skills and tag-based routing). We will later show qualitatively that this leads to meaningful specialization of different experts in real-world experiments. **Computational Costs** Importantly, SMEAR only ever computes the output of a single expert, suggesting that SMEAR’s computational cost could be comparable to single-expert discrete routing and significantly lower than ensemble routing. However, the averaging operation in SMEAR incurs a nontrivial computational cost. To quantify this cost, we focus on the common expert architecture comprising a dense layer that projects from $d$-dimensional activations to an $m$-dimensional vector followed by a nonlinearity and an additional dense layer projecting from $m$ dimensions back to $d$. For simplicity, we ignore the (relatively minor) cost of the nonlinearity. We assume the input is a length-$L$ sequence of activations with size $L \times d$. In this case, computing the output of the merged expert incurs a computational cost of approximately $L \times 4 \times d \times m$ FLOPs and ensemble routing with $N$ experts would require $N \times L \times 4 \times d \times m$ FLOPs. SMEAR additionally must average together the parameters of $N$ experts, which costs an additional $N \times 2 \times d \times m$ FLOPs. Some past work on models with discrete routing has the router choose a different expert for each entry in the input sequence of activations (e.g. Fedus et al., 2021; Lewis et al., 2021; Roller et al., 2021). This would require computing the expert average $L$ times, which would make the cost of SMEAR similar to that of ensemble routing. We therefore focus on settings where models make a single routing choice for an entire input example (e.g. Gururangan et al., 2021; Kudugunta et al., 2021; Ye et al., 2022). This results in a total cost of approximately $(L \times 4 + N \times 2) \times d \times m$ FLOPs for SMEAR.
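A minimal sketch of this merged-expert computation, again building on the `ExpertRoutingBlock` and two-layer expert assumed in section 2 (the parameter names below follow that assumed architecture, not the paper):

```python
import torch

class SMEARRouting(ExpertRoutingBlock):
    """SMEAR: route through a single expert whose parameters are the
    router-weighted average of all experts' parameters (a sketch)."""
    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        probs = self.routing_probs(v)  # R(v), shape (N,); one decision per example
        # Average each parameter tensor across experts, weighted by R(v)_i.
        merged = {}
        for name, _ in self.experts[0].named_parameters():
            stacked = torch.stack([dict(e.named_parameters())[name] for e in self.experts])
            merged[name] = torch.einsum("n,n...->...", probs, stacked)
        # Run the input through the merged expert f(u, sum_i R(v)_i * theta_i);
        # this mirrors Expert.net: Linear(d, m) -> ReLU -> Linear(m, d).
        h = torch.relu(u @ merged["net.0.weight"].T + merged["net.0.bias"])
        return h @ merged["net.2.weight"].T + merged["net.2.bias"]
```

Note that the forward pass touches each expert's parameters only through the weighted average, so gradients flow exactly to both the experts and the router without any estimation.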
Returning to the cost comparison: as long as $L \times 4 \gg N \times 2$, SMEAR and discrete routing have roughly the same computational costs. Given that $L$ is on the order of hundreds or thousands of tokens for text-based tasks and on the order of thousands for vision tasks, $L \times 4$ will be much larger than $N \times 2$ as long as there is a modest number of experts. In our experiments within the T5-GLUE setting, where $L = 128$ and $N = 8$, this results in a minimal runtime difference. Furthermore, we would expect SMEAR to be approximately $\frac{N \times L}{N + L}$ times cheaper than ensemble routing. More concretely, we will later experimentally validate that the wall-clock time required to process an example with SMEAR in real-world experiments is roughly the same as using discrete routing and significantly faster than ensemble routing. ### 4 EXPERIMENTS In order to thoroughly evaluate the effectiveness of SMEAR, we perform experiments in two real-world settings that differ in model architecture and modality. We are particularly interested in whether a given approach for learning routing outperforms the heuristic routing strategies described in section 2. As such, we focus on experimental settings where a performant “tag routing” baseline can be designed, i.e. where we have oracle access to metadata that can be used to appropriately route examples. Specifically, we experiment with fine-tuning T5.1.1 Base (Raffel et al., 2020) on datasets from GLUE (Wang et al., 2018) (referred to as T5-GLUE) and fine-tuning a ResNet18 (He et al., 2016) on DomainNet (Peng et al., 2019) (ResNet-DomainNet). In these settings, we add experts to an existing pre-trained backbone in the same way that Adapters are used for parameter-efficient fine-tuning (Houlsby et al., 2019). While past work has also considered using discrete routing among experts to train large-scale models from scratch (Fedus et al., 2021; Shazeer et al., 2017), we focus on modular fine-tuned models in this work and leave large-scale experiments for future work. **T5-GLUE** In this setting, we focus on training a T5 model (Raffel et al., 2020) on the GLUE meta-benchmark (Wang et al., 2018) for natural language understanding. We provide background on the GLUE dataset and the example format we use in appendix B.1. We follow the approach of Mahabadi et al. (2021) for splitting each GLUE dataset into train, eval, and test splits. Past work has demonstrated improved performance on RTE by co-training with MNLI (Phang et al., 2018; Devlin et al., 2018; Pruksachatkun et al., 2020; Vu et al., 2020; Choshen et al., 2022a), and we correspondingly found that sharing an expert between RTE and MNLI produced a stronger tag routing strategy. In the interest of making our baselines as strong as possible, we use this improved tag routing scheme in all experiments. We use the pretrained T5.1.1 Base model as the backbone and adapt the model in a way similar to adding adapters (Houlsby et al., 2019) for a single task, i.e. we keep all pretrained parameters frozen except for layer normalization parameters and insert expert routing blocks after the self-attention, feed-forward, and cross-attention modules. The T5.1.1 Base model has 12 Transformer layers in both the encoder and decoder, resulting in a total of $12 \times 2 = 24$ blocks in the encoder and $12 \times 3 = 36$ blocks in the decoder, or 60 expert routing blocks in total. In each block, we introduce eight experts (one for each dataset in GLUE). The router architecture is simply a linear classifier, i.e.
a linear projection layer consisting of a weight matrix of shape $d \times N$, where $d$ is the model dimension and $N$ is the number of experts in the MoE layer, followed by a softmax nonlinearity. To help avoid saturating the softmax nonlinearity, we apply layer normalization both to the input of the router as well as to the rows of the linear layer. In the encoder, the router takes as input the preceding hidden states, which are averaged across the sequence and fed into the router. In the decoder, the routers receive the average of the encoder’s final hidden states instead of the decoder hidden states to prevent information leakage from later target tokens. We also include expert dropout (Liu et al., 2022b), where each expert is dropped with a probability of 0.1, wherever it was found to be beneficial (a detailed ablation can be found in table 1). In GLUE, dataset sizes vary by three orders of magnitude, and we therefore found that load-balancing losses (as used e.g. by Shazeer et al. (2017); Fedus et al. (2021); Lepikhin et al. (2020) to encourage uniform usage across experts) tended to hurt performance, so we did not include them. **ResNet-DomainNet** In this setting, we focus on adapting an ImageNet pre-trained ResNet18 model (He et al., 2016) to datasets within DomainNet (Peng et al., 2019). DomainNet is a collection of object recognition datasets that cover six distinct domains and all share the same label space corresponding to 345 object categories. We treat the domain of each example as its tag. As in the T5-GLUE setting, we freeze the pretrained model and insert eight expert routing blocks, one after each of the eight residual layer groups in the model. Each block includes six experts corresponding to the number of domains. We use the same architecture for routers as in T5-GLUE and feed average-pooled hidden states into the router to compute the routing probability. Experts in this setting use batch normalization on their input instead of layer normalization on the output, following Rebuffi et al. (2017). As in T5-GLUE, we omit load-balancing losses due to dramatically different sizes across domains in DomainNet. Full details of hyperparameters and training timings for each setting are presented in appendix B. **Results** To assess the overall effectiveness of routing strategies learned with SMEAR, we compare to learned routing using the gradient estimators, heuristic routing strategies, and modular baselines from section 2. A summary of our results is shown in fig. 2. First, we find that models using routing strategies learned through gradient estimation often underperform heuristic routing strategies – while the best-performing estimator (REINFORCE) in T5-GLUE outperforms tag routing, all estimators perform worse than tag routing in ResNet-DomainNet. On the other hand, we observed some cases where gradient estimation-based routing outperforms hash or single-expert routing, which suggests that the learned routing strategies were nontrivial. Pertinently, in all experimental settings, SMEAR matches or outperforms every other routing strategy, including both routing learned by gradient estimators and all heuristic routing strategies. In particular, SMEAR achieves a 2.7% improvement over tag routing in T5-GLUE and a 0.6% improvement over tag routing in ResNet-DomainNet, suggesting effective specialization and sharing of experts.
SMEAR additionally outperforms the single-expert parameter-matched baseline (1× parameters) by 1.4% in T5-GLUE and 1.2% in ResNet-DomainNet, further highlighting the importance of modularity. As an upper bound on performance, we also compare SMEAR to expert ensembling (“Ensemble”), which averages the outputs of all experts and incurs significantly higher computational cost. SMEAR matches the performance of ensemble routing in T5-GLUE and modestly underperforms it in ResNet-DomainNet, despite being significantly computationally cheaper. Compared to Adamix, which similarly averages experts but does not learn a routing strategy, SMEAR achieves 3.2% higher performance in T5-GLUE and 4% higher in ResNet-DomainNet. Since the Soft MoE method averages input tokens, it is inapplicable to the encoder-decoder model in T5-GLUE, where future tokens are not available for averaging in the decoder during inference. Hence, we include Soft MoE only for ResNet-DomainNet, where SMEAR outperforms it by 1.5%. Additionally, SMEAR exceeds the DSelect-k method by 2.7% in ResNet-DomainNet. However, despite extensive hyperparameter tuning we encountered training instabilities with the DSelect-k method in T5-GLUE and therefore omit those results. Moreover, while the performance improvement of SMEAR over Latent Skills is relatively small (0.6% in T5-GLUE and 0.1% in ResNet-DomainNet), a major advantage of SMEAR over Latent Skills is that it does not assume access to oracle tags (which are not always available in real-world settings) and instead learns an adaptive routing strategy. Finally, we highlight the consistency of improvements achieved by SMEAR across a diverse range of datasets and architectures, confirming its generality and robustness. We additionally plot the inference speed (in terms of number of examples processed per second) of each method in fig. 2. The single-expert, Adamix, Hash, and Tag routing methods are the fastest since they do not use any routing networks. Despite the slight overhead of averaging the weights in SMEAR, we observe that its inference speed is almost identical to that of discrete adaptive routing (as learned via gradient estimation techniques). This confirms that the performance gains attained by SMEAR do not incur significant additional costs. Ensembling expert outputs is the slowest, with a 1.2× slowdown in T5-GLUE and a 1.3× slowdown in ResNet-DomainNet compared to SMEAR. Figure 3: Average routing distributions produced by SMEAR for two routers from the T5-GLUE model (a) and two from the ResNet-DomainNet model (b). For a given router, we average all routing distributions across all examples from a given dataset. **Scaling** Thus far, we have always set the number of experts equal to the number of tasks (in T5-GLUE) or domains (in DomainNet). However, with learned routing there is no reason to force this constraint, so we therefore tested the scalability of SMEAR by evaluating its performance with twice as many experts (16 for T5-GLUE and 12 for ResNet-DomainNet). We found a significant improvement (0.8%) when doubling the number of experts on ResNet-DomainNet, but no significant change on T5-GLUE (81.3 ± 1.1 vs. 81.6 ± 1.1). This suggests there is no benefit to increasing capacity in the T5-GLUE setting. The complete results for doubling the number of experts are presented in appendix F (labeled “SMEAR 2×”).
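For reference, the router architecture shared by both experimental settings, a layer-normalized linear classifier over averaged hidden states as described earlier in this section, admits a compact sketch; normalizing the weight rows on the fly is one possible reading of the description and should be treated as an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Router(nn.Module):
    """Linear-classifier router with layer norm on both its input and the rows
    of its weight matrix, to avoid saturating the softmax (a sketch)."""
    def __init__(self, d: int, num_experts: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_experts, d) * d ** -0.5)
        self.input_norm = nn.LayerNorm(d)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Average hidden states across the sequence to get one routing input.
        v = self.input_norm(hidden_states.mean(dim=-2))          # shape (..., d)
        w = F.layer_norm(self.weight, (self.weight.shape[-1],))  # normalize each row
        return torch.softmax(v @ w.T, dim=-1)                    # R(v)
```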
Qualitative Analysis In this section, we provide a qualitative analysis of the routing learned by SMEAR by visualizing the average router distribution across all examples in a given dataset for every router in each model. Figure 3 shows four selected visualizations (two from a SMEAR-based model trained in T5-GLUE and two from ResNet-DomainNet). Across the two T5-GLUE router distributions shown in fig. 3, we observe significantly different behavior – one mainly follows a tag routing-style strategy whereas the other routes most datasets to the same expert. However, we note that the tag-style router utilizes shared experts for RTE, MRPC, and MNLI; notably, these tasks are somewhat similar in that they all involve determining similarity among pairs of sentences. In the single-expert-style router, STS-B (the only regression task) and SST-2 (which has a distinct output vocabulary) are given dedicated experts, and MNLI (a large and relatively challenging dataset) is routed through many different experts. More broadly, we highlight that there is generally a great deal of sparsity in the learned routing distributions, suggesting a significant amount of expert specialization. In ResNet-DomainNet, we can see that examples from the Quickdraw domain are routed to two specific experts in both cases. Additionally, we observe that the router distributions of the Painting and Real domains are highly correlated. Other domains such as Clipart and Sketch seem to use experts evenly. Interestingly, there is less expert specialization in the ResNet-DomainNet model, suggesting that there may be more similarities between the individual domains in DomainNet compared to the tasks in GLUE. In general, other approaches for learning routing did not exhibit such intuitive or meaningful specialization and sharing. In ResNet-DomainNet, Top-$k$ demonstrates uniform routing in the initial layers but chooses a single expert in the last layer. REINFORCE, ST-Gumbel, and DSelect-$k$ tend to exhibit mostly degenerate single-expert routing. Interestingly, all these gradient estimators learn to assign a distinct expert for the Quickdraw dataset. However, this degree of specialization is insufficient for achieving superior performance scores. In T5-GLUE, these estimators display degenerate routing in some layers, while showing a tendency to share a few experts (approximately 3 out of 8) in other layers across tasks. Methods such as Latent Skills and Ensemble utilize most experts in the MoE layer (similar to SMEAR). Routing distribution visualizations for all methods and all layers can be found in appendix G. ### 5 RELATED WORK Models with Conditional Computation Various works have investigated ways of learning discrete routing strategies. Deecke et al. (2020); Hazimeh et al. (2021); Dua et al. (2021) start training with most of the experts activated and gradually introduce sparsity. Kudugunta et al. (2021); Ponti et al. (2022); Ma et al. (2019); Gupta et al. (2022) group examples from the same task together and introduce task-specific parameters in the router. Other works avoid learned routing by hand-crafting heuristic routing strategies. Gururangan et al. (2021) build sparsely activated language models where different domains use separate experts and then weight the experts for new domains. Tang et al. (2022); Pfeiffer et al. (2022; 2020) assign experts based on task-related human knowledge. Our focus on settings where performant routing schemes can be hand-designed takes inspiration from this line of work.
Because sparsely activated models disentangle computation and parameter count, significant effort has gone into leveraging conditional computation to create massive pre-trained models with a feasible computation cost (Fedus et al., 2022; Shazeer et al., 2017; Fedus et al., 2021; Du et al., 2022; Zoph et al., 2022; Yu et al., 2022). Many works explore different routing methods in this setting, with a major focus on balancing the load across experts (Lewis et al., 2021; Zhou et al., 2022; Kool et al., 2021; Roller et al., 2021). Another line of work aims to introduce ways to convert trained dense models into similar-sized sparse models with a lower computational footprint (Lee-Thorp & Ainslie, 2022; Zhang et al., 2022; Komatsuzaki et al., 2022). Previous studies have theoretically analyzed gradient estimators, focusing on the bias and variance of these gradients and suggesting enhancements through improved relaxation techniques (Grathwohl et al., 2017), variance reduction via increased sampling (Kool et al., 2019), and unbiased load balancing across experts (Kool et al., 2021). The fundamental theoretical advantage of our method lies in its ability to enable exact gradient computation through standard backpropagation. Issues with Conditional Computation A great deal of past work has highlighted issues with models that use conditional computation. Clark et al. (2022) study the scaling laws of sparse language models and discover a computational cutoff above which no additional benefits are observed. Relatedly, Du et al. (2022) observe worse results when further scaling up the number of experts. Chi et al. (2022) highlight that using the model’s activations as input to the router can cause the representations to “collapse”. Dai et al. (2022) demonstrate that learned routing decisions can fluctuate significantly over training. Mittal et al. (2022) create a set of simple and compositional data distributions and show that systems with modular architectures cannot find the most performant solution when trained end-to-end. Ye et al. (2022) experiment with various designs for multi-task learning with task-level routing and find that the performance never surpasses simple multi-task baselines. We show that these issues can be avoided with a fully differentiable routing strategy that does not increase computational costs. Weight Averaging Methods Many prior works utilize parameter averaging for ensembling. Wortsman et al. (2022c); Ilharco et al. (2022) average the weights of a pre-trained and a fine-tuned model to improve performance on target tasks as well as robustness to distribution shift. Choshen et al. (2022b) similarly show that merging multiple models fine-tuned on different datasets can provide a better initialization than using the original pre-trained model for further fine-tuning on new unseen datasets. Yang et al. (2019); Zhang et al. (2021) compute convolution kernels by averaging the weights of individual kernels. Since the convolution operation is linear, weight averaging and ensembling are mathematically equivalent. However, SMEAR performs averaging on non-linear and parameter-efficient experts that, when trained alone, can match the performance of the fully fine-tuned model (Houlsby et al., 2019). π-Tuning (Wu et al., 2023) employs a set of existing task-specific experts, retrieves the top-$k$ experts for a downstream task, and learns to interpolate among them.
While π-Tuning enables transfer learning to a new downstream task by learning to interpolate, our focus is on developing a routing algorithm that learns how to share or specialize experts without using any metadata. Model averaging is also a common step in distributed optimization, where it is widely used in federated learning (McMahan et al., 2017) and has recently been used for distributed fine-tuning (Wortsman et al., 2022a), multi-domain training (Li et al., 2022), and multitask training (Don-Yehiya et al., 2022). There are also works that utilize different styles of merging instead of weight averaging of parameters, such as reweighting parameters in accordance with their approximate Fisher information (Matena & Raffel, 2021), aligning features by fitting a linear projection (Jin et al., 2022), and permuting columns to account for permutation symmetries (Ainsworth et al., 2022). ### 6 CONCLUSION In this work, we sought to address shortcomings of models with discrete routing among experts that can lead them to underperform heuristic non-learned routing. We hypothesized that these issues stem from the gradient estimation techniques required to propagate gradients through discrete routing decisions and therefore focused on designing an expert routing architecture that allows exact calculation of gradients. Our approach, called SMEAR, works by computing a weighted average of expert parameters where the weighting is set according to the output of a learned router. We compared the performance of models using SMEAR to discrete routing models that were trained via various gradient estimation techniques. In experimental settings covering different modalities and model architectures, we found that SMEAR outperformed all models with discrete routing as well as performant heuristic routing strategies. Notably, this performance boost comes with no increase in computational costs. SMEAR also matched or outperformed existing state-of-the-art methods for learning modular models through expert averaging while removing the requirement for oracle task labels. Through qualitative analysis, we further confirmed that the experts learned in a model using SMEAR specialize to different types of inputs and that the router learns a nontrivial strategy that exploits commonalities across different examples. In future work, we are interested in exploring different expert architectures (Liu et al., 2022a; Hu et al., 2021) and improved merging methods (Matena & Raffel, 2021; Ainsworth et al., 2022; Jin et al., 2022). Given access to a larger amount of compute, we would also be excited to try out SMEAR in the large-scale settings where discrete routing has been used (Fedus et al., 2021; Zoph et al., 2022; Du et al., 2022) to see whether it helps fix the poor scaling properties of models with discrete routing (Clark et al., 2022). REFERENCES Samuel K Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. *arXiv preprint arXiv:2209.04836*, 2022. Stephen H Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, et al. Promptsource: An integrated development environment and repository for natural language prompts. *arXiv preprint arXiv:2202.01279*, 2022. Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. *arXiv preprint arXiv:2106.08254*, 2021. David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba.
Understanding the role of individual units in a deep neural network. *Proceedings of the National Academy of Sciences*, 117(48):30071–30078, 2020. Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*, 2013. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth pascal recognizing textual entailment challenge. In *TAC*, 2009. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. *arXiv preprint arXiv:1708.00055*, 2017. Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, and Furu Wei. On the representation collapse of sparse mixture of experts. *arXiv preprint arXiv:2204.09179*, 2022. Leshem Choshen, Elad Venezian, Shachar Don-Yehiya, Noam Slonim, and Yoav Katz. Where to start? Analyzing the potential value of intermediate models. *arXiv preprint arXiv:2211.00107*, 2022a. Leshem Choshen, Elad Venezian, Noam Slonim, and Yoav Katz. Fusing finetuned models for better pretraining. *arXiv preprint arXiv:2204.03044*, 2022b. Aidan Clark, Diego de Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, et al. Unified scaling laws for routed language models. In *International Conference on Machine Learning*, pp. 4057–4086. PMLR, 2022. Róbert Csordás, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Are neural nets modular? Inspecting functional modularity through differentiable weight masks. In *International Conference on Learning Representations*, 2021. Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, and Furu Wei. Stablemoe: Stable routing strategy for mixture of experts. *arXiv preprint arXiv:2204.08396*, 2022. Nicola De Cao, Wilker Aziz, and Ivan Titov. Editing factual knowledge in language models. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, 2021. Lucas Deecke, Timothy Hospedales, and Hakan Bilen. Latent domain learning with dynamic residual adapters. *arXiv preprint arXiv:2006.00996*, 2020. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Bill Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Third International Workshop on Paraphrasing (IWP2005)*, 2005. Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, and Leshem Choshen. Cold fusion: Collaborative descent for distributed multitask finetuning. *arXiv preprint arXiv:2212.01378*, 2022. Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. Glam: Efficient scaling of language models with mixture-of-experts. In *International Conference on Machine Learning*, pp. 5547–5569. PMLR, 2022.
TW0MVSflg5
The authors use a 2D-CNN to obtain super-pixel features, meaning that the features for similarity computation are derived from a pixel region, not individual pixels. This approach seems misaligned with the NeRF setting, which relies only on per-pixel information instead of per-region data. Such a discrepancy makes the ray reliability estimation seem less accurate and contradicts the proposed method.
SELF-EVOLVING NEURAL RADIANCE FIELDS Anonymous authors Paper under double-blind review ABSTRACT Recently, neural radiance field (NeRF) has shown remarkable performance in novel view synthesis and 3D reconstruction. However, it still requires abundant high-quality images, limiting its applicability in real-world scenarios. To overcome this limitation, recent works have focused on training NeRF only with sparse viewpoints by introducing additional regularization, a setting often called few-shot NeRF. We observe that due to the under-constrained nature of the task, solely using additional regularization is not enough to prevent the model from overfitting to sparse viewpoints. In this paper, we propose a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies a self-training framework to NeRF to address these problems. We formulate few-shot NeRF into a teacher-student framework to guide the network to learn a more robust representation of the scene by training the student with additional pseudo labels generated from the teacher. By distilling ray-level pseudo labels using distinct distillation schemes for reliable and unreliable rays obtained with our novel reliability estimation method, we enable NeRF to learn a more accurate and robust geometry of the 3D scene. We show that applying our self-training framework to existing models improves the quality of the rendered images and achieves state-of-the-art performance in multiple settings. 1 INTRODUCTION Novel view synthesis, which aims to generate novel views of a 3D scene from given images, is one of the essential tasks in computer vision. Recently, neural radiance field (NeRF) (Mildenhall et al., 2021) has shown remarkable performance for this task, modeling highly detailed 3D geometry and specular effects solely from given image information. However, the requirement of abundant high-quality images with accurate poses restricts its application to real-world scenarios, as reducing the input views causes NeRF to produce broken geometry and undergo severe performance degradation. Numerous works (Kim et al., 2022; Jain et al., 2021; Wang et al., 2023; Niemeyer et al., 2022; Yu et al., 2021) have tried to address this problem, known as few-shot NeRF, whose aim is to robustly optimize NeRF in scenarios where only a few sparse input images are given. To compensate for the few-shot NeRF’s under-constrained nature, they either utilize the prior knowledge of a pre-trained model (Jain et al., 2021; Yu et al., 2021) such as CLIP (Radford et al., 2021) or a 2D CNN (Yu et al., 2021), or introduce additional regularization (Niemeyer et al., 2022; Kim et al., 2022; Kwak et al., 2023), showing compelling results. However, these works show limited success in addressing the fundamental issue of overfitting, as NeRF tends to memorize the known input viewpoints instead of understanding the geometry of the scene. In our toy experiment, this behavior is clearly shown in Figure 1, where existing methods (even with regularization (Fridovich-Keil et al., 2023; Niemeyer et al., 2022; Kim et al., 2022)) trained with 3 views show a noticeable drop in PSNR even with slight changes of viewpoint. Utilizing additional ground truth data for viewpoints that were unknown to the few-shot setting, we compare the rendered images from few-shot NeRF with the ground truth images and verify that there are accurately modeled regions even in unknown viewpoints that are far from known ones.
This indicates that if we can accurately identify reliable regions, the rendered regions can be utilized as additional data obtained at no extra cost. Based on these facts, we formulate the few-shot NeRF task into a self-training framework by considering the rendered images as pseudo labels and training a new NeRF network with confident pseudo labels as additional data. Figure 1: Toy experiment to verify the robustness of models trained with sparse views. (Left) The red camera (a) indicates the camera position used for training and cameras from (b-e) are used to verify the robustness of models when the novel viewpoint gets further from the known viewpoint. (Middle) For each viewpoint (a-e), we visualize the rendered images by RegNeRF (Niemeyer et al., 2022), baseline ($K$-Planes (Fridovich-Keil et al., 2023)), and SE-NeRF from top to bottom rows. (Right) Starting from viewpoint (a), we show the PSNR graph of the rendered images as the viewpoint moves gradually from (a-e). Existing models show extreme PSNR drops, even with slight movements. Expanding upon this idea, we introduce a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), which enables a more robust training of few-shot NeRF in a self-supervised manner. We train the few-shot NeRF under an iterative teacher-student framework, in which pseudo labels for geometry and appearance generated by the teacher NeRF are distilled to the student NeRF, and the trained student serves as the teacher network in the next iteration for progressive improvement. To estimate the reliability of the pseudo labels, we utilize the semantic features of a pre-trained 2D CNN to measure the consistency of the pseudo labels within multiple viewpoints. We also apply distinct distillation schemes for reliable and unreliable rays, in which reliable ray labels are directly distilled to the student, while unreliable rays undergo a regularization process to distill more robust geometry. Our experimental results show that our framework successfully guides existing NeRF models towards a more robust geometry of the 3D scene in the few-shot NeRF setting without using any external 3D priors or generative models (Xu et al., 2022). Also, we show the versatility of our framework, which can be applied to any existing model without changing its structure. We evaluate our approach on synthetic and real-life datasets, achieving state-of-the-art results in multiple settings. 2 RELATED WORK Neural radiance fields (NeRF). Synthesizing images from novel views of a 3D scene given multi-view images is a long-standing goal of computer vision. Recently, neural radiance fields (NeRF) (Mildenhall et al., 2021) has achieved great success by optimizing a single MLP that learns to estimate the radiance of the queried coordinates. The MLP learns the density $\sigma \in \mathbb{R}$ and color $c \in \mathbb{R}^3$ of continuous coordinates $x \in \mathbb{R}^3$, and is further utilized to explicitly render the volume of the scene using ray marching (Kajiya & Von Herzen, 1984). Due to its impressive performance in modeling the 3D scene, various follow-ups (Deng et al., 2022; Jain et al., 2021; Kim et al., 2022; Fridovich-Keil et al., 2023; Niemeyer et al., 2022; Wang et al., 2023; Roessle et al., 2022; Yang et al., 2023) adopted NeRF as their baseline model to solve various 3D tasks. Few-shot NeRF. Although capable of successfully modeling 3D scenes, NeRF requires abundant high-quality images with accurate poses, making it hard to apply in real-world scenarios.
Several methods have paved the way to circumvent these issues by showing that the network can be successfully trained even when the input images are limited. One approach addresses the problem using prior knowledge from pre-trained local CNNs (Yu et al., 2021; Chibane et al., 2021; Kwak et al., 2023). PixelNeRF (Yu et al., 2021), for instance, employs a NeRF conditioned with features extracted by a pre-trained encoder. Another line of research introduces a geometric or depth-based regularization to the network (Jain et al., 2021; Kim et al., 2022; Niemeyer et al., 2022; Deng et al., 2022). DietNeRF (Jain et al., 2021) proposes an auxiliary semantic consistency loss to encourage realistic renderings at novel poses. RegNeRF (Niemeyer et al., 2022) regularizes the geometry and appearance of patches rendered from unobserved viewpoints. DS-NeRF (Deng et al., 2022) introduces additional depth supervision from sparse point clouds obtained in the COLMAP (Schonberger & Frahm, 2016) process. Self-training. Self-training is one of the earliest semi-supervised learning methods (Fralick, 1967; Scudder, 1965) mainly used in settings where obtaining sufficient labels is expensive (e.g., Instance segmentation). Self-training exploits the unlabeled data by pseudo labeling with a teacher model, which is then combined with the labeled data and used in the student training process. Noisy student (Xie et al., 2020) succeeds in continually training a better student by initializing a larger model as the student, and injecting noise into the data and network. Meta pseudo labels (Pham et al., 2021), on the other hand, optimizes the teacher model by evaluating the student’s performance on labeled data, guiding the teacher to generate better pseudo labels. We bring self-training to NeRFs by formulating the few-shot NeRF task as a semi-supervised learning task. Our approach can be seen as an analogous method of noisy student (Xie et al., 2020) that exploits NeRF as the teacher and student model, with teacher-generated unknown views as the unlabeled data. 3 PRELIMINARIES AND MOTIVATION 3.1 Preliminaries Given a set of training images \( S = \{ I_i | i \in \{1, \ldots, N\} \} \), NeRF (Mildenhall et al., 2021) represents the scene as a continuous function \( f(\cdot; \theta) \), a neural network with parameters \( \theta \). The network renders images by querying the 3D points \( x \in \mathbb{R}^3 \) and view direction \( d \in \mathbb{R}^2 \) transformed by a positional encoding \( \gamma(\cdot) \) to output a color value \( c \in \mathbb{R}^3 \) and a density value \( \sigma \in \mathbb{R} \) such that \( \{c, \sigma\} = f(\gamma(x), \gamma(d); \theta) \). The positional encoding transforms the inputs into Fourier features (Tancik et al., 2020) that facilitate learning high-frequency details. Given a ray parameterized as \( r(t) = o + td \), starting from camera center \( o \) along the direction \( d \), the expected color value \( C(r; \theta) \) along the ray \( r(t) \) from \( t_n \) to \( t_f \) is rendered as follows: \[ C(r; \theta) = \int_{t_n}^{t_f} T(t)\sigma(r(t); \theta)c(r(t), d; \theta)dt, \quad T(t) = \exp \left( -\int_{t_n}^{t} \sigma(r(s); \theta)ds \right), \] where \( T(t) \) denotes the accumulated transmittance along the ray from \( t_n \) to \( t \). 
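In practice, eq. (1) is evaluated by numerical quadrature over points sampled along each ray, following the standard procedure of Mildenhall et al. (2021). The following is a minimal sketch of that discretization; the choice of sample spacing and the large final interval are conventional assumptions rather than details specific to this paper:

```python
import torch

def render_ray_color(sigma: torch.Tensor, color: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Quadrature version of eq. (1) for a single ray.

    sigma: (S,) densities at samples r(t_k); color: (S, 3) colors; t: (S,) sorted depths.
    """
    delta = torch.diff(t, append=t[-1:] + 1e10)  # spacing between adjacent samples
    alpha = 1.0 - torch.exp(-sigma * delta)      # per-sample opacity
    # Accumulated transmittance T_k = prod_{j < k} (1 - alpha_j).
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha                       # contribution of each sample
    return (weights.unsqueeze(-1) * color).sum(dim=0)  # expected color C(r; theta)
```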
To optimize the network \( f(\cdot; \theta) \), the photometric loss \( L_{\text{photo}}(\theta) \) enforces the rendered pixel color value \( C(r; \theta) \) to be consistent with the ground-truth pixel color value \( C_{gt}(r) \): \[ L_{\text{photo}}(\theta) = \sum_{r \in R} \|C_{gt}(r) - C(r; \theta)\|_2^2, \] where \( R \) is the set of rays corresponding to each pixel in the image set \( S \). 3.2 Motivation Despite its impressive performance, NeRF has the critical drawback of requiring large amounts of posed input images \( S \) for robust scene reconstruction. Naively optimizing NeRF in a few-shot setting (e.g., \( |S| < 10 \)) results in NeRF producing erroneous artifacts and undergoing major breakdowns in the geometry due to the task’s under-constrained nature (Niemeyer et al., 2022; Kim et al., 2022). A closer look reveals important details regarding the nature of few-shot NeRF optimization. As described by the PSNR graph in Figure 1, all existing methods show a noticeable PSNR drop even with slight viewpoint changes, which indicates the tendency of NeRF to memorize the given input views. Such a tendency results in broken geometry that looks perfect in known viewpoints but progressively degenerates as the rendering view gets further away from known views. Although training with additional data directly solves this problem, obtaining high-quality images with accurate poses is extremely expensive. Instead, we notice that although images rendered from NeRF trained with only sparse viewpoints contain artifacts and erroneous geometry, there are reliable pixels of the image that are close to the corresponding ground truth pixels, which can be used as additional data. Figure 2: Illustration of our overall framework for applying self-training to NeRF. SE-NeRF utilizes the self-training framework to distill the knowledge of learned appearance and 3D geometry from teacher to student. The process is done iteratively as the student becomes the new teacher. To check the feasibility of using reliable pixels from the rendered images as additional data to prevent NeRF from overfitting, we conduct an experiment in which we first optimize NeRF under the identical few-shot setting. After training a teacher NeRF with three images, we train a new student NeRF with the extended set of images $S \cup S^+$, where $S^+$ is the set of rendered images. To train with only the reliable pixels of $S^+$, we define a binary reliability mask $M(r)$, which masks out pixels where the difference between the rendered color value $C(r; \theta^T)$ and its ground truth color value $C_{gt}(r)$ is above a predetermined threshold. Training the student NeRF network to follow the reliably rendered color values $\{C(r; \theta^T) | M(r) = 1\}$ of the teacher can be seen as a weak distillation from the teacher to the student. The new student NeRF is trained with the following loss function: $$L_{photo}(\theta) + \lambda \sum_{r \in R^+} M(r)\|C(r; \theta^T) - C(r; \theta)\|^2_2,$$ where $R^+$ is a set of rays corresponding to each pixel in the rendered image set $S^+$, and $\lambda$ denotes the weight parameter. The result of this experiment, labeled “GT Masked” in the PSNR graph of Figure 1, shows that the student trained with $K$-Planes (Fridovich-Keil et al., 2023) as the baseline displays a staggering improvement in performance, with unknown viewpoints showing higher PSNR values and the rendered geometry remaining highly robust and coherent.
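As a concrete reference for the toy experiment, a minimal sketch of the ground-truth-based reliability mask and the masked distillation term of eq. (3) follows; the color-difference threshold value and the use of a mean instead of a sum are illustrative assumptions:

```python
import torch

def gt_reliability_mask(c_teacher: torch.Tensor, c_gt: torch.Tensor,
                        thresh: float = 0.05) -> torch.Tensor:
    """Binary mask M(r): 1 where the teacher's rendered color is close to ground truth.

    Only usable in the toy experiment, where ground truth for unknown views exists.
    """
    return (torch.norm(c_teacher - c_gt, dim=-1) < thresh).float()

def masked_distillation_loss(c_student, c_teacher, c_gt, lam: float = 1.0,
                             thresh: float = 0.05):
    """Second term of eq. (3); the photometric loss on known rays is added separately."""
    m = gt_reliability_mask(c_teacher, c_gt, thresh)
    return lam * (m * ((c_teacher.detach() - c_student) ** 2).sum(dim=-1)).mean()
```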
This leads us to deduce that a major cause of few-shot NeRF geometry breakdown is its tendency to memorize the given sparse viewpoints, and that selective distillation of additional reliable rays is crucial to enhance the robustness and coherence of the 3D geometry. Based on this observation, our concern now moves on to how to estimate the reliability mask $M$ for the rendered novel images of $S^+$ to develop a better few-shot NeRF model. 4 METHOD 4.1 TEACHER-STUDENT FRAMEWORK Teacher network optimization. A teacher network is trained naively by optimizing the standard NeRF photometric loss where the number of known viewpoints is $|S| < 10$. During this process, NeRF recovers accurate geometry for certain regions and inaccurate, broken geometry in other regions. The parameters of the teacher network $\theta^T$ are optimized as in the following equation: $$\theta^T = \arg\min_\theta L_{photo}(\theta).$$ Pseudo labeling with teacher network. By evaluating the optimized teacher NeRF representation $\theta^T$, we can generate per-ray pseudo labels $\{C(r; \theta^T) | r \in R^+\}$ from the rendered images $S^+$ from unknown viewpoints. To accurately identify and distill the reliable regions of $S^+$ to the student model, we assess the reliability of every pseudo label in $R^+$ to acquire a reliability mask $M(r)$ using a novel reliability estimation method that we describe in detail in Section 4.2. Student network optimization. The student network $\theta^S$ is then trained with the extended training set $S \cup S^+$, with the reliability mask $M$ taken into account. In addition to the photometric loss on the initial image set $S$, the student network is also optimized with a distillation loss that encourages it to follow the robustly reconstructed parts of the teacher model in $S^+$. In the distillation process, the estimated reliability mask $M$ determines how each ray should be distilled, a process which we explain further in Section 4.3. In summary, the student network $\theta^S$ is optimized by the following equation: $$\theta^S = \arg\min_{\theta} \left\{ L_{\text{photo}}(\theta) + \lambda \sum_{r \in R^+} M(r) \| C(r; \theta^T) - C(r; \theta) \|_2^2 \right\},$$ where $C(r; \theta^T)$ and $C(r; \theta)$ are the rendered colors of the teacher and student models, respectively, and $\lambda$ denotes the weight parameter. Iterative labeling and training. After the student network is fully optimized, the trained student network becomes the teacher network of the next iteration for another distillation process to a newly initialized NeRF, as described in Figure 2. We achieve improvement of the NeRF’s quality and robustness in every iteration with the help of the continuously extended dataset. 4.2 Ray Reliability Estimation To estimate the reliability of per-ray pseudo labels $\{C(r; \theta^T)\mid r \in R^+\}$ from the rendered images $S^+$, we expand upon an important insight: if a ray has accurately recovered a surface location and this location is projected to multiple viewpoints, the semantics of the projected locations should be consistent except for occlusions between viewpoints. This idea has been used in previous works that formulate NeRF for refined surface reconstruction (Chibane et al., 2021), but our work is the first to leverage it for explicitly modeling ray reliability in a self-training setting.
4.2 Ray Reliability Estimation

To estimate the reliability of the per-ray pseudo labels $\{C(r; \theta^T)\mid r \in R^+\}$ from the rendered images $S^+$, we expand upon an important insight: if a ray has accurately recovered a surface location and this location is projected to multiple viewpoints, the semantics of the projected locations should be consistent, except for occlusions between viewpoints. This idea has been used in previous works that formulate NeRF for refined surface reconstruction (Chibane et al., 2021), but our work is the first to leverage it for explicitly modeling ray reliability in a self-training setting.

The surface location recovered by a ray $r$ corresponding to pixel $p_i$ of viewpoint $i$ can be projected to another viewpoint $j$ with the extrinsic matrix $R_{i \rightarrow j}$, the intrinsic matrix $K$, and the estimated depth $D_i$ from viewpoint $i$ via the following projection equation:
$$p_{i \rightarrow j} \sim KR_{i \rightarrow j}D_i(r)K^{-1}p_i.$$
Using the projection equation, we can form corresponding pixel pairs between viewpoints $i$ and $j$, such as $(p_i, p_j)$ where $p_j = p_{i \rightarrow j}$. Similarly, if we acquire pixel-level feature maps for viewpoints $i$ and $j$ using a pre-trained 2D CNN, we can form corresponding feature pairs $(f^i_{p_i}, f^j_{p_j})$. In our case, by projecting the feature vector of each pseudo label $\{C(r; \theta^T)\mid r \in R^+\}$ to all given input viewpoints, we obtain $|S|$ feature pairs for every pseudo label.

To generate a reliability mask for each ray, we check whether the ray has at least one feature pair whose similarity value is higher than the threshold $\tau$; if so, the feature consistency of the ray's rendered geometry has been confirmed, and we classify the ray as reliable. Summarized as an equation, the binary reliability mask $M(r)$ for a ray $r$ rendered from viewpoint $i$ is defined as follows:
$$M(r) = \min \left( \sum_{j \in S} \mathbb{1}\!\left[ \frac{f^i_{p_i} \cdot f^j_{p_j}}{\| f^i_{p_i} \| \| f^j_{p_j} \|} > \tau \right],\; 1 \right).$$

To prevent unreliable rays from being misclassified as reliable, we must carefully choose the threshold $\tau$. Although using a fixed value for $\tau$ is straightforward, we find that choosing an adequate value is extremely cumbersome, as the similarity distribution of each scene varies greatly. Instead, we adopt an adaptive thresholding method, which chooses the threshold as the $(1 - \alpha)$-th percentile of the similarity distribution, where $\alpha$ is a hyperparameter in the range $\alpha \in [0, 1]$. This enables the threshold $\tau$ to be dynamically adjusted to each scene, leading to a better classification of the reliable rays.

4.3 Reliability-based Distillation

To guide the student network to learn a more robust representation of the scene, we distill the label information from the teacher to the student with two distinct losses based on each ray's reliability. By remembering the rays evaluated in the teacher network and re-evaluating the same rays in the student network, the geometry and color information of reliable rays is directly distilled into the student network through a distillation loss, while the rays classified as unreliable are regularized with nearby reliable rays for improved geometry before the distillation loss is applied.

Figure 3: Distillation of pseudo labels. After estimating the reliability of the rays from unknown views, we apply distinct distillation schemes for reliable and unreliable rays. Reliable rays are directly distilled to the student while we aggregate the nearby reliable rays to regularize the unreliable rays.

Reliable ray distillation. Since we assume the reliable rays' appearance and geometry have been accurately predicted by the teacher network, we directly distill their rendered color so that the student network faithfully follows the outputs of the teacher for these reliable rays.
With the teacher-generated per-ray pseudo labels \( \{C(r; \theta^T) | r \in R^+\} \) from the rendered images \( S^+ \) and the estimated reliability mask \( M \), the appearance of a reliable ray is distilled by the reformulated photometric loss \( L_c^R \):
\[ L_c^R(\theta) = \sum_{r \in R^+} M(r) \| C(r; \theta^T) - C(r; \theta) \|_2^2. \]
In addition to the photometric loss \( L_c^R \), we follow Deng et al. (2022) and Roessle et al. (2022) in providing depth supervision to NeRF. As the teacher network \( \theta^T \) also outputs the density \( \sigma(r; \theta^T) \) for each of the rays, we distill the density weights of the sampled points of the reliable rays to the student network. Within the same ray, we select an identical number of points randomly sampled from evenly spaced bins along the ray. This retains the advantage of injecting noise into the student, as in Xie et al. (2020): randomly sampling points from each bin gives corresponding points slightly different positions, which acts as additional noise for the student. The density distillation is formulated by the geometry distillation loss \( L_g^R \), the L2 loss between the density values of corresponding points within the teacher and student rays, with the teacher rays' density values \( \sigma^T \) serving as pseudo ground-truth labels. Therefore, for reliable rays, our distillation loss along the camera ray \( r(t) = o + td \) is defined as follows:
\[ L_g^R(\theta) = \sum_{r \in R^+} \sum_{t,t' \in T} M(r) \| \sigma(r(t); \theta^T) - \sigma(r(t'); \theta) \|_2^2, \]
where \( T \) refers to the evenly spaced bins from \( t_n \) to \( t_f \) along the ray, and \( t \) and \( t' \) indicate randomly selected points from each bin.

Unreliable ray distillation. In traditional semi-supervised methods, unreliable labels are ignored to prevent the confirmation bias problem. Similarly, unreliable rays must not be directly distilled, as they are assumed to have captured inaccurate geometry. However, building on the prior knowledge that depth changes smoothly along a surface, we propose a novel method for regularizing the unreliable rays with the geometric priors of nearby reliable rays, dubbed prior-based distillation. To distill the knowledge of nearby reliable rays, we calculate a weighted average of the nearby reliable rays' density distributions and distill this density to the student. As described in Figure 3, we apply a Gaussian mask around an unreliable ray \( r \) to calculate per-ray weights for the nearby reliable rays. The intuition behind this design choice is straightforward: the closer a reliable ray is to an unreliable ray, the more likely the geometry of the two rays is to be similar. Based on this, we apply the prior-based geometry distillation loss \( L_g^P \), the L2 loss between the weighted-average density \( \tilde{\sigma}(r; \theta^T) \) and the student's density outputs \( \sigma(r; \theta) \), described in the following equation:
\[ L_g^P(\theta) = \sum_{r \in R^+} \sum_{t,t' \in T} (1 - M(r)) \| \tilde{\sigma}(r(t); \theta^T) - \sigma(r(t'); \theta) \|_2^2. \]
We apply the prior-based geometry distillation loss to the unreliable rays only when adjacent reliable rays exist. A more detailed explanation can be found in Appendix B.3.
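The two distillation branches can be sketched as follows. This is a minimal sketch under simplifying assumptions (teacher and student densities are pre-sampled at matched bins, and the Gaussian neighbor weights are precomputed); all names and shapes are hypothetical.

```python
import torch

def distillation_losses(color_t, color_s, sigma_t, sigma_s, mask, gauss_w):
    """Reliable/unreliable ray distillation terms (L_c^R, L_g^R, L_g^P).

    color_t, color_s: [R, 3]  rendered colors of teacher and student rays
    sigma_t, sigma_s: [R, B]  densities at matched bin samples along each ray
    mask:             [R]     binary reliability mask M(r)
    gauss_w:          [R, R]  Gaussian weights between rays (for neighbors)
    """
    # Reliable rays: follow the teacher's color and density directly.
    l_c = (mask * ((color_t - color_s) ** 2).sum(dim=-1)).sum()
    l_g = (mask[:, None] * (sigma_t - sigma_s) ** 2).sum()

    # Unreliable rays: regularize toward the Gaussian-weighted average density
    # of nearby reliable rays (prior-based distillation).
    w = gauss_w * mask[None, :]                        # keep reliable neighbors only
    w = w / torch.clamp(w.sum(dim=-1, keepdim=True), min=1e-8)
    sigma_avg = w @ sigma_t                            # \tilde{sigma}(r; theta_T)
    l_p = ((1 - mask)[:, None] * (sigma_avg - sigma_s) ** 2).sum()

    return l_c, l_g, l_p
```

These three terms would then be combined with the weights $\lambda_c^R$, $\lambda_g^R$, and $\lambda_g^P$ in the total distillation loss below.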
Table 1: Quantitative comparison on NeRF Synthetic and LLFF.

| Methods | NeRF Synthetic Extreme | NeRF Synthetic | LLFF |
|---------------|------------------------|----------------|------|
| | PSNR↑ SSIM↑ LPIPS↓ Avg↓ | PSNR↑ SSIM↑ LPIPS↓ Avg↓ | PSNR↑ SSIM↑ LPIPS↓ Avg↓ |
| NeRF | 14.85 0.73 0.32 0.27 | 19.38 0.82 0.17 0.20 | 17.50 0.50 0.47 0.40 |
| K-Planes | 15.45 0.73 0.28 0.28 | 17.99 0.82 0.18 0.21 | 15.77 0.44 0.46 0.41 |
| DietNeRF | 14.46 0.72 0.28 0.28 | 15.42 0.73 0.21 0.20 | 14.94 0.37 0.50 0.44 |
| InfoNeRF | 14.62 0.74 0.26 0.27 | 18.44 0.80 0.22 0.12 | 13.57 0.33 0.58 0.48 |
| RegNeRF | 13.73 0.70 0.30 0.30 | 13.71 0.79 0.35 0.21 | 19.08 0.59 0.34 0.15 |
| SE-NeRF (NeRF) | 17.41 0.78 0.21 0.22 | 20.53 0.84 0.16 0.19 | 18.10 0.54 0.45 0.38 |
| | (+2.56) (+0.05) (-0.11) (-0.05) | (+1.15) (+0.02) (-0.01) (-0.01) | (+0.60) (+0.04) (-0.02) (-0.02) |
| SE-NeRF (K-Planes) | 17.40* 0.78* 0.23* 0.25* | 17.93* 0.83* 0.17* 0.26* | 16.36* 0.49* 0.44* 0.59* |
| | (+2.04) (+0.05) (-0.05) (-0.04) | (+1.94) (+0.01) (-0.01) (-0.01) | (+0.53) (+0.05) (-0.02) (-0.02) |

Total distillation loss. Finally, our entire distillation loss can be formulated as follows:
$$\theta^S = \arg\min_\theta \{L_{\text{photo}}(\theta) + \lambda_c^R L_c^R(\theta) + \lambda_g^R L_g^R(\theta) + \lambda_g^P L_g^P(\theta)\},$$
where $\lambda_c^R$, $\lambda_g^R$, and $\lambda_g^P$ denote the weight parameters.

Figure 4: Qualitative comparison on NeRF Synthetic Extreme. The results show the rendered images from viewpoints far away from the seen views. A noticeable improvement over existing models regarding artifact and distortion removal can be observed for SE-NeRF.

5 EXPERIMENTS

5.1 Setups

Datasets and metrics. We evaluate our methods on the NeRF Synthetic (Mildenhall et al., 2021) and LLFF (Mildenhall et al., 2019) datasets. For the NeRF Synthetic dataset, we randomly select 4 views from the train set and use 200 images in the test set for evaluation. For LLFF, we choose every 8th image as the held-out test set and randomly select 3 views for training from the remaining images. In addition, we find that the performance of all existing NeRF models on the NeRF Synthetic dataset is largely affected by the randomly selected views. To explore the robustness of our framework and existing methods, we introduce a novel evaluation protocol of training every method with an extreme 3-view setting (NeRF Synthetic Extreme), where all the views are selected from one side of the scene. The selected views can be found in Appendix C. We report PSNR, SSIM (Wang et al., 2004), LPIPS (Zhang et al., 2018), and geometric average (Barron et al., 2021) values for quantitative comparison.

Implementation details. Although any NeRF representation is viable, we adopt K-Planes (Fridovich-Keil et al., 2023) as our main baseline to leverage its memory and time efficiency. Also, we conduct experiments using our framework with NeRF (Mildenhall et al., 2021) and Instant-NGP (Müller et al., 2022) to demonstrate the applicability of our framework. For our reliability estimation method, we use VGGNet (Simonyan & Zisserman, 2014), specifically VGG-19, and utilize the first 4 feature layers located before the pooling layers. We train K-Planes for 20 minutes on NeRF Synthetic and 60 minutes on LLFF using a single RTX 3090, and NeRF is trained for 90 minutes on NeRF Synthetic and 120 minutes on LLFF using 4 RTX 3090 GPUs for each iteration.¹

¹For Instant-NGP, we train the model for 5 minutes on NeRF Synthetic Extreme.

Hyper-parameters. We set the adaptive threshold value at $\alpha = 0.15$ for the first iteration.
To enable the network to benefit from more reliable rays in each subsequent iteration, we employ a curriculum labeling approach that increases $\alpha$ by 0.05 every iteration. As images rendered from views near the initial inputs include more reliable regions, we progressively increase the range in which pseudo labels are generated: we start by selecting views within 10 degrees in $\phi$ and $\theta$ of the initial input views and increase this range over iterations. For the weights of our total distillation loss, we use $\lambda_c^R = 1.0$, $\lambda_g^R = 1.0$, and $\lambda_g^P = 0.005$.

Table 2: Quantitative comparison per-scene on NeRF Synthetic Extreme.

| Methods | chair | drums | ficus | hotdog | lego | materials | ship | mic |
|--------------------------|-------|-------|-------|--------|------|-----------|------|-----|
| NeRF | 15.08 | 11.98 | 17.16 | 13.83 | 16.31| 17.31 | 10.84| 16.29|
| K-Planes | 15.61 | 13.23 | 18.29 | 12.45 | 14.67| 16.30 | 13.35| 19.74|
| Instant-NGP | 17.66 | 12.75 | 18.44 | 13.67 | 13.17| 16.83 | 13.82| 19.05|
| DietNeRF | 16.60 | 8.09 | 18.32 | 19.00 | 11.45| 16.97 | 15.26| 10.01|
| InfoNeRF | 15.38 | 12.48 | 18.59 | 19.04 | 12.27| 15.25 | 7.23 | 16.76|
| RegNeRF | 15.92 | 12.09 | 14.83 | 14.06 | 14.86| 10.53 | 11.44| 16.12|
| SE-NeRF (NeRF) | 19.96 | 14.72 | 19.29 | 16.06 | 16.45| 17.51 | 14.20| 21.09|
| | (+4.88)| (+2.74)| (+2.13)| (+2.23)| (+0.14)| (+0.20)| (+3.36)| (+4.80)|
| SE-NeRF (K-Planes) | 20.54 | 13.38 | 18.33 | 20.14 | 16.65| 17.01 | 13.72| 20.13|
| | (+4.93)| (+0.15)| (+0.04)| (+7.69)| (+1.98)| (+0.71)| (+0.37)| (+0.39)|
| SE-NeRF (Instant-NGP) | 20.46 | 13.34 | 19.07 | 18.15 | 15.99| 17.94 | 14.61| 20.23|
| | (+2.80)| (+0.59)| (+0.63)| (+4.48)| (+2.82)| (+1.11)| (+0.79)| (+1.18)|

5.2 Comparison

Qualitative comparison. Figure 4 and Figure 5 illustrate the robustness of our model to unknown views, even when the pose differs significantly from the training views. Our model demonstrates robust performance on unknown data, surpassing the baselines. This is particularly evident in the "chair" scene, where all existing methods exhibit severe overfitting to the training views, resulting in heavy artifacts when the pose changes significantly from those used during training. RegNeRF fails to capture the shape and geometry in unknown views, and although DietNeRF is capable of capturing the shape of the object accurately, it produces incorrect information, such as transforming the armrests of the chair into wood. In contrast, SE-NeRF maintains the shape of an object even from farther views with less distortion, resulting in the fewest artifacts and misrepresentations.

Quantitative comparison. Table 1 and Table 2 show quantitative comparisons of applying our framework against other few-shot NeRFs and our baseline models on the NeRF Synthetic and LLFF datasets. As shown in Table 1, SE-NeRF outperforms previous few-shot NeRF models in the NeRF Synthetic Extreme and the conventional 4-view settings. By applying SE-NeRF, we observe a general improvement in performance over different methods and different datasets, demonstrating that our framework successfully guides the networks of existing methods to learn more robust knowledge of the 3D scene.

5.3 Ablation study.

Iterative training. As shown in Figure 6, which presents the quantitative results for each iteration, a significant improvement in performance can be observed after the first iteration.
The performance continues to be boosted with each subsequent iteration until convergence. Based on our experimental analysis, we find that after the simultaneous distillation of reliable rays and regularization of unreliable rays in the first iteration, there is much less additional knowledge to distill to the student in certain scenes, which leads to smaller performance gains from the second iteration onward. However, although the performance gain in terms of metrics is small, the remaining artifacts and noise in the images continue to disappear after the first iteration, which is important for perceptual image quality.

**Prior-based ray distillation.** In Table 3, we conduct an ablation study on the "lego" scene of the NeRF Synthetic Extreme setting and show that using both reliable and unreliable ray distillation is crucial to guide the network toward a more robust representation of the scene, achieving the highest results on all metrics. This stands in contrast to existing semi-supervised approaches (Xie et al., 2020; Amini et al., 2023), which typically discard unreliable pseudo labels to prevent the student from learning erroneous information (Arazo et al., 2020). We show that when applying self-training to NeRF, the unreliable labels can be further exploited through the prior knowledge that depth within a 3D space exhibits smoothness.

**Thresholding.** In Table 4, we show the results of SE-NeRF trained on the NeRF Synthetic Extreme setting with different thresholding strategies. Following traditional semi-supervised approaches (Tur et al., 2005; Cascante-Bonilla et al., 2021; Zhang et al., 2021a; Chen et al., 2023), we conduct experiments using a predefined fixed threshold, an adaptive threshold (ours), and a unified threshold, which does not classify pseudo labels as reliable or unreliable but instead uses the similarity value to decide how strongly each label is distilled from the teacher to the student. The adaptive thresholding method resulted in the largest performance gain, supporting the rationale behind our design choice. A comprehensive and detailed analysis regarding the threshold selection process is provided in Appendix B.4.

### Table 3: Ray distillation ablation.

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | Average ↓ |
|-------------------------|--------|--------|---------|-----------|
| K-Planes | 14.67 | 0.68 | 0.31 | 0.30 |
| K-Planes + Reliable | 16.15 (+1.48) | 0.72 (+0.04) | 0.27 (-0.04) | 0.27 (-0.03) |
| K-Planes + Reliable/Unreliable | 16.65 (+1.98) | 0.75 (+0.07) | 0.24 (-0.07) | 0.25 (-0.05) |

### Table 4: Thresholding ablation.

| Threshold | PSNR ↑ | SSIM ↑ | LPIPS ↓ | Avg. ↓ |
|-----------|--------|--------|---------|--------|
| Fixed | 17.02 | 0.77 | 0.25 | 0.25 |
| Unified | 15.95 | 0.73 | 0.28 | 0.27 |
| Adaptive | 17.49 | 0.78 | 0.23 | 0.24 |

## 6 Conclusion and Limitations

In this paper, we present a novel self-training framework, Self-Evolving Neural Radiance Fields (SE-NeRF), specifically designed for few-shot NeRF. By employing a teacher-student framework in conjunction with our unique implicit distillation method, which is based on the estimation of ray reliability through feature consistency, we demonstrate that our self-training approach yields a substantial improvement in performance without the need for any 3D priors or modifications to the original architecture. Our approach achieves state-of-the-art results in multiple settings and shows promise for further development in the field of few-shot NeRF.
However, our framework also shares limitations similar to those of existing semi-supervised approaches. 1) Sensitivity to inappropriate pseudo labels: when unreliable labels are classified as reliable and used to train the student network, this leads to performance degradation of the student model. 2) Teacher initialization: if the initialized teacher network in the first iteration is too poor, our framework fails to enhance the performance of the models even after several iterations. Even with these limitations, our framework works robustly in most situations, and we leave the current limitations as future work.

7 REPRODUCIBILITY STATEMENT

For the reproducibility of our work, we will release all source code and checkpoints used in our experiments. For those who want to apply our self-training framework to existing works, we provide pseudocode for our reliability estimation method for the per-ray pseudo labels and for the overall self-training pipeline.

Algorithm 1 Reliability estimation method for per-ray pseudo labels
1: **Input:** Labeled image $I$, rendered image $I^+$, rendered depth $D^+$, threshold $\tau$
2: **Output:** Mask $M$ for $I^+$
3: $f \leftarrow \text{VGG19}(I)$
4: $f^+ \leftarrow \text{VGG19}(I^+)$
5: for $i \leftarrow 0$ to (Height - 1) do
6: for $j \leftarrow 0$ to (Width - 1) do
7: $(i', j') \leftarrow \text{Warp}(I^+, D^+, I, i, j)$ ▷ pixel $(i, j)$ of $I^+$ is warped to pixel $(i', j')$ of $I$ using rendered depth $D^+$
8: $S \leftarrow \text{CosineSimilarity}(f^+_{i,j}, f_{i',j'})$
9: if $S > \tau$ then
10: $M_{i,j} \leftarrow 1$
11: else
12: $M_{i,j} \leftarrow 0$
13: end if
14: end for
15: end for

Algorithm 2 Self-Training
1: **Input:** Teacher network $T$, set of labeled rays $R$, set of rendered rays $R^+$
2: **Output:** Teacher network $T$ for the next iteration
3: Initialize $S$ ▷ Initialize Student Network
4: for each step do
5: Loss $\leftarrow 0$
6: for each $r$ in $R$ do
7: Loss $\leftarrow$ Loss + L2($C_{gt}(r)$, Color($S$, $r$))
8: end for
9: for each $r$ in $R^+$ do
10: Evaluate $M(r)$
11: if $M(r) = 1$ then
12: Loss $\leftarrow$ Loss + L2(Color($T$, $r$), Color($S$, $r$)) ▷ Reliable RGB Loss
13: Loss $\leftarrow$ Loss + L2(Weight($T$, $r$), Weight($S$, $r$)) ▷ Reliable Density Loss
14: else
15: Loss $\leftarrow$ Loss + L2(GaussianWeight($T$, $r$), Weight($S$, $r$)) ▷ Unreliable Density Loss
16: end if
17: end for
18: Update $S$ with Loss
19: end for
20: $T \leftarrow S$
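For readers who prefer executable code, the following is a minimal Python rendering of Algorithm 1. It is a sketch under stated assumptions: `warp` stands in for the projection equation of Section 4.2 and is assumed to return matched pixel coordinates at the feature-map resolution, and for brevity it uses the full torchvision VGG-19 feature stack rather than the four intermediate layers used in our experiments.

```python
import torch
from torchvision.models import vgg19

def reliability_mask(image, image_plus, depth_plus, warp, alpha=0.15):
    """Estimate the per-pixel reliability mask M for a rendered image I+.

    image:      labeled input image I, shape [3, H, W]
    image_plus: image I+ rendered by the teacher from an unknown view, [3, H, W]
    depth_plus: teacher-rendered depth D+ for I+, shape [H, W]
    warp:       callable implementing the projection equation; assumed to
                return matched (src, dst) pixel coordinates, each [N, 2]
    """
    backbone = vgg19(weights="DEFAULT").features.eval()
    with torch.no_grad():
        f = backbone(image.unsqueeze(0))[0]            # features of I
        f_plus = backbone(image_plus.unsqueeze(0))[0]  # features of I+

    src, dst = warp(depth_plus)          # pixel (i, j) of I+ -> (i', j') of I
    sim = torch.cosine_similarity(
        f_plus[:, src[:, 0], src[:, 1]],               # f+_{i,j}
        f[:, dst[:, 0], dst[:, 1]], dim=0)             # f_{i',j'}

    # Adaptive threshold: the (1 - alpha)-th percentile of the similarities.
    tau = torch.quantile(sim, 1.0 - alpha)
    return (sim > tau).float()                         # flattened mask M
```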
8Itp6Axs9Z
More importantly, the experimental results do not seem to support the central thesis of the paper, namely that the SelfDreamer technique is beneficial specifically in the frame-masked setting:

* SelfDreamer performs better than the alternatives from the very beginning (without frame masking).
* In the experiment with 2x frame masking, none of the methods (neither SelfDreamer nor the baselines) seem to significantly degrade in performance. (With the exception of the Walker Run task, for which Dreamer and TPC degrade in performance. In any case, DreamerPro does not degrade in performance.)
* With 3x masking, we have a small drop in performance across the board for all methods, including SelfDreamer. (The other methods have confidence intervals over performance too large to precisely quantify the drop and compare it to SelfDreamer. Moreover, the drop in performance for Walker Run behaves similarly to the 2x masking setting, in which performance drops greatly for Dreamer and TPC, but not as much for DreamerPro. SelfDreamer beats DreamerPro, as it beat DreamerPro by a similar margin even before any frame masking.)
* The 4x and 5x masking results from the appendix further reinforce the above point, that SelfDreamer does not seem to offer a significant relative resilience to masking compared to DreamerPro.

Overall, it seems that SelfDreamer is indeed a useful modification to Dreamer. What it seems not to be is a method that should be viewed specifically through the lens of improving performance under frame masking. It seems that the key insight that improves performance in the frame-masking setting is that of using prototypes in general, already introduced with DreamerPro. SelfDreamer then seems to be a complex technique that delivers a small improvement over DreamerPro in all scenarios.
SelfDreamer: Dual-Prototypical Regularization for Frame-Masked Model-Based Reinforcement Learning Anonymous authors Paper under double-blind review Abstract In the realm of reinforcement learning (RL), the conventional approach involves training agents in unknown environments using extensive experiences comprising high-dimensional state representations (typically images), actions, and rewards. However, this standard setup imposes substantial data transmission overhead in scenarios where edge devices are employed for data collection, and cloud servers are utilized for model training. This paper introduces a novel paradigm termed "frame-masked RL," which is devised to enhance data efficiency while examining the impact on existing methods. Concurrently, we introduce a model-based algorithm, "SelfDreamer," tailored to mitigate the information loss incurred due to frame masking. SelfDreamer leverages action-transition dual prototypes to embed action information within the world model and align the hidden states in the representation space. Empirical evaluations reveal that SelfDreamer consistently outperforms state-of-the-art methods across six continuous control tasks sourced from the DeepMind Control Suite, demonstrating superior or comparable performance while utilizing only half of the observations from the environment. 1 Introduction Reinforcement learning (RL) is a foundational paradigm within machine learning, primarily dedicated to the training of autonomous agents for effective decision-making and continuous control tasks. In this context, RL agents typically operate in environments characterized by a degree of uncertainty, wherein they receive information regarding the current states and associated rewards iteratively. Recent advancements in RL have witnessed a predilection for representing state signals in the form of images. However, this prevalent approach may raise concerns, especially in cloud computing scenarios where training data collection and model learning occur at disparate endpoints. Notably, when gathering trajectories from resource-constrained edge devices, such as drones or robotic arms, and training a consolidated policy through a central server, the storage and transmission overhead incurred by image-based state representations become substantial. Consequently, the exigency to address the challenges posed by image-based reinforcement learning under conditions of sparse state signals, wherein the performance must be preserved despite concealing a portion of the training data, becomes evident (Fig. 1a). Model-based reinforcement learning (MBRL), as originally conceived by Sutton (1991), emerges as a promising candidate to address the aforementioned challenge. MBRL agents, although devoid of direct interaction with the physical environment, acquire knowledge from a latent world model that simulates real-world dynamics, thereby enhancing data efficiency. In the aforementioned context, the viability of the world model hinges on its capacity to glean meaningful insights from limited state images, while simultaneously having unrestricted access to lightweight scalar representations of actions and rewards. If such a world model can be successfully acquired and accurately emulates the true environment, it offers an enticing solution to the edge-cloud co-design predicament. Within the domain of image-based MBRL, the Dreamer framework, as introduced by Hafner et al. (2020), stands out for its commendable performance. 
Dreamer represents a significant milestone as the first MBRL agent to outperform established model-free RL agents, as delineated by Barth-Maron et al. (2018); Hessel et al. (2018), demonstrating superior sample efficiency in both discrete Figure 1: (a) Illustration of the system configuration featuring a central server and multiple edge devices, where the RL model is trained using data collected from resource-constrained edge devices by a resource-abundant server. To mitigate power and storage overhead on small-scale devices, the concept of frame masking is considered, wherein only observations at select timesteps are stored and transmitted, while lower-dimensional action and reward vectors or scalars are retained. (b) Depiction of the training process within Dreamer, a state-of-the-art MBRL framework, incorporating frame masking. The world model is trained through three objectives: reward prediction, transition prediction, and observation-related tasks (represented by blue, orange, and green arrows). In this study, we pad the masked frames with the previous valid frames, e.g., $o_2$ and $o_3$ are padded by $o_1$, to investigate the potential degradation of existing methods and strategies to restore their performance. and continuous control domains. Dreamer’s world model operates by encoding high-dimensional visual data into a compact latent space, thereby enhancing computational efficiency. This low-dimensional state space, furnished by the world model, facilitates policy training through gradient-based algorithms integrated into a differentiable architecture. The role of image state signals in the context of MBRL is pivotal, as direct reward maximization often falters due to the inherent sparsity and noise of rewards. Dreamer addresses this challenge by employing a reconstruction loss on sequences of visual observations, effectively framing it as an auxiliary task that bridges the gap between the model and the real-world environment. Concurrently, other studies, such as those by Nguyen et al. (2021); Deng et al. (2022), have proposed alternative strategies to bolster the robustness of the latent space, eschewing the reliance on reconstruction and achieving enhanced performance in regular continuous control tasks. Despite the commendable achievements of contemporary MBRL, a notable challenge persists concerning the acquisition of a reliable world model from partial training data, which stems from the reliance on visual observations. Notably, the simulated world model may exhibit distributional disparities compared to the actual environment, leading to unforeseen performance deviations in downstream policies. Even attempts to mitigate this issue, such as padding missing frames with the latest valid frames (Fig. 1b), often result in a flattened latent space, undermining subsequent reward prediction and policy learning. This paper delves into the intricacies of MBRL when confronted with incomplete state signals, specifically in the form of visual observations. Additionally, it explores potential solutions to ameliorate the performance degradation attributed to information sparsity. Drawing inspiration from prototypical learning in computer vision (Snell et al., 2017) and prior work in MBRL (Deng et al., 2022), we propose an innovative algorithm named “SelfDreamer.” This algorithm capitalizes on the concept of action-transition dual-prototypical learning, introducing a self-supervised regularization mechanism that enforces consistent transitions for similar actions. 
This regularization aids in conferring consistency and alignment to the latent space, particularly in the context of image-missing states. Subsequently, we evaluate the efficacy of the proposed algorithm using the standard DeepMind Control Suite, applying frame masking to a subset of images from the environments. The empirical results demonstrate that SelfDreamer consistently outperforms three state-of-the-art MBRL methods across six continuous control tasks, achieving higher final returns and, notably, superior or equivalent performances while utilizing only half of the state images. The contributions of this work can be succinctly summarized as follows: - This study represents a pioneering exploration of sparse state signals in reinforcement learning, frame-masked RL, showcasing a significant enhancement in data efficiency. The results shed light on a new line of research. - In addition to outlining this novel research direction, we introduce SelfDreamer, which incorporates a dual-prototypical mechanism featuring action-consistent transitions to embed action information into the MBRL world model and reform the representation space. - Extensive experimental evaluations affirm the empirical effectiveness of SelfDreamer, as it consistently outperforms state-of-the-art RL methods under standard settings and delivers superior or comparable policies while achieving double data efficiency in frame-masked RL scenarios. 2 PRELIMINARIES 2.1 FRAME-MASKED REINFORCEMENT LEARNING **Algorithm 1:** Edge-cloud co-design with frame-masking Initialize an empty cloud dataset \( D = \{\} \). Initialize policy parameters \( \phi \). while not converged do /* Experience Collecting by the Edge Devices */ Receive \( \phi \) from the server. \( o_1 = \text{env.reset()} \) for timestep \( t = 1..T \) do // Mask frames periodically. if \( t \mod P \neq 1 \) then \( o_t \leftarrow o_{t-1} \) \( a_t \sim \pi_\phi(a_t|o_t) \) \( r_t, o_{t+1} \leftarrow \text{env.step}(a_t) \) Transmit experiences to the global dataset \( D \leftarrow D \cup \{(o_t, a_t, r_t)\}_{t=1}^T \). /* Model Training by the Cloud Server */ for update step \( c = 1..C \) do Sample \( B \) training sequences \( \{(o_t, a_t, r_t)\}_{t=k}^{k+L} \sim D \). Update \( \phi \) by arbitrary RL algorithm. **Notations** - \( o_t \): observation at time step \( t \) - \( a_t \): action at time step \( t \) - \( r_t \): reward at time step \( t \) - \( \pi_\phi(a_t|o_t) \): policy with parameters \( \phi \) **Hyperparameters** - \( T \): length of interaction - \( P \): frame-masking period - \( C \): collect interval - \( B \): batch size - \( L \): sequence length **Frame-masking** For data compression, masked frames are dismissed during experience transmission and are recovered by repeating the previous unmasked frames at the server end. Reinforcement learning is conventionally formulated within the framework of Markov Decision Processes (MDP; Sutton (1991)), characterized by a state space \( S \), an action space \( A \), a reward function \( R \), and a transition function \( T \), sometimes accompanied by a discount factor \( \gamma \). The fundamental objective of an MDP agent is to maximize the cumulative reward by engaging with an unknown environment, where each interaction at time step \( t \) is typically represented as tuples \((s_t, a_t, r_t, s_{t+1})\). 
These tuples signify that (1) the agent receives the current state from the environment and responds with a valid action, and (2) the environment executes the specified action, returns the associated reward, and provides the subsequent state. However, in most RL research, direct access to the internal states of the environment is unavailable, with only observations being accessible, often in the form of visual perceptions (hence, we denote \( o_t \) instead of \( s_t \) as the input to the policy in this paper).

The extensive trial-and-error nature of RL training can render high-dimensional state signals impractical for cloud computing frameworks reliant on data collected from resource-constrained edge devices (Dai et al., 2022). For instance, in standard benchmarks like the DeepMind Control Suite (Tassa et al., 2018) and the Atari Benchmark (Bellemare et al., 2013), agents are required to process large numbers of 64x64 and 210x160 color images, amounting to 500K and 50M observations, respectively.

In response to these challenges, we introduce a novel paradigm termed "frame-masked RL," designed to reduce the demand for environment frames during both training and testing, compared to the conventional RL setting. As outlined in Algorithm 1, edge devices are deployed to interact with the environment and collect experiences \(\{(o_t, a_t, r_t)\}_{t=1}^T\). However, observations are sampled only at intervals of \(P\) time steps (referred to as the "frame-masking period"), with the masked frames padded using the most recent valid observations. This approach maintains the integrity of the entire trajectory by interspersing genuine frames as anchors. Consequently, the agent operates under conditions of sparse state signals, leading to a reduction in data storage and transmission overhead by a factor of \(P\) (the size of action and reward signals being relatively negligible). Importantly, rewards are still computed within the original state space, i.e., the true observation is discarded only after the reward function calculation, preserving valuable information. Simultaneously, a cloud server is employed to receive trajectories from the edge devices; these can be efficiently compressed and reconstructed due to the periodic repetition of observations. Subsequently, the desired model can be trained within the server using any RL algorithm of choice and then returned to the edge device for further iterations. It is worth noting that frame skipping (Mnih et al., 2013) also involves disregarding responses from the environment, but it accomplishes this by repeating actions to capture sufficient dynamics, especially in high-frequency games.
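For concreteness, the masking-and-padding step of Algorithm 1 can be sketched as follows. This is a minimal sketch with hypothetical array shapes; the same padding logic would be applied at the server end when decompressing received trajectories.

```python
import numpy as np

def mask_frames(observations: np.ndarray, period: int) -> np.ndarray:
    """Keep one observation every `period` steps and pad the rest.

    observations: trajectory of shape [T, H, W, C] collected on an edge device.
    Only the kept frames (roughly 1/period of the data) would actually be
    stored and transmitted; here we return the padded trajectory directly.
    """
    padded = observations.copy()
    for t in range(len(padded)):
        if t % period != 0:            # timesteps 0-indexed here, matching
            padded[t] = padded[t - 1]  # `t mod P != 1` in Algorithm 1
    return padded
```

Note that only every `period`-th frame carries new information; the padded copies exist purely so that downstream code can consume fixed-shape trajectories.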
### 2.2 MODEL-BASED REINFORCEMENT LEARNING

In the context of this paper, our primary focus lies in model-based reinforcement learning with frame masking. As such, we first introduce the state-of-the-art framework, Dreamer (Hafner et al., 2020). While we will briefly touch on the training of the policy, it is essential to note that this aspect is not the central emphasis of our work.

**World-model learning.** The central objective of Dreamer is to create a compact world model that encapsulates the transition and reward structures. This world model takes the form of a recurrent state-space model (RSSM; Hafner et al. (2019)).

\[
\begin{align*}
\text{Recurrent model:} & \quad h_t = f_\theta(h_{t-1}, z_{t-1}, a_{t-1}) \\
\text{Representation model:} & \quad z_t \sim p_\theta(z_t | h_t, o_t) \\
\text{Transition predictor:} & \quad \hat{z}_t \sim q_\theta(\hat{z}_t | h_t) \\
\text{Reward predictor:} & \quad \hat{r}_t \sim q_\theta(\hat{r}_t | h_t, z_t) \\
\text{Observation predictor:} & \quad \hat{o}_t \sim q_\theta(\hat{o}_t | h_t, z_t)
\end{align*}
\]

For a comprehensive illustration, refer to Fig. 1b, which depicts the following key components: (1) the recurrent model (red arrows), which comprises a GRU (Cho et al., 2014) responsible for encoding previous actions and observations into the deterministic latent variable \(h_t\); (2) the representation model and the transition predictor (yellow arrows), which introduce the stochastic stream \(z_t\) through variational encoding; (3) the reward predictor (blue arrows), which facilitates downstream policy learning by emulating the reward function; and (4) the observation predictor (green arrows), which contributes to model coherence by reconstructing observations from the model states. These components are jointly trained by maximizing the evidence lower bound (ELBO; Jordan et al. (1999)):

\[
\sum_{t=1}^{T} \mathbb{E}_q \left[ -D_{KL}(p(z_t | h_t, o_t) \,\|\, q(\hat{z}_t | h_t)) + \log q(r_t | h_t, z_t) + \log q(o_t | h_t, z_t) \right]
\]

A descendant of Dreamer, known as DreamerPro (Deng et al., 2022), serves as the basis for our method. DreamerPro augments Dreamer by replacing the reconstruction loss \(J_O\) with two cluster assignment tasks. This modification enhances the robustness of the RSSM by aligning the model's capacity with the underlying nature of states, rather than attempting to fit noisy observations.

**Policy learning by world simulation.** Dreamer operates by alternating between the training of the world model and the policy. Both an actor and a critic, consisting of MLPs with ELU activations, are employed to learn from the latent trajectories generated by the world model. The simulation process commences at each non-terminal state \( s_t = [h_t, z_t] \) encountered during world-model learning. At each step of imagination, an action \( a_\tau \) is sampled from the actor's stochastic policy. Predicted rewards \( \hat{r}_\tau \) and subsequent states \( s_{\tau+1} \) are generated based on the learned world model. Utilizing these simulated trajectories, the actor refines its policy using biased but low-variance straight-through gradients (Kingma & Welling, 2013) and explores by regularizing the output entropy. Simultaneously, the critic is trained to approximate the \( \lambda \)-return (Schulman et al., 2016) using a squared loss.

3 SELFDREAMER

Figure 2: Illustration of SelfDreamer. (a) SelfDreamer leverages action and transition coupling to learn pairs of prototypes, comprising an action prototype \( c^A \) and a transition prototype \( c^T \). (b) The action prototypes are learned by assigning data pairs to the nearest prototype pairs based on action similarity, followed by minimizing the within-cluster distance and maximizing the between-cluster distance. (c) The transition prototypes are learned from data points, capturing common-ground transitions, and are further propagated to refine the dynamics.

3.1 MOTIVATION

The quality of the policy in MBRL significantly hinges on the accuracy of simulations generated by the world model. This model constructs a compact latent representation primarily through observation-related tasks.
In the context of frame-masked reinforcement learning, which offers certain benefits, there arises a potential drawback: the risk of flattening the latent space due to padded observations. Specifically, frame masking may lead to the abandonment of some state signals from the environment, disrupting the coherence of state sequences. Additionally, frame padding could further compound this issue by mapping consecutive states to a single one. This phenomenon bears resemblance to mode collapse in Generative Adversarial Networks (GANs; Srivastava et al. (2017)) and over-smoothing in Graph Convolutional Networks (GCNs; Chen et al. (2020)).

To rejuvenate and rectify the compromised latent representation space, we propose to incorporate the causal relationship between actions and transitions, which we refer to as "action-consistent transitions" (ACT). ACT aims to preserve the consistency of state transitions induced by similar actions. Unlike isolating the action space from the state space (Chandak et al., 2019), our central idea is to implicitly embed action information into the state representation. We hypothesize that this strategy benefits the frame-masked world model for two primary reasons: (1) From a local perspective, action sequences serve as a means to distinguish frame-padded states from their counterparts, acting as a regularization that discourages the model from overfitting to the padded observations. (2) From a global standpoint, the state representation might experience occasional mismatches in the absence of full access to observations. However, with the aid of the action-transition correlation, state consistency and robustness are maintained even in the absence of direct supervision.

One potential concern with this proposal is its applicability, particularly when enforcing a single transition for various states after taking identical actions. For instance, in a continuous control task, the transition of a humanoid character may differ when it is on the ground compared to when it is in the air, despite employing the same action vector. To address this concern and ensure the generalizability of our method, we introduce a novel algorithm named SelfDreamer. SelfDreamer leverages prototypical learning and simultaneously learns two types of prototypes, for actions and transitions. These prototypes are coupled into pairs, as depicted in Fig. 2a, to enforce action-transition relationships and to identify common-ground transitions for similar actions. While these dual prototypes are intertwined, we introduce two novel objective functions into Eqn. 2, \( J_A \) and \( J_T \), which will be elaborated upon in the subsequent sections.

### 3.2 Action-Prototype Learning

For prototypical learning, during each model-learning iteration, we randomly sample an experience sequence of length \( L \) from the replay buffer: \( \{(o_t, a_t, o_{t+1})\}_{t=1}^{L} \sim D \) (reward signals are disregarded, as this method is self-supervised). This process can be scaled up to form batches of sequences, contributing to a more robust distribution estimation. To extend the application of the previously outlined mechanism from discrete action spaces to continuous ones, we initiate the procedure by randomly initializing \( k \) action prototypes. Each action prototype, denoted as \( c_i^A \), is an \( n \)-dimensional continuous vector, contingent on the action space: \( c_i^A \in \mathbb{R}^n, 1 \leq i \leq k \).
Drawing inspiration from the k-means clustering algorithm (MacQueen, 1967), the initial step in action-prototype learning involves clustering action data points into distinct sets, \( S_i^A \), by assigning them to the nearest action prototypes (as defined in Eqn. 3). We employ the cosine distance metric (denoted as \( D_C \)) for measuring the distance between actions, which considers only the orientation of the action vectors. For a more comprehensive discussion on this clustering method, please refer to Section A.2.

\[ S_i^A = \{ a_p : D_C(a_p, c_i^A) \leq D_C(a_p, c_j^A)\ \forall j, 1 \leq j \leq k \}\ \forall i, 1 \leq i \leq k \] (3)

Following the assignment step mentioned above, the objective function \( J_A \) for action-prototype learning is formulated as described in Eqn. 4. The first term in this equation aggregates the distribution of actions assigned to a specific action prototype. As depicted in the left portion of Fig. 2b, action prototypes are generated by minimizing the within-cluster cosine distances. However, to construct meaningful action prototypes, it is essential for each cluster of actions to maintain clear boundaries from one another. Consequently, the second term in the equation aims to maximize the between-cluster distance, ensuring that each cluster represents a unique domain of actions, as illustrated on the right side of Fig. 2b. Given that determining the appropriate number of action groups \( k \) can be challenging even with domain knowledge, this min-max game design proves valuable: it allows for the use of a larger number of prototypes initially, and redundant ones can subsequently be dismissed in an autoregressive manner by moving them further away from those on active duty.

\[ J_A = \sum_{i=1}^{k} \sum_{a \in S_i^A} -D_C(c_i^A, a) + \sum_{i=1}^{k} \sum_{j=i+1}^{k} D_C(c_i^A, c_j^A) \] (4)

### 3.3 Transition-Prototype Learning

Before initiating the learning process for transition prototypes, we first input the sampled experience sequence into the current world model for RSSM state inference, denoted by \( \{(s_t, a_t, s_{t+1})\}_{t=1}^{L} \) with \( s_t = [h_t, z_t] \). Subsequently, we define transitions \( t_t \) as the residuals between adjacent deterministic states, specifically \( h_{t+1} - h_t \), excluding the stochastic states to ensure a variance-free representation. Lastly, since we couple actions and transitions for both the prototypes and the data points, we can partition the transitions into \( k \) transition sets \( S_i^T \) corresponding to the transition prototypes, initialized from a Gaussian distribution: \( c_i^T \in \mathbb{R}^m, \forall 1 \leq i \leq k \), where \( m \) represents the dimension of the deterministic states.

\[ S_i^T = \{ t_p : a_p \in S_i^A \}\ \forall i, 1 \leq i \leq k \] (5)

As visualized in Fig. 2c, a mutual information exchange occurs between the prototypes and the transition data points, where a transition prototype learns from the transitions and subsequently propagates the integrated transition information back into the system. This interaction is formalized through the objective function $J_T$ (Eqn. 6), which employs gradient stopping ($sg$) to block backpropagation into selected models or prototypes. The first term of this objective function trains each transition prototype to fit all transition data points, weighted by the cosine similarity of their associated actions. This results in the learning of a shared latent encoding for similar actions, even when these actions are applied within different states.
Furthermore, the weightings range from $-1$ to $1$, so negative cosine similarity encourages contrastive learning among transition prototypes, preventing the representation from collapsing. The latter term of the objective function is the key to this algorithm, guiding the world model by minimizing the cosine distance between each transition and its corresponding transition prototype. Since transitions are derived from the feedforward process of the world model, gradients can flow through the sequence of inferred model states, thereby enforcing the action-transition regularization discussed in Section 3.1.

$$J_T = \sum_{i=1}^{k} \sum_{a,t} -sg(S_C(c_i^A, a)) * D_C(c_i^T, sg(t)) + \sum_{i=1}^{k} \sum_{t \in S_i^T} -D_C(sg(c_i^T), t)$$ (6)
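Putting Sections 3.2 and 3.3 together, the two objectives can be sketched in PyTorch as below. This is a hedged sketch, not the authors' code: cosine distance $D_C$ is taken as one minus cosine similarity, both objectives are returned in their to-be-maximized form as in Eqns. 4 and 6, and all tensor names and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def cos_dist(x, y):
    # D_C(x, y) = 1 - cosine similarity, broadcast over prototype/data pairs
    return 1.0 - F.cosine_similarity(x, y, dim=-1)

def prototype_objectives(actions, transitions, c_a, c_t):
    """Dual-prototype objectives J_A and J_T (both to be maximized).

    actions:     [N, n]  sampled actions a
    transitions: [N, m]  residuals h_{t+1} - h_t inferred by the world model
    c_a:         [k, n]  action prototypes
    c_t:         [k, m]  transition prototypes
    """
    # Assign each action to its nearest action prototype (Eqn. 3).
    d = cos_dist(actions[:, None, :], c_a[None, :, :])  # [N, k]
    assign = d.argmin(dim=1)

    # J_A (Eqn. 4): minimize within-cluster distance (hence the minus sign)
    # while maximizing between-cluster prototype distance.
    j_a = -d.gather(1, assign[:, None]).sum()
    j_a = j_a + cos_dist(c_a[:, None, :], c_a[None, :, :]).triu(1).sum()

    # J_T, first term: each transition prototype fits all transitions,
    # weighted by the gradient-stopped action similarity sg(S_C(c_i^A, a)).
    s = (1.0 - d).detach()                              # [N, k]
    fit = -(s * cos_dist(c_t[None, :, :],
                         transitions[:, None, :].detach())).sum()

    # J_T, second term: pull each transition toward its fixed prototype,
    # letting gradients flow back into the world model.
    pull = -cos_dist(c_t.detach()[assign], transitions).sum()

    return j_a, fit + pull
```

In training, the negatives of these objectives would simply be added, with suitable weights, to the world-model loss of Eqn. 2.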
4 EXPERIMENTS

Environment Setup. In our evaluation, we concentrate primarily on image-based RL, and as a result, we assess the performance of our method and the baseline algorithms within the context of the DeepMind Control Suite (DMC; Tassa et al. (2018)). This suite encompasses a diverse array of continuous control tasks, and we have selected six of these tasks for our evaluation, consistent with the settings employed in prior work (Deng et al., 2022). The selected tasks include Cartpole Swingup Sparse, Cheetah Run, Cup Catch, Finger Spin, Reacher Easy, and Walker Run. To ensure a comprehensive assessment, we consider three distinct environment setups: (1) Standard DMC: In this setup, we adhere to the default configuration of the DMC, which serves as our baseline environment. (2) Frame-masked DMC: This setup aligns with the frame-masking paradigm described in Algorithm 1 of our paper. Within this configuration, observations in the replay buffer are subjected to frame masking, where observations are masked and padded at a frame-masking period denoted as $P$. In our experiments, we employ frame-masking periods of 2 and 3 to simulate the data-efficient framework proposed in our paper. (3) Natural background DMC: In line with the approach introduced by Nguyen et al. (2021), we introduce a setup where the background in the DMC environment is replaced with random natural videos. In this configuration, the task Cartpole Swingup Sparse is substituted with Cartpole Swingup. Further details on this setup can be found in the original paper.

Baselines. In this study, we focus on MBRL, and our method is compared with several state-of-the-art MBRL frameworks. Specifically, we include Dreamer (Hafner et al., 2021), a foundational algorithm that has paved the way for subsequent research (Chen et al., 2022; Seo et al., 2023; Wu et al., 2023). Additionally, we evaluate our method against two reconstruction-free variants: TPC (Nguyen et al., 2021), which leverages contrastive predictive coding (Oord et al., 2018), and DreamerPro (Deng et al., 2022), a strong baseline that has demonstrated superior or comparable performance to Dreamer, TPC, and Dreaming (Okada & Taniguchi, 2021), another MBRL baseline.

Evaluation Protocol. In accordance with the evaluation protocol established by Deng et al. (2022), our evaluation procedure adheres to the following guidelines: for each of the selected tasks, every model undergoes training for a duration equivalent to $1M$ environment steps. This corresponds to $500K$ actor steps, as the action repeat is configured to two. To assess the performance, the evaluation return is computed at intervals of $10K$ training steps, and the results are averaged over ten episodes for each evaluation point.

4.1 PERFORMANCE IN STANDARD DMC

In our initial set of experiments, we sought to assess the generality and performance of SelfDreamer by comparing it to the baseline methods within the standard DMC. The results of this comparison are summarized in Table 1.

Table 1: Final performance in standard DMC.

| Task | Dreamer | TPC | DreamerPro | SelfDreamer |
|-----------------------|-----------|----------|------------|-------------|
| Cartpole Swingup Sparse | 810 ± 39 | 811 ± 20 | 792 ± 29 | **837 ± 5** |
| Cheetah Run | 755 ± 208 | 713 ± 37 | 892 ± 11 | **901 ± 11**|
| Cup Catch | 679 ± 410 | 926 ± 26 | 957 ± 7 | **958 ± 3** |
| Finger Spin | 553 ± 305 | 663 ± 218| 527 ± 78 | **744 ± 138**|
| Reacher Easy | 849 ± 75 | 487 ± 112| 930 ± 30 | **962 ± 12**|
| Walker Run | 649 ± 162 | 175 ± 16 | 620 ± 76 | **778 ± 18**|

The results reveal that SelfDreamer consistently outperforms all three baseline methods across the evaluated tasks. Specifically, when compared to Dreamer, TPC, and DreamerPro, SelfDreamer exhibits performance improvements of 22%, 75%, and 12%, respectively, on average across the tasks. Additionally, it is noteworthy that SelfDreamer demonstrates relatively minor standard deviations in performance, except for Finger Spin, which exhibits relatively higher instability across all methods. A particular highlight is the performance on Walker Run, where SelfDreamer achieves a remarkable 25% improvement compared to DreamerPro. These findings underscore the effectiveness of the proposed heuristic employed by SelfDreamer, namely, action-consistent transitions, in enhancing the performance of model-based reinforcement learning for continuous control tasks. The results indicate that SelfDreamer holds promise as a robust and competitive approach within the standard DMC setting.

4.2 PERFORMANCE IN FRAME-MASKED DMC

Table 2: Final performance in frame-masked DMC (2x less state signals).

| Task | Dreamer | TPC | DreamerPro | SelfDreamer |
|-----------------------|-----------|----------|------------|-------------|
| Cartpole Swingup Sparse | 831 ± 14 | 831 ± 7 | 788 ± 30 | **837 ± 2** |
| Cheetah Run | 875 ± 18 | 787 ± 66 | 795 ± 111 | **881 ± 16**|
| Cup Catch | 724 ± 322 | 950 ± 10 | 959 ± 12 | **960 ± 4** |
| Finger Spin | 646 ± 199 | 939 ± 21 | 973 ± 7 | **977 ± 2** |
| Reacher Easy | 845 ± 78 | 331 ± 65 | 969 ± 7 | **969 ± 2** |
| Walker Run | 264 ± 94 | 132 ± 49 | 616 ± 73 | **698 ± 17**|

Table 3: Final performance in frame-masked DMC (3x less state signals).

| Task | Dreamer | TPC | DreamerPro | SelfDreamer |
|-----------------------|-----------|----------|------------|-------------|
| Cartpole Swingup Sparse | 818 ± 29 | 742 ± 74 | 767 ± 24 | **822 ± 19**|
| Cheetah Run | 806 ± 103 | 717 ± 94 | 803 ± 54 | **858 ± 4** |
| Cup Catch | 946 ± 11 | 939 ± 5 | 954 ± 6 | **954 ± 4** |
| Finger Spin | 771 ± 189 | 584 ± 39 | 683 ± 190 | **813 ± 110**|
| Reacher Easy | 815 ± 49 | 380 ± 24 | 942 ± 39 | **956 ± 13**|
| Walker Run | 100 ± 54 | 87 ± 27 | 406 ± 9 | **471 ± 24**|

In our second set of experiments, we delve into the realm of frame-masked reinforcement learning, as introduced in Section 2.1. We present the results obtained when employing a frame-masking period of 2 (Table 2) and of 3 (Table 3) to investigate the impact on model performance when reducing visual information. Table 2 presents the results when the frame-masking period is set to 2. Notably, we observe that for Dreamer and TPC, there is a drop in performance by 3% and 7%, respectively, while DreamerPro demonstrates an improvement of 15%.
This improvement extends to certain tasks, with Cartpole Swingup Sparse, Cheetah Run, Cup Catch, and Finger Spin experiencing performance gains of 2%, 4%, 3%, and 25%, respectively. These results underscore the potential of frame-masked reinforcement learning to achieve higher data efficiency while maintaining or even enhancing performance. Moreover, when SelfDreamer is applied, the final return is further improved by 18% compared to DreamerPro "without" frame masking, achieving double data efficiency and highlighting the benefits of the action-transition dual prototypes introduced in Section 3.

Table 3 explores the impact of setting the frame-masking period to 3. In this scenario, Dreamer, TPC, and DreamerPro all experience diminished final returns due to the loss of visual information, with reductions of 4%, 13%, and 11%, respectively. This impact is particularly pronounced in the challenging Walker Run task, where the performance degradation is notable, especially when compared to the results in Table 2. Despite these challenges, SelfDreamer continues to outperform the best baseline, DreamerPro, by 8% on average across all tasks. This suggests that SelfDreamer effectively maintains a robust latent state space in the world model, aiding downstream behavior learning even in scenarios with reduced visual information.

### 4.3 Performance in Natural Background DMC

Table 4: Final performance in natural background DMC.

| Task | Dreamer | TPC | DreamerPro | SelfDreamer |
|---------------|-----------|----------|------------|-------------|
| Cartpole Swingup | 123 ± 26 | 567 ± 60 | 636 ± 95 | 731 ± 51 |
| Cheetah Run | 26 ± 8 | 349 ± 53 | 356 ± 15 | 404 ± 21 |
| Cup Catch | 57 ± 51 | 536 ± 93 | 555 ± 91 | 661 ± 29 |
| Finger Spin | 2 ± 2 | 309 ± 24 | 801 ± 233 | 916 ± 38 |
| Reacher Easy | 101 ± 47 | 705 ± 97 | 672 ± 168 | 701 ± 23 |
| Walker Run | 39 ± 1 | 149 ± 11 | 383 ± 41 | 409 ± 11 |

In our final set of experiments, we explore the performance of SelfDreamer in the context of natural background DMC, where nuisance and task-irrelevant information is introduced to distract the learning process. Table 4 presents the results of these experiments, with a focus on the comparison between SelfDreamer and DreamerPro, which serves as the foundation for our method. The results indicate that SelfDreamer exhibits a 12% performance improvement on average across all tasks when compared to DreamerPro. Additionally, SelfDreamer demonstrates more stable final performance across the tasks. These findings suggest that SelfDreamer is capable of generalizing to model-based reinforcement learning scenarios with noisy observations and distractions introduced by task-irrelevant information. Furthermore, the results imply that SelfDreamer effectively prioritizes the world model's learning of task-relevant information even in challenging and distracting environments, showcasing its robustness and adaptability.

### 5 Conclusion and Future Direction

In this paper, we introduce a novel reinforcement learning framework called "frame-masked RL," which effectively learns from sparse state signals, thereby achieving higher data efficiency. Furthermore, we present "SelfDreamer," a model-based algorithm that leverages prototypical learning and action-transition dual prototypes to mitigate representation-flattening issues in the frame-masked world model.
Our empirical results, based on continuous control tasks within the DeepMind Control Suite, demonstrate that SelfDreamer consistently outperforms three state-of-the-art methods across frame-masked DMC and other experimental settings, highlighting its versatility and effectiveness in model-based reinforcement learning. As our current focus primarily centers on continuous control tasks and model-based RL methods, future research avenues include extending the application of frame masking and investigating the heuristic of action-consistent transitions for (1) tasks involving discrete actions and a diverse array of states, such as those found in the Atari benchmark (Bellemare et al., 2013), and (2) model-free RL methods (Yarats et al., 2022). Additionally, addressing distribution mismatch concerns is an important consideration. In this work, the testing policy is constrained to utilize frame-masked sequences of states. Future research could explore methods to enable the evaluation process to leverage full observations, thereby paving the way for further advancements in this direction. Our work serves as a foundational stepping stone for these prospective research endeavors. REFERENCES Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva Tb, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. *arXiv preprint arXiv:1804.08617*, 2018. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research*, 47:253–279, 2013. Yash Chandak, Georgios Theocharous, James Kostas, Scott Jordan, and Philip Thomas. Learning action representations for reinforcement learning. In *International conference on machine learning*, pp. 941–950. PMLR, 2019. Chang Chen, Yi-Fu Wu, Jaesik Yoon, and Sungjin Ahn. Transdreamer: Reinforcement learning with transformer world models. *arXiv preprint arXiv:2202.09481*, 2022. Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 3438–3445, 2020. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. *arXiv preprint arXiv:1406.1078*, 2014. Hao Dai, Jiashu Wu, Yang Wang, and Chengzhong Xu. Towards scalable and efficient deep-rl in edge computing: A game-based partition approach. *Journal of Parallel and Distributed Computing*, 168:108–119, 2022. Daniel Defays. An efficient algorithm for a complete link method. *The Computer Journal*, 20(4):364–366, 1977. Fei Deng, Ingook Jang, and Sungjin Ahn. Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations. In *International Conference on Machine Learning*, pp. 4956–4975. PMLR, 2022. Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In *kdd*, volume 96, pp. 226–231, 1996. Justin Fu, John Co-Reyes, and Sergey Levine. Ex2: Exploration with exemplar models for deep reinforcement learning. *Advances in neural information processing systems*, 30, 2017. Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P Adams, and Sergey Levine. 
Why generalization in rl is difficult: Epistemic pomdps and implicit partial observability. *Advances in Neural Information Processing Systems*, 34:25502–25515, 2021. Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International conference on machine learning*, pp. 2555–2565. PMLR, 2019. Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. *International Conference on Learning Representations*, 2020. Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. *International Conference on Learning Representations*, 2021. Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018. Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. *Machine learning*, 37:183–233, 1999.
bJ3gFiwRgi
While Equation 12 (Appendix A.1) is the dual of Equation 11, if the domain is extended from linear constraint functions to non-linear constraint functions, the equation would no longer behave as the dual of the original problem as formulated in Equation 11, right? Does it make sense to use this as the lower level problem, in that case?
Meta Inverse Constrained Reinforcement Learning: Convergence Guarantee and Generalization Analysis

Shicheng Liu & Minghui Zhu
Department of Electrical Engineering
Pennsylvania State University
University Park, PA 16802, USA
{sfl5539,muz16}@psu.edu

Abstract

This paper considers the problem of learning the reward function and constraints of an expert from few demonstrations. This problem can be considered a meta-learning problem where we first learn meta-priors over reward functions and constraints from other distinct but related tasks and then adapt the learned meta-priors to new tasks from only a few expert demonstrations. We formulate a bi-level optimization problem where the upper level aims to learn a meta-prior over reward functions and the lower level is to learn a meta-prior over constraints. We propose a novel algorithm to solve this problem and formally guarantee that the algorithm reaches the set of $\epsilon$-stationary points with iteration complexity $O(\frac{1}{\epsilon^2})$. We also quantify the generalization error to an arbitrary new task. Experiments are used to validate that the learned meta-priors can adapt to new tasks with good performance from only a few demonstrations.

1 Introduction

Inverse reinforcement learning (IRL) has been receiving substantial research effort due to its effectiveness in recovering, from an expert's demonstrations, a reward function that can well explain the expert's behavior. In practical applications, however, constraints are ubiquitous, and a reward function combined with a set of constraints can better explain complicated behaviors than a single reward function (Malik et al., 2021). Therefore, inverse constrained reinforcement learning (ICRL) has been proposed to learn constraints from an expert's demonstrations. Current state-of-the-art methods for IRL (Fu et al., 2018; Imani & Ghoreishi, 2021) and ICRL (Scobee & Sastry, 2019) can either learn a reward function in unconstrained environments or infer constraints with access to the ground-truth reward, but cannot infer both. To address this challenge, distributed ICRL (Liu & Zhu, 2022) was proposed to learn both the reward function and constraints of the expert. In this paper, we follow the definition of ICRL in (Liu & Zhu, 2022), which means learning both the reward function and constraints of the expert.

While the aforementioned literature can recover the reward function and constraints for single tasks, these methods typically need large amounts of expert demonstrations (Yu et al., 2019). When it comes to multiple related tasks that share common structural patterns, e.g., navigating to different locations in a common environment (Xu et al., 2019), it could be expensive and inefficient to collect enough demonstrations for each task and then learn the corresponding reward function and constraints separately. Meta-learning (Rajeswaran et al., 2019) has the potential to learn the reward functions and constraints efficiently from few demonstrations. It can exploit the structural similarity of a group of related tasks by learning meta-priors. The learned meta-priors allow for rapid adaptation to new related tasks from only limited data. This motivates us to leverage meta-learning to infer the reward functions and constraints of the experts in new tasks from only a few demonstrations.

Related works.
IRL (Abbeel & Ng, 2004; Ziebart et al., 2008; Ziebart, 2010) and ICRL (Scobee & Sastry, 2019; Malik et al., 2021; Liu & Zhu, 2022) have shown great success in recovering the reward function and constraints from an expert's demonstrations. However, when it comes to multiple related tasks, they all require large amounts of demonstrations for each task. Meta-learning (Finn et al., 2017; Rajeswaran et al., 2019; Xu & Zhu, 2023b) provides a way to learn from limited data by learning the common structural patterns (i.e., meta-priors) of the related tasks and then optimizing for rapid adaptation to unseen tasks from only a few data points. It has achieved state-of-the-art performance in few-shot regression, classification (Finn et al., 2017), and reinforcement learning (Fallah et al., 2021a; Xu & Zhu, 2022). Recently, several meta IRL algorithms have been proposed to recover reward functions from few demonstrations. Specifically, (Yu et al., 2018; Xu et al., 2019) propose to learn a reward parameter initialization that can be adapted to new tasks via only one or a few gradient descent step(s). (Yu et al., 2019; Seyed Ghasemipour et al., 2019) propose to learn a context-conditional model that, given a new task, can encode the task and output the corresponding reward parameters. However, the existing works on meta IRL have two limitations. (i) They do not explicitly deal with constraints. Existing meta-learning algorithms can directly compute the gradient of the meta objective (i.e., the hyper-gradient) when only reward functions are learned (Xu et al., 2019), but cannot compute the hyper-gradient when we also need to deal with constraints. (ii) They do not theoretically guarantee the proposed algorithms' convergence, and more importantly, adaptation performance (i.e., generalization error) on new tasks. This paper proposes the first theoretical framework, and thereby an algorithm, that can learn the reward function and constraints of a new task from only a few demonstrations by first learning meta-priors over reward functions and constraints. While there is no theoretical work on meta IRL, there are several theoretical works on other meta-learning problems. We discuss our distinctions from other related meta-learning theoretical works in Appendix A.14.

**Contribution statement.** Our contributions are threefold. First, we extend ICRL (Liu & Zhu, 2022) to a meta-learning setting where we learn meta-priors over reward functions and constraints in order to adapt to new related tasks from few demonstrations. We formulate a novel bi-level optimization problem to solve it. Second, we propose a novel "meta inverse constrained reinforcement learning" (M-ICRL) algorithm, which can efficiently compute the hyper-gradient, to solve the problem. Third, we provide the iteration complexity $O(\frac{1}{\epsilon^2})$ for the algorithm to reach the set of $\epsilon$-stationary points. More importantly, we quantify the generalization error to an arbitrary new task. It is shown that the generalization error can be sufficiently small if the new task is "close" to the training tasks.

## 2 PROBLEM FORMULATION

This section introduces the definition of a single task and then formulates the meta-learning problem.

### 2.1 SINGLE TASK: ICRL

In our problem, a single task $T_i$ is an ICRL problem (Liu & Zhu, 2022) where a learner aims to learn the reward function and constraints of an expert from the expert's demonstrated trajectories. The expert's decision making is based on a constrained Markov decision process (CMDP).
The task $T_i$'s CMDP $(S, A, P, \gamma, P_0, r_i, c_i, b_i)$ is defined via state set $S$, action set $A$, transition kernel $P$, discount factor $\gamma$, and initial state distribution $P_0$. The probability of transitioning to state $s'$ from $s$ by taking action $a$ is $P(s'|s, a)$. The reward and cost functions of the expert are $r_i, c_i : S \times A \rightarrow \mathbb{R}$. A trajectory of the CMDP is a state-action sequence $\zeta = s_0, a_0, s_1, a_1, \cdots$, and we use $P_\pi$ to denote the trajectory distribution generated by an arbitrary policy $\pi$ where the initial state is drawn from $P_0$. Define $J_{r_i}(\pi) \triangleq E_{\zeta \sim P_\pi}[\sum_{t=0}^{\infty} \gamma^t r_i(s_t, a_t)]$ as the expected cumulative reward under the policy $\pi$ and $J_{c_i}(\pi) \triangleq E_{\zeta \sim P_\pi}[\sum_{t=0}^{\infty} \gamma^t c_i(s_t, a_t)]$ as the expected cumulative cost. The expert's policy $\pi_i$ maximizes $J_{r_i}(\pi)$ subject to $J_{c_i}(\pi) \leq b_i$, where $b_i$ is a pre-defined budget. The expert can roll out $\pi_i$ to demonstrate a set of trajectories $D_i = \{\zeta^j\}_{j=1}^{|D_i|}$, where $\zeta^j = s^j_0, a^j_0, s^j_1, a^j_1, \cdots$. A learner observes $D_i$ and aims to use parameterized models $r_\theta$ and $c_\omega$ with parameters $\theta$ and $\omega$ to learn the expert's reward function $r_i$ and cost function $c_i$ by solving the following ICRL problem:

$$\min_{\theta} L_i(\theta, \omega^*(\theta)), \quad \text{s.t. } \omega^*(\theta) = \arg\min_{\omega} G_i(\omega; \theta). \tag{1}$$

The upper-level problem aims to learn a reward function $r_\theta$ that can minimize the expected negative log-likelihood $L_i(\theta, \omega) \triangleq -E_{\zeta \sim P_{\pi_i}}[\sum_{t=0}^{\infty} \gamma^t \log \pi_{\omega;\theta}(a_t|s_t)]$, where $\pi_{\omega;\theta}$ is the constrained soft Bellman policy (see the expression in Appendix A.2) (Liu & Zhu, 2022, 2024) under the reward function $r_\theta$ and cost function $c_\omega$. The constrained soft Bellman policy is an extension of the soft Bellman policy (Ziebart et al., 2010; Zhou et al., 2017) to CMDPs. The soft Bellman policy is widely used in soft Q-learning (Haarnoja et al., 2017) and soft actor-critic (Haarnoja et al., 2018).

The lower-level function $G_i(\omega; \theta) \triangleq \max_{\pi} \left[ H(\pi) + J_{r_\theta}(\pi) - J_{c_\omega}(\pi) \right] + J_{c_\omega}(\pi_i)$ contains an RL problem which aims to find the policy that maximizes the entropy-regularized cumulative reward-minus-cost (i.e., $H(\pi) + J_{r_\theta}(\pi) - J_{c_\omega}(\pi)$), where $H(\pi) \triangleq E_{\zeta \sim P_\pi}\left[-\sum_{t=0}^{\infty} \gamma^t \log \pi(a_t | s_t)\right]$ is the causal entropy. Note that the likelihood $L_i$ is defined on the expert's trajectory distribution $P_{\pi_i}$ while $H(\pi)$ is defined on the trajectory distribution of the current policy $\pi$. The last term $J_{c_\omega}(\pi_i)$ in $G_i$ is constant w.r.t. $\pi$. It is proved (Liu & Zhu, 2022) that the constrained soft Bellman policy is the optimal policy of the RL problem in $G_i(\omega; \theta)$, i.e., $\pi_{\omega; \theta} = \arg \max_{\pi} H(\pi) + J_{r_\theta}(\pi) - J_{c_\omega}(\pi)$. The lower-level problem $\min_{\omega} G_i(\omega; \theta)$ uses adversarial learning to find a cost function $c_\omega$ that makes the best policy (i.e., $\pi_{\omega; \theta}$) perform the worst, while the last term $J_{c_\omega}(\pi_i)$ penalizes cost functions under which the expert has a high cumulative cost.
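To make the quantities in (1) concrete, the following is a minimal sketch (not the authors' code; the tabular reward/cost tables and the trajectory are hypothetical) that computes the discounted cumulative reward and cost of a single demonstrated trajectory, i.e., the per-trajectory quantities that $J_{r_\theta}$, $J_{c_\omega}$, and the empirical $\hat{J}_c$ defined below average.

```python
# Minimal sketch (hypothetical tabular setting, not the authors' code):
# discounted cumulative reward/cost of one demonstrated trajectory.
import numpy as np

def discounted_sum(values, gamma=0.99):
    """Compute sum_t gamma^t * values[t] for a finite-horizon trajectory."""
    discounts = gamma ** np.arange(len(values))
    return float(np.sum(discounts * values))

rng = np.random.default_rng(0)
reward_table = rng.normal(size=(5, 2))   # hypothetical r_theta over 5 states, 2 actions
cost_table = rng.uniform(size=(5, 2))    # hypothetical c_omega

# A demonstrated trajectory zeta^j = s_0, a_0, s_1, a_1, ...
states = [0, 2, 3, 1, 4]
actions = [1, 0, 0, 1, 0]

rewards = np.array([reward_table[s, a] for s, a in zip(states, actions)])
costs = np.array([cost_table[s, a] for s, a in zip(states, actions)])
print(f"per-trajectory discounted reward: {discounted_sum(rewards):.3f}")
print(f"per-trajectory discounted cost (feeds the last term of G_i): {discounted_sum(costs):.3f}")
```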
We discuss the formulation of (1) in detail in Appendix A.1. Since problem (1) is defined in expectation but the learner only observes $D_i$, in practice, the learner solves an empirical problem defined on $D_i$. Given a trajectory $\zeta^j = s^j_0, a^j_0, \ldots$, we define $\hat{J}_c(\zeta^j) \triangleq \sum_{t=0}^{\infty} \gamma^t c(s^j_t, a^j_t)$ as the empirical cumulative cost. Then the empirical problem the learner solves is

$$\min_{\theta} \hat{L}_i(\theta, \hat{\omega}^*(\theta), D_i), \quad \text{s.t.} \quad \hat{\omega}^*(\theta) = \arg \min_{\omega} \hat{G}_i(\omega; \theta, D_i),$$

where $\hat{L}_i(\theta, \omega; D_i) \triangleq -\frac{1}{|D_i|} \sum_{j=1}^{|D_i|} \sum_{t=0}^{\infty} \gamma^t \log \pi_{\omega;\theta}(a^j_t | s^j_t)$ and $\hat{G}_i(\omega; \theta, D_i) \triangleq \max_{\pi} \left[ H(\pi) + J_{r_\theta}(\pi) - J_{c_\omega}(\pi) \right] + \frac{1}{|D_i|} \sum_{j=1}^{|D_i|} \hat{J}_{c_\omega}(\zeta^j)$.

### 2.2 Multiple tasks: M-ICRL

ICRL in (1) can successfully recover the reward and cost functions of the expert (Liu & Zhu, 2022, 2024). However, it typically needs a large data set for each task when it comes to multiple related tasks. To learn the reward and cost functions from few demonstrations, we leverage meta-learning, which optimizes for the ability to learn efficiently on new tasks.

It is typically assumed in meta-learning that there is a set of $m$ training tasks $\{T_i\}_{i=1}^m$ which share the CMDP. The difference among tasks is that each task $T_i$ has its own reward function $r_i$, cost function $c_i$, and budget $b_i$. The goal of meta-learning is to optimize for meta-priors of reward and cost functions over the $m$ training tasks $\{T_i\}_{i=1}^m$ such that the reward and cost functions adapted from the learned meta-priors have good performance on new tasks even if the new tasks only have limited data.

When it comes to meta-learning, two state-of-the-art approaches are model-agnostic meta-learning (MAML) (Finn et al., 2017) and meta-learning with implicit gradients (iMAML) (Rajeswaran et al., 2019). MAML is simple and widely implemented in RL (Fallah et al., 2021a) and IRL (Yu et al., 2019), while iMAML shows better empirical performance (Rajeswaran et al., 2019) at the expense of heavier computation in the lower level, since MAML only needs one gradient descent step but iMAML needs to fully solve an optimization problem in the lower level. In M-ICRL, we aim to propose a problem formulation that utilizes the advantages of both methods.

The proposed problem formulation (2)-(3) has a bi-level structure (Ji et al., 2021; Xu & Zhu, 2023a) where we learn the reward meta-prior in the upper level and the cost meta-prior in the lower level:

$$\min_{\theta, \omega} \quad \frac{1}{m} \sum_{i=1}^{m} L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)), \tag{2}$$

$$\text{s.t.} \quad \eta_i^*(\varphi_i, \omega) = \arg\min_{\eta} G_i(\eta; \varphi_i) + \frac{\lambda}{2} \|\eta - \omega\|^2, \tag{3}$$

where $\varphi_i \triangleq \theta - \alpha \frac{\partial}{\partial \theta} L_i(\theta, \eta_i^*(\theta, \omega))$ is the task-specific reward adaptation and $\eta_i^*(\varphi_i, \omega)$ is the task-specific cost adaptation. Note that problem (2)-(3) reduces to the ICRL problem (1) if we only consider one task and do not perform meta-learning on the reward parameter $\theta$ or the cost parameter $\omega$, i.e., $m = 1$, $\alpha = 0$, and $\lambda = 0$.
In this case, we do not have task-specific adaptations $(\varphi_i, \eta_i^*)$. Problem (2)-(3) reduces to MAML if we only do meta-learning on the reward parameter $\theta$ and do not perform meta-learning on the cost parameter $\omega$, i.e., $\lambda = 0$. In this case, we only have the task-specific reward adaptation $\varphi_i$. Problem (2)-(3) reduces to iMAML (explained in Appendix A.4) if we only do meta-learning on the cost parameter $\omega$ and do not perform meta-learning on the reward parameter $\theta$, i.e., $\alpha = 0$. In this case, we only have the task-specific cost adaptation $\eta_i^*$. Problem (2)-(3) can thus reduce to both the MAML that only learns $\theta$ and the iMAML that only learns $\omega$. It utilizes iMAML but does not suffer from the extra computation burden usually caused by iMAML, because ICRL in (1) is already a bi-level formulation and we need to fully solve the lower-level problem anyway. We do not use iMAML for $\theta$ because this would lead to a "three-level" problem.

## 3 THE PROPOSED ALGORITHM

This section proposes a novel algorithm that solves problem (2)-(3). Following (Fallah et al., 2020), we partition the data set $D_i$ of each training task $T_i$ into three subsets $D_{tr}^i$, $D_{eval}^i$, and $D_h^i$ with sizes $|D_{tr}^i|$, $|D_{eval}^i|$, and $|D_h^i|$, respectively. The training set $D_{tr}^i$ with limited data is used to compute the task-specific adaptations $\varphi_i$ and $\eta_i^*$, the evaluation set $D_{eval}^i$ with abundant data is used to compute the hyper-gradients (i.e., the gradients of the upper-level loss function in (2) with respect to $\theta$ and $\omega$), and the set $D_h^i$ is used to compute the second-order terms in the hyper-gradients.

For an arbitrary data set $D$ with size $|D|$, we solve the empirical version (i.e., $\arg\min_\eta \hat{G}_i(\eta; \varphi_i, D) + \frac{\lambda}{2} \|\eta - \omega\|^2$) of the lower-level problem (3) using $(K-1)$-step gradient descent $\hat{\eta}_i(\varphi_i, \omega, D, k+1) = \hat{\eta}_i(\varphi_i, \omega, D, k) - \tau [\nabla_\eta \hat{G}_i(\hat{\eta}_i(\varphi_i, \omega, D, k); \varphi_i, D) + \lambda (\hat{\eta}_i(\varphi_i, \omega, D, k) - \omega)]$, where $\tau$ is the step size. We then use $\hat{\eta}_i(\varphi_i, \omega, D, K)$ as an approximation of $\hat{\eta}_i^*(\varphi_i, \omega, D) \triangleq \arg\min_\eta \hat{G}_i(\eta; \varphi_i, D) + \frac{\lambda}{2} \|\eta - \omega\|^2$. We provide the expressions of all the gradients, including $\nabla_\eta \hat{G}_i$, in Appendix A.3.
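As a concrete illustration, here is a minimal sketch (not the authors' implementation) of this lower-level solver; the quadratic stand-in for $\hat{G}_i$ and all constants are assumptions made only so the example runs end to end.

```python
# Minimal sketch of the lower-level proximal gradient descent described above.
import numpy as np

def solve_lower_level(grad_G, omega, lam=10.0, tau=0.05, K=100):
    """Run K gradient steps on G(eta) + (lam/2)||eta - omega||^2, starting at omega."""
    eta = omega.copy()
    for _ in range(K):
        g = grad_G(eta) + lam * (eta - omega)  # gradient of the regularized objective
        eta = eta - tau * g
    return eta                                 # approximates eta_i^*(phi_i, omega, D)

# Toy quadratic stand-in for G_i: G(eta) = 0.5 * eta^T A eta - b^T eta.
A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, -1.0, 0.5])
grad_G = lambda eta: A @ eta - b

omega = np.zeros(3)                            # cost meta-prior
eta_hat = solve_lower_level(grad_G, omega)
print("approximate task-specific cost adaptation:", eta_hat)
print("exact minimizer:", np.linalg.solve(A + 10.0 * np.eye(3), b + 10.0 * omega))
```

The proximal term $\frac{\lambda}{2}\|\eta-\omega\|^2$ is what makes this inner problem strongly convex for large $\lambda$, which is exactly the property the convergence analysis in Section 4 relies on.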
**Algorithm 1** Meta inverse constrained reinforcement learning (M-ICRL)

**Input:** Initialized reward meta-prior $\theta(0)$ and cost meta-prior $\omega(0)$, task batch size $B$, step size $\alpha$

**Output:** Learned reward meta-prior $\theta(n)$ and cost meta-prior $\omega(n)$

1: for $n = 0, 1, \ldots$ do
2: &nbsp;&nbsp;Sample a batch of training tasks $\{T_i\}_{i=1}^B$ with size $B$
3: &nbsp;&nbsp;for all $T_i$ do
4: &nbsp;&nbsp;&nbsp;&nbsp;Sample the demonstration set $D_{tr}^i$ to compute $\hat{\eta}_i(\theta(n), \omega(n), D_{tr}^i, K)$ and $\hat{\varphi}_i(n) = \theta(n) - \alpha \frac{\partial}{\partial \theta} \hat{L}_i(\theta(n), \hat{\eta}_i(\theta(n), \omega(n), D_{tr}^i, K), D_{tr}^i)$
5: &nbsp;&nbsp;&nbsp;&nbsp;Sample the demonstration sets $D_{eval}^i$ and $D_h^i$
6: &nbsp;&nbsp;&nbsp;&nbsp;$\nabla_{\theta,i}, \nabla_{\omega,i} = \text{Hyper-gradient}(\theta(n), \omega(n), \hat{\varphi}_i(n), D_{tr}^i, D_{eval}^i, D_h^i)$
7: &nbsp;&nbsp;end for
8: &nbsp;&nbsp;$\theta(n+1) = \theta(n) - \frac{\alpha(n)}{B} \sum_{i=1}^B \nabla_{\theta,i}, \quad \omega(n+1) = \omega(n) - \frac{\alpha(n)}{B} \sum_{i=1}^B \nabla_{\omega,i}$
9: end for

At each iteration $n$ in Algorithm 1, the learner samples $B$ tasks from the set of training tasks $\{T_i\}_{i=1}^m$. For each sampled training task $T_i$, the learner first uses the training set $D_{tr}^i$ to compute $\hat{\eta}_i$ and the task-specific reward adaptation $\hat{\varphi}_i$ (line 4). Then the learner uses the training set $D_{tr}^i$, evaluation set $D_{eval}^i$, and $D_h^i$ to compute the hyper-gradients $\nabla_{\theta,i}$ and $\nabla_{\omega,i}$ (line 6). Finally, the learner uses stochastic gradient descent to update the reward and cost meta-priors (line 8). The computation of the hyper-gradients is critical to Algorithm 1. In the following, we first identify the difficulties of computing the hyper-gradients and then provide our solutions.

### 3.1 Challenges of Computing the Hyper-Gradients

The hyper-gradients $\frac{\partial L_i(\varphi_i, \eta_i^*(\varphi_i, \omega))}{\partial \theta}$ and $\frac{\partial L_i(\varphi_i, \eta_i^*(\varphi_i, \omega))}{\partial \omega}$ of problem (2)-(3) are hard to compute. Take $\frac{\partial L_i(\varphi_i, \eta_i^*(\varphi_i, \omega))}{\partial \theta}$ as an example (the derivation of the hyper-gradients is in Appendix A.5):

$$\frac{\partial L_i(\varphi_i, \eta_i^*(\varphi_i, \omega))}{\partial \theta} = \left[ I - \alpha \frac{\partial^2}{\partial \theta^2} L_i(\theta, \eta_i^*(\theta, \omega)) \right] \Big[ \nabla_{\varphi_i} L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)) - \nabla^2_{\varphi_i \eta} G_i(\eta_i^*(\varphi_i, \omega); \varphi_i) \big[ \nabla^2_{\eta\eta} G_i(\eta_i^*(\varphi_i, \omega); \varphi_i) + \lambda I \big]^{-1} \nabla_{\eta} L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)) \Big].$$

(i) The second-order term $\frac{\partial^2}{\partial \theta^2} L_i(\theta, \eta_i^*(\theta, \omega))$ in the first bracket is intractable to compute, since it requires computing $\nabla^2_{\theta\theta} \eta_i^*(\theta, \omega)$, which in turn requires the gradient of an inverse-of-Hessian term $\big[\nabla^2_{\eta\eta} G_i(\eta_i^*(\varphi_i, \omega); \varphi_i) + \lambda I\big]^{-1}$. (ii) The inverse-of-Hessian term $\big[\nabla^2_{\eta\eta} G_i(\eta_i^*(\varphi_i, \omega); \varphi_i) + \lambda I\big]^{-1}$ in the second bracket is expensive to compute, especially when we use neural networks as parameterized models.
(iii) We cannot obtain $\eta_i^*$ but only its approximation, since the optimization oracle is not guaranteed to find the exact optimal solution. This causes errors when we compute the hyper-gradients.

### 3.2 Main Idea to Solve the Challenges

**Solution to challenge (i) (Algorithm 2).** The learner uses the sampled data sets $D_{tr}^i$, $D_{eval}^i$, and $D_h^i$ to approximate the hyper-gradients:

$$g_{\theta,i} \triangleq \left[ I - \alpha \frac{\partial^2}{\partial \theta^2} \hat{L}_i(\theta, \hat{\eta}_i^*(\theta, \omega, D_{tr}^i), D_h^i) \right] \Delta_{\theta,i}, \tag{4}$$

$$g_{\omega,i} \triangleq -\alpha \frac{\partial^2}{\partial \omega \partial \theta} \hat{L}_i(\theta, \hat{\eta}_i^*(\theta, \omega, D_{tr}^i), D_h^i)\, \Delta_{\theta,i} + \Delta_{\omega,i}, \tag{5}$$

where $\Delta_{\theta,i}$ and $\Delta_{\omega,i}$ are the partial gradients of $\hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i)$ with respect to $\varphi_i$ and $\omega$. While the second-order terms (i.e., $\frac{\partial^2}{\partial \theta^2} \hat{L}_i$ and $\frac{\partial^2}{\partial \omega \partial \theta} \hat{L}_i$) in the hyper-gradients (4)-(5) are directly computed in many meta-learning works (Finn et al., 2017; Xu et al., 2019), in our case they are prohibitively hard to compute. Computing the second-order terms requires $\nabla_\theta \hat{\eta}_i^*(\theta, \omega, D_{tr}^i)$, which needs the gradient of an inverse-of-Hessian term, since $\nabla_\theta \hat{\eta}_i^*(\theta, \omega, D_{tr}^i) = -\nabla^2_{\theta\eta} \hat{G}_i(\hat{\eta}_i^*(\theta, \omega, D_{tr}^i); \theta, D_{tr}^i)\big[\nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\theta, \omega, D_{tr}^i); \theta, D_{tr}^i) + \lambda I\big]^{-1}$ and $\nabla_\omega \hat{\eta}_i^*(\theta, \omega, D_{tr}^i) = \lambda \big[\nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\theta, \omega, D_{tr}^i); \theta, D_{tr}^i) + \lambda I\big]^{-1}$ (derived in Appendix A.5). To tackle this challenge, we use a first-order approximation of the products:

$$\frac{\partial^2}{\partial \theta^2} \hat{L}_i(\theta, \hat{\eta}_i^*(\theta, \omega, D_{tr}^i), D_h^i)\, \Delta_{\theta,i} \approx \frac{1}{2\delta} \left[ \frac{\partial}{\partial \theta} \hat{L}_i(\theta + \delta \Delta_{\theta,i}, \hat{\eta}_i^*(\theta + \delta \Delta_{\theta,i}, \omega, D_{tr}^i), D_h^i) - \frac{\partial}{\partial \theta} \hat{L}_i(\theta - \delta \Delta_{\theta,i}, \hat{\eta}_i^*(\theta - \delta \Delta_{\theta,i}, \omega, D_{tr}^i), D_h^i) \right], \tag{6}$$

$$\frac{\partial^2}{\partial \omega \partial \theta} \hat{L}_i(\theta, \hat{\eta}_i^*(\theta, \omega, D_{tr}^i), D_h^i)\, \Delta_{\theta,i} \approx \frac{1}{2\delta} \left[ \frac{\partial}{\partial \omega} \hat{L}_i(\theta + \delta \Delta_{\theta,i}, \hat{\eta}_i^*(\theta + \delta \Delta_{\theta,i}, \omega, D_{tr}^i), D_h^i) - \frac{\partial}{\partial \omega} \hat{L}_i(\theta - \delta \Delta_{\theta,i}, \hat{\eta}_i^*(\theta - \delta \Delta_{\theta,i}, \omega, D_{tr}^i), D_h^i) \right], \tag{7}$$

where $\delta$ is the perturbation magnitude. In Algorithm 2, the learner first approximates the partial gradients $\Delta_{\theta,i}$ and $\Delta_{\omega,i}$ (line 1 in Algorithm 2), and then computes the first-order approximation (lines 2-4 in Algorithm 2).
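The central-difference trick in (6)-(7) is easy to verify numerically. Below is a minimal self-contained sketch (toy quadratic function, not the paper's models) showing that two extra gradient evaluations recover a Hessian-vector product without any explicit second-order computation.

```python
# Minimal sketch of the central-difference Hessian-vector product in (6)-(7).
import numpy as np

def hvp_central_difference(grad_fn, theta, v, delta=1e-4):
    """Approximate (Hessian of f at theta) @ v via central differences of the gradient."""
    return (grad_fn(theta + delta * v) - grad_fn(theta - delta * v)) / (2.0 * delta)

# Toy check on f(theta) = 0.5 * theta^T H theta, whose Hessian is exactly H.
H = np.array([[2.0, 0.5], [0.5, 1.0]])
grad_fn = lambda t: H @ t
theta = np.array([0.3, -0.7])
v = np.array([1.0, 2.0])

print("approximate HVP:", hvp_central_difference(grad_fn, theta, v))
print("exact HVP      :", H @ v)
```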
The output of Algorithm 2 is the approximation of the hyper-gradients (4)-(5).

**Solution to challenge (ii) (Algorithm 3).** The partial gradients of $\hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i)$ with respect to $\varphi_i$ and $\omega$ are, respectively,

$$\Delta_{\theta,i} = \nabla_{\varphi_i} \hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i) - \nabla^2_{\varphi_i \eta} \hat{G}_i(\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i); \hat{\varphi}_i, D_{eval}^i) \big[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i); \hat{\varphi}_i, D_{eval}^i)\big]^{-1} \nabla_{\eta} \hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i), \tag{8}$$

$$\Delta_{\omega,i} = \lambda \big[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i); \hat{\varphi}_i, D_{eval}^i)\big]^{-1} \nabla_{\eta} \hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i). \tag{9}$$

Note that the partial gradients (8)-(9) contain $\big[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i); \hat{\varphi}_i, D_{eval}^i)\big]^{-1} \nabla_{\eta} \hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i)$, where the inverse-of-Hessian term is expensive to compute. Therefore, we solve the following optimization problem instead:

$$\min_x \; \frac{1}{2} x^\top \big[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i); \hat{\varphi}_i, D_{eval}^i)\big] x - \big[\nabla_{\eta} \hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i)\big]^\top x. \tag{10}$$

The optimal solution of problem (10) is exactly $\big[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i); \hat{\varphi}_i, D_{eval}^i)\big]^{-1} \nabla_{\eta} \hat{L}_i(\hat{\varphi}_i, \hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i), D_{eval}^i)$. In Algorithm 3, the learner runs $(\bar{K}-1)$-step gradient descent on problem (10) to obtain an approximation $x(\bar{K})$ of its optimal solution (line 3 in Algorithm 3) and then uses $x(\bar{K})$ to approximate the partial gradients (8)-(9) (lines 5-6 in Algorithm 3).

**Solution to challenge (iii).** We cannot obtain $\hat{\eta}_i^*(\theta, \omega, D_{tr}^i)$ but only an approximation $\hat{\eta}_i(\theta, \omega, D_{tr}^i, K)$. In practice, we use this approximation to substitute for $\hat{\eta}_i^*(\theta, \omega, D_{tr}^i)$ in (4)-(5). Similarly, we use the approximation $\hat{\eta}_i(\hat{\varphi}_i, \omega, D_{eval}^i, K)$ to substitute for $\hat{\eta}_i^*(\hat{\varphi}_i, \omega, D_{eval}^i)$ in (8)-(9). To quantify the approximation error of the hyper-gradients (4)-(5) caused by $\|\hat{\eta}_i(\cdot,\cdot,K) - \hat{\eta}_i^*(\cdot,\cdot)\|$, we exploit the Lipschitz continuity of the hyper-gradients with respect to $\eta$. Specifically, we first prove the Lipschitz continuity of the partial gradients (8)-(9) w.r.t. $\eta$ in Appendix A.7 and then prove the Lipschitz continuity of the first-order approximation (6)-(7) w.r.t. $\eta$ in Appendix A.8. Together, these imply the Lipschitz continuity of the hyper-gradients (4)-(5) w.r.t. $\eta$.
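Before the formal pseudocode, the core of Algorithm 3 can be sketched numerically (a toy symmetric positive-definite Hessian, not the paper's models): gradient descent on the quadratic (10) converges to the desired inverse-of-Hessian-vector product without ever forming a matrix inverse.

```python
# Minimal sketch of Algorithm 3's inner loop: solve (lam*I + H) x = g by
# gradient descent on the quadratic surrogate (10).
import numpy as np

def inverse_hvp(matvec, g, beta=0.1, K_bar=200):
    """Approximate matvec^{-1} @ g by minimizing 0.5 x^T A x - g^T x."""
    x = np.zeros_like(g)
    for _ in range(K_bar):
        x = x - beta * (matvec(x) - g)   # gradient of the quadratic in (10)
    return x

lam = 5.0
H = np.array([[2.0, 0.3], [0.3, 1.0]])   # toy Hessian of G_i
A = lam * np.eye(2) + H
g = np.array([1.0, -2.0])                # toy gradient of L_i

x_hat = inverse_hvp(lambda x: A @ x, g)
print("iterative solution:", x_hat)
print("direct solution   :", np.linalg.solve(A, g))
```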
**Algorithm 2** Hyper-gradient$(\theta, \omega, \hat{\varphi}_i, D_{tr}^i, D_{eval}^i, D_h^i)$

**Input:** Reward parameter $\theta$, cost parameter $\omega$, task-specific reward adaptation $\hat{\varphi}_i$, training set $D_{tr}^i$, evaluation set $D_{eval}^i$, the data set $D_h^i$ used to compute the second-order terms, perturbation magnitude $\delta$

**Output:** Approximate hyper-gradients $\hat{\Delta}_{\theta,i} - \alpha \nabla_{\theta,i}$ and $\hat{\Delta}_{\omega,i} - \alpha \nabla_{\omega,i}$

1: $\hat{\Delta}_{\theta,i}, \hat{\Delta}_{\omega,i} = \text{Partial-gradient}(\hat{\varphi}_i, \omega, D_{eval}^i, D_{tr}^i)$
2: $\hat{\Delta}_{\theta+,i}, \hat{\Delta}_{\omega+,i} = \text{Partial-gradient}(\theta + \delta \hat{\Delta}_{\theta,i}, \omega, D_{tr}^i, D_h^i)$
3: $\hat{\Delta}_{\theta-,i}, \hat{\Delta}_{\omega-,i} = \text{Partial-gradient}(\theta - \delta \hat{\Delta}_{\theta,i}, \omega, D_{tr}^i, D_h^i)$
4: $\nabla_{\theta,i} = (\hat{\Delta}_{\theta+,i} - \hat{\Delta}_{\theta-,i})/(2\delta), \quad \nabla_{\omega,i} = (\hat{\Delta}_{\omega+,i} - \hat{\Delta}_{\omega-,i})/(2\delta)$

**Algorithm 3** Partial-gradient$(\theta, \omega, D_1, D_2)$

**Input:** Reward parameter $\theta$, cost parameter $\omega$, data set $D_1$, data set $D_2$, step size $\beta$

**Output:** Approximate partial gradients $\hat{\Delta}_{\theta,i}, \hat{\Delta}_{\omega,i}$

1: Compute $\hat{\eta}_i(\theta, \omega, D_1, K)$ and initialize $x(0)$
2: for $k = 0, 1, \cdots, \bar{K}-1$ do
3: &nbsp;&nbsp;$x(k+1) = x(k) - \beta \big( [\lambda I + \nabla^2_{\eta\eta} \hat{G}_i(\hat{\eta}_i(\theta, \omega, D_1, K); \theta, D_1)]x(k) - \nabla_{\eta}\hat{L}_i(\theta, \hat{\eta}_i(\theta, \omega, D_1, K), D_2) \big)$
4: end for
5: $\hat{\Delta}_{\theta,i} = \nabla_{\theta}\hat{L}_i(\theta, \hat{\eta}_i(\theta, \omega, D_1, K), D_2) - \nabla^2_{\theta\eta}\hat{G}_i(\hat{\eta}_i(\theta, \omega, D_1, K); \theta, D_1)\, x(\bar{K})$
6: $\hat{\Delta}_{\omega,i} = \lambda x(\bar{K})$

## 4 THEORETICAL ANALYSIS

This section has two parts: the first part provides the convergence guarantee of Algorithm 1, and the second part quantifies the generalization error to an arbitrary new task.

### 4.1 CONVERGENCE GUARANTEE

Compared to standard stochastic gradient descent, the main difficulty of guaranteeing the convergence of Algorithm 1 lies in quantifying the approximation error of the hyper-gradients. The approximation error comes from three sources, which correspond to the three challenges in Subsection 3.1: (i) We cannot obtain the exact optimal solution $\hat{\eta}_i^*(\cdot,\cdot)$ of the lower-level problem (3) but only an approximation $\hat{\eta}_i(\cdot,\cdot,K)$. (ii) We cannot compute the inverse-of-Hessian term $[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i]^{-1}$ but use an iterative method to approximate the product $[\lambda I + \nabla^2_{\eta\eta} \hat{G}_i]^{-1} \nabla_{\eta} \hat{L}_i$ in Algorithm 3. This results in an error between the approximate partial gradients $\hat{\Delta}_{\theta,i}, \hat{\Delta}_{\omega,i}$ (i.e., the outputs of Algorithm 3) and the real partial gradients $\Delta_{\theta,i}, \Delta_{\omega,i}$ in (8)-(9). (iii) We use the first-order approximation (6)-(7) in Algorithm 2 to approximate the real hyper-gradients (4)-(5).
In what follows, we first sequentially quantify the three approximation errors identified in the last paragraph and then analyze the convergence of Algorithm 1. We start with the following assumption.

**Assumption 1.** (i) The parameterized reward function $r_\theta$ satisfies $|r_\theta(s,a)| \leq C_r$, $\|\nabla_\theta r_\theta(s,a)\| \leq \bar{C}_r$, and $\|\nabla^2_{\theta\theta} r_\theta(s,a)\| \leq \tilde{C}_r$ for any $(s,a) \in S \times A$ and any $\theta$, where $C_r, \bar{C}_r, \tilde{C}_r$ are positive constants; (ii) The parameterized cost function $c_\omega$ has similar properties with positive constants $C_c, \bar{C}_c, \tilde{C}_c$; (iii) The third- and fourth-order gradients of the reward and cost functions with respect to their parameters are bounded for any $(s,a)$ and $(\theta, \omega)$.

Note that Assumptions 1(i) and 1(ii) are standard in RL (Wang et al., 2019; Kumar et al., 2019; Zhang et al., 2020; Zheng et al., 2023) and IRL (Guan et al., 2021). Assumption 1(iii) is needed to exploit the Lipschitz continuity of the hyper-gradients. Moreover, bounded third-order gradients of the parameterized model, as in Assumption 1(iii), are commonly assumed in meta RL (Fallah et al., 2021a).

**Approximation error (i).** As proved in Appendix A.6, the function $G_i$ and its empirical approximation $\hat{G}_i$ using any data set are $C_{\nabla^2_\eta G}$-smooth for any task $T_i$, where $C_{\nabla^2_\eta G}$ is a positive constant whose expression is in Appendix A.6. Therefore, the lower-level objective function in (3) becomes $(\lambda - C_{\nabla^2_\eta G})$-strongly convex and $(\lambda + C_{\nabla^2_\eta G})$-smooth if $\lambda > C_{\nabla^2_\eta G}$. Choosing $\tau = \frac{1}{\lambda}$ and following the standard result for strongly convex and smooth objective functions (Nesterov, 2003; Boyd & Vandenberghe, 2004), we have $\|\hat{\eta}_i(\cdot,\cdot,K) - \hat{\eta}_i^*(\cdot,\cdot)\| \leq O((C_{\nabla^2_\eta G}/\lambda)^K)$.

**Approximation error (ii).** We next quantify the approximation error of the partial gradients.

**Lemma 1.** Suppose Assumption 1 holds and let $\beta = \frac{1}{\lambda}$ where $\lambda > C_{\nabla^2_\eta G}$; then the outputs of Algorithm 3 satisfy:

$$\|\hat{\Delta}_{\theta,i} - \Delta_{\theta,i}\| \leq O\left(\left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^K + \left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^{\bar{K}}\right), \quad \|\hat{\Delta}_{\omega,i} - \Delta_{\omega,i}\| \leq O\left(\left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^K + \left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^{\bar{K}}\right).$$

Lemma 1 shows that the approximation error of the partial gradients diminishes as we increase the iteration number $K$ of solving the lower-level problem and the iteration number $\bar{K}$ in Algorithm 3.

**Approximation error (iii).** With the approximation error of the partial gradients, we can quantify the approximation error of the hyper-gradients.

**Lemma 2.** Suppose the conditions in Lemma 1 hold; then the outputs of Algorithm 2 satisfy:

$$\|\hat{\Delta}_{\theta,i} - \alpha \nabla_{\theta,i} - g_{\theta,i}\| \leq O\left(\left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^K + \left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^{\bar{K}} + \delta\right), \quad \|\hat{\Delta}_{\omega,i} - \alpha \nabla_{\omega,i} - g_{\omega,i}\| \leq O\left(\left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^K + \left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^{\bar{K}} + \delta\right).$$
Lemma 2 indicates that the approximation error of the hyper-gradients can be made arbitrarily small if we solve the lower-level problem (3) for enough iterations, run Algorithm 3 for enough iterations, and choose a sufficiently small $\delta$.

To reason about the convergence of Algorithm 1, we introduce the notion of an $\epsilon$-approximate first-order stationary point ($\epsilon$-FOSP) (Fallah et al., 2020): the variable $(\theta, \omega)$ is an $\epsilon$-FOSP if $\left\|\frac{1}{m}\sum_{i=1}^{m} \nabla L_i(\varphi_i, \eta_i^*(\varphi_i, \omega))\right\| \leq \epsilon$, where $\nabla L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)) \triangleq \left[(\frac{\partial}{\partial \theta} L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)))^\top, (\frac{\partial}{\partial \omega} L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)))^\top\right]^\top$.

**Theorem 1 (Convergence of Algorithm 1).** Suppose the conditions in Lemma 2 hold. Let $\alpha \in [0, \frac{1}{D_\theta}]$ and $\alpha(n) = \frac{\bar{\alpha}}{(n+1)^{\rho}}$, where $\bar{\alpha} \in (0, \frac{1}{C_f + 2}]$, $\rho \in (\frac{1}{2}, 1)$, and $D_\theta$ and $C_f$ are positive constants whose existence is proved in Appendices A.8 and A.9, respectively. Then Algorithm 1 reaches the set of $\epsilon$-FOSP, i.e.,

$$E\left[\left\|\frac{1}{m} \sum_{i=1}^{m} \nabla L_i(\varphi_i(n), \eta_i^*(\varphi_i(n), \omega(n)))\right\|\right] \leq \epsilon + O\left(\left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^K + \left(\frac{C_{\nabla^2_\eta G}}{\lambda}\right)^{\bar{K}} + \delta + \frac{1}{\min_i \sqrt{|D_{tr}^i|}}\right)$$

after at most $N = \min(C_1, C_2)$ iterations. The expressions of the positive constants $C_1$ and $C_2$ are in Appendix A.9.

Theorem 1 shows that Algorithm 1 reaches the set of $\epsilon$-FOSP with iteration complexity $O(\frac{1}{\epsilon^2})$. Moreover, to reduce $E[\|\frac{1}{m}\sum_{i=1}^{m} \nabla L_i\|]$ and reach the set of $\epsilon$-FOSP within fewer iterations of Algorithm 1, we have the following choices: (i) increase the iteration number $K$ of solving the lower-level problem (3) and the iteration number $\bar{K}$ in Algorithm 3; (ii) choose a smaller $\delta$ in the first-order approximations (6)-(7); (iii) sample a larger batch $B$ of training tasks at each iteration $n$ in Algorithm 1; (iv) choose a larger size $|D_{tr}^i|$ of training data for each training task $T_i$.

### 4.2 GENERALIZATION ANALYSIS

The goal of meta-learning is to learn good meta-priors such that the reward and cost functions adapted from the learned meta-priors have good performance on new tasks. Theorem 1 shows that Algorithm 1 can find meta-priors $(\bar{\theta}, \bar{\omega})$ such that the average loss function of the $m$ training tasks reaches the set of $\epsilon$-FOSP. However, it does not provide insights into how the task-specific reward and cost adaptations $(\hat{\varphi}_{m+1}, \hat{\eta}_{m+1}(\hat{\varphi}_{m+1}, \bar{\omega}, D_{m+1}))$, adapted from the learned meta-priors $(\bar{\theta}, \bar{\omega})$, perform on an arbitrary new task $T_{m+1}$, where $D_{m+1}$ is the small data set of the new task $T_{m+1}$. Given that the loss function in our problem is the negative log-likelihood $L_i$, we use $L_{m+1}(\theta, \omega)\big|_{\theta = \hat{\varphi}_{m+1},\, \omega = \hat{\eta}_{m+1}(\hat{\varphi}_{m+1}, \bar{\omega}, D_{m+1})}$ as the metric to reason about the performance of the task-specific adaptations $(\hat{\varphi}_{m+1}, \hat{\eta}_{m+1}(\hat{\varphi}_{m+1}, \bar{\omega}, D_{m+1}))$ on an arbitrary new task $T_{m+1}$.
We start our analysis with the definition of the stationary state-action distribution. For a given policy $\pi$, the corresponding stationary state-action distribution is $\mu^\pi(s, a) \triangleq (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t P_t^\pi(s, a)$, where $P_t^\pi(s, a)$ is the probability of policy $\pi$ visiting $(s, a)$ at time $t$. We then define the distance between two tasks $T_i$ and $T_j$ as $d(\mu^{\pi_i}, \mu^{\pi_j}) \triangleq \int_{s \in S} \int_{a \in A} |\mu^{\pi_i}(s, a) - \mu^{\pi_j}(s, a)|\, da\, ds$. Recall that $\pi_i$ is the expert's policy in task $T_i$.

**Remark on the definition of the task distance.** While it seems natural to use the distance between the reward functions and the distance between the cost functions to define the distance between different tasks, this kind of definition can cause ambiguity because different reward and cost functions may result in the same task. For example, in an unconstrained environment, multiplying the reward function by a constant does not change the task because this leads to the same optimal policy.

**Proposition 1.** For any new task $T_{m+1}$ and any parameters $(\theta, \omega)$, the following relation holds:

$$\left\|\frac{1}{m} \sum_{i=1}^{m} \nabla L_i(\theta, \omega) - \nabla L_{m+1}(\theta, \omega)\right\| \leq O\left(d\left(\frac{1}{m} \sum_{i=1}^{m} \mu^{\pi_i}, \mu^{\pi_{m+1}}\right)\right).$$

**Theorem 2.** For an arbitrary new task $T_{m+1}$, the task-specific reward and cost adaptations $(\hat{\varphi}_{m+1}, \hat{\eta}_{m+1}(\hat{\varphi}_{m+1}, \bar{\omega}, D_{m+1}))$ adapted from the learned meta-priors $(\bar{\theta}, \bar{\omega})$ have the property:

$$E\left[\left\|\nabla L_{m+1}(\theta, \omega)\big|_{\theta=\hat{\varphi}_{m+1},\, \omega=\hat{\eta}_{m+1}(\hat{\varphi}_{m+1}, \bar{\omega}, D_{m+1})}\right\|\right] \leq O\left(\epsilon + \frac{1}{m} \sum_{i=1}^{m} d(\mu^{\pi_i}, \mu^{\pi_{m+1}}) + d\left(\frac{1}{m} \sum_{i=1}^{m} \mu^{\pi_i}, \mu^{\pi_{m+1}}\right)\right).$$

Theorem 2 shows that if the new task's stationary state-action distribution is sufficiently close to the training tasks', the task-specific adaptations $(\hat{\varphi}_{m+1}, \hat{\eta}_{m+1})$ are near-stationary.

**Theorem 3.** If the learned meta-priors $(\bar{\theta}, \bar{\omega})$ of Algorithm 1 satisfy $E[\frac{1}{m} \sum_{i=1}^{m} L_i(\bar{\varphi}_i, \eta_i^*(\bar{\varphi}_i, \bar{\omega}))] - \min_{\theta, \omega} \frac{1}{m} \sum_{i=1}^{m} L_i(\varphi_i, \eta_i^*(\varphi_i, \omega)) \leq \epsilon$, where $\varphi_i = \theta - \alpha \frac{\partial}{\partial \theta} L_i(\theta, \eta_i^*(\theta, \omega))$ and $\bar{\varphi}_i$ is the corresponding adaptation at $(\bar{\theta}, \bar{\omega})$, then it holds that

$$E[L_{m+1}(\hat{\varphi}_{m+1}, \hat{\eta}_{m+1}(\hat{\varphi}_{m+1}, \bar{\omega}, D_{m+1}))] - \min_{\theta, \omega} L_{m+1}(\theta, \omega) \leq \epsilon + O\left(\frac{1}{m} \sum_{i=1}^{m} d(\mu^{\pi_i}, \mu^{\pi_{m+1}}) + d\left(\frac{1}{m} \sum_{i=1}^{m} \mu^{\pi_i}, \mu^{\pi_{m+1}}\right)\right).$$

Theorem 3 shows that if the learned meta-priors $(\bar{\theta}, \bar{\omega})$ are $\epsilon$-optimal and the new task is close to the training tasks, the task-specific adaptations $(\hat{\varphi}_{m+1}, \hat{\eta}_{m+1})$ are near-optimal.
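To make the task distance $d$ used in Proposition 1 and Theorems 2-3 (and Theorem 4 below) concrete, here is a toy sketch with a hypothetical tabular CMDP and random policies (none of this is the paper's code): it Monte-Carlo estimates each policy's discounted stationary state-action distribution and computes the $L_1$ distance between the estimates.

```python
# Toy sketch: estimate mu^pi(s, a) = (1 - gamma) * sum_t gamma^t P_t^pi(s, a)
# by rollouts, then compute the task distance d(mu^pi_i, mu^pi_j).
import numpy as np

def stationary_distribution(policy, P, P0, gamma=0.9, horizon=100, episodes=300, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    S, A = policy.shape
    mu = np.zeros((S, A))
    for _ in range(episodes):
        s = rng.choice(S, p=P0)
        for t in range(horizon):
            a = rng.choice(A, p=policy[s])
            mu[s, a] += (1 - gamma) * gamma ** t   # discounted visitation weight
            s = rng.choice(S, p=P[s, a])
    return mu / episodes

rng = np.random.default_rng(1)
S, A = 4, 2
P = rng.dirichlet(np.ones(S), size=(S, A))   # shared transition kernel P(s'|s, a)
P0 = np.ones(S) / S
pi_i = rng.dirichlet(np.ones(A), size=S)     # expert policy of task T_i
pi_j = rng.dirichlet(np.ones(A), size=S)     # expert policy of task T_j

mu_i = stationary_distribution(pi_i, P, P0, rng=rng)
mu_j = stationary_distribution(pi_j, P, P0, rng=rng)
print("task distance d(mu^pi_i, mu^pi_j):", np.abs(mu_i - mu_j).sum())
```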
If the reward and cost functions are linear, we have the following stronger results:

**Theorem 4.** If the expert's reward and cost functions and the parameterized reward and cost functions $r_\theta, c_\omega$ are linear, we have that (i) $E[|J_{r_{m+1}}(\pi_{\hat{\varphi}_{m+1};\hat{\eta}_{m+1}}) - J_{r_{m+1}}(\pi_{m+1})|] \leq O(\epsilon + \frac{1}{m} \sum_{i=1}^{m} d(\mu^{\pi_i}, \mu^{\pi_{m+1}}) + d(\frac{1}{m} \sum_{i=1}^{m} \mu^{\pi_i}, \mu^{\pi_{m+1}}))$; (ii) $E[|J_{c_{m+1}}(\pi_{\hat{\varphi}_{m+1};\hat{\eta}_{m+1}}) - J_{c_{m+1}}(\pi_{m+1})|] \leq O(\epsilon + \frac{1}{m} \sum_{i=1}^{m} d(\mu^{\pi_i}, \mu^{\pi_{m+1}}) + d(\frac{1}{m} \sum_{i=1}^{m} \mu^{\pi_i}, \mu^{\pi_{m+1}}))$.

Theorem 4 shows that (i) the cumulative reward difference and (ii) the cumulative cost difference between the adapted policy $\pi_{\hat{\varphi}_{m+1};\hat{\eta}_{m+1}}$ and the expert's policy $\pi_{m+1}$ on an arbitrary new task $T_{m+1}$ can be sufficiently small if the new task is close to the training tasks.

## 5 EXPERIMENT

This section includes two classes of experiments to validate the effectiveness of M-ICRL. The first experiment is conducted on a physical drone, and the second experiment is conducted in Mujoco. Due to the space limit, the experiment details are included in Appendix B.

### 5.1 DRONE NAVIGATION WITH OBSTACLES

We conduct a navigation experiment on an AR.Drone 2.0 (Figure 1), where the drone (in the yellow box) needs to navigate to the destination (in the green box) while avoiding collision with the obstacles (in the red box). We use an indoor motion capture system, "Vicon", to record the trajectories of the drone. For different tasks, we vary the locations of the goal and the obstacles. Given that there is no ground-truth reward in this experiment, we use two metrics, "constraint violation rate" (CVR) and "success rate" (SR), where CVR is the percentage of rollouts in which the learned policy collides with any obstacle and SR is the percentage of rollouts in which the learned policy reaches the destination while avoiding the obstacles. We use 50 training tasks and 10 test tasks, where each test task has only one demonstration. We use three baselines for comparison: ICRL (Liu & Zhu, 2022), which directly learns from the single demonstration without meta-priors; ICRL(pre), which naively pre-trains meta-priors by maximizing the likelihood across all the demonstrations of all the training tasks; and Meta-IRL (Xu et al., 2019), which only learns a reward meta-prior using MAML. We include the experiment results in the "Drone" rows of Table 1. The experiment details are included in Appendix B.

### 5.2 MUJOCO EXPERIMENT

We also conduct three experiments in Mujoco: Swimmer, HalfCheetah, and Walker. Given that Mujoco can output the ground-truth reward, we use cumulative reward (CR) in place of the metric SR. Since there are no constraints in the original Mujoco environments, we add several constraints to the three Mujoco environments. The experiment details are in Appendix B.
Table 1: Results of the drone navigation and Mujoco experiments.

| Task | Metric | M-ICRL | ICRL | ICRL(pre) | Meta-IRL | Expert |
|---------------|--------|--------------|-------------|-------------|--------------|-------------|
| Drone | SR | 0.96 ± 0.02 | 0.62 ± 0.07 | 0.71 ± 0.06 | 0.45 ± 0.10 | 1.00 ± 0.00 |
| | CVR | 0.02 ± 0.02 | 0.16 ± 0.10 | 0.11 ± 0.08 | 0.33 ± 0.12 | 0.00 ± 0.00 |
| Swimmer | CR | 322.56 ± 48.68 | 76.44 ± 18.26 | 199.03 ± 53.24 | 113.66 ± 32.51 | 376.10 ± 51.51 |
| | CVR | 0.04 ± 0.02 | 0.22 ± 0.13 | 0.16 ± 0.06 | 0.35 ± 0.18 | 0.00 ± 0.00 |
| HalfCheetah | CR | 228.78 ± 54.23 | 60.74 ± 32.63 | 156.89 ± 50.47 | 108.05 ± 36.89 | 264.00 ± 165.56 |
| | CVR | 0.03 ± 0.01 | 0.28 ± 0.19 | 0.20 ± 0.11 | 0.31 ± 0.10 | 0.00 ± 0.00 |
| Walker | CR | 712.40 ± 96.53 | 144.79 ± 66.37 | 311.86 ± 56.99 | 165.86 ± 70.08 | 752.40 ± 84.71 |
| | CVR | 0.00 ± 0.00 | 0.26 ± 0.18 | 0.22 ± 0.09 | 0.42 ± 0.26 | 0.00 ± 0.00 |

From Table 1, we observe that M-ICRL achieves the best performance in all four experiments. Meta-IRL has a much higher constraint violation rate than the other three algorithms. This shows the benefits of learning both the reward function and constraints. ICRL(pre), which simply learns meta-priors across all the demonstrations of all the tasks, performs poorly. This illustrates the benefits of our meta-learning design for M-ICRL. We discuss the experiment results in detail in Appendix B.

## 6 CONCLUSION AND FUTURE WORKS

We propose M-ICRL, the first theoretical framework that can learn the reward and cost functions of an expert from few demonstrations by first learning meta-priors from other related tasks. It is shown both theoretically and empirically that M-ICRL is effective in adapting to new tasks from few demonstrations. Despite its benefits, one limitation is that M-ICRL assumes that the states and actions are fully observable; however, this may not hold in some real-world problems due to practical issues such as noise. A future direction is to extend M-ICRL to partially observable MDPs (POMDPs).

## 7 ACKNOWLEDGEMENT

This work is partially supported by the National Science Foundation through grants ECCS 1846706 and ECCS 2140175. We would like to thank the reviewers for their insightful and constructive suggestions.

REFERENCES

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In International Conference on Machine Learning, pp. 1–8, 2004.

Stephen P Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

Suiyao Chen, Jing Wu, Naira Hovakimyan, and Handong Yao. Recontab: Regularized contrastive representation learning for tabular data. In NeurIPS Second Table Representation Learning Workshop, 2023.

Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, and Massimiliano Pontil. Learning-to-learn stochastic gradient descent with biased regularization. In International Conference on Machine Learning, pp. 1566–1575, 2019.

Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. On the convergence theory of gradient-based model-agnostic meta-learning algorithms. In International Conference on Artificial Intelligence and Statistics, pp. 1082–1092, 2020.

Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, and Asuman Ozdaglar. On the convergence theory of debiased model-agnostic meta-reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3096–3107, 2021a.

Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Generalization of model-agnostic meta-learning algorithms: Recurring and unseen tasks. In Advances in Neural Information Processing Systems, pp. 5469–5480, 2021b.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126–1135, 2017. Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adverserial inverse reinforcement learning. In International Conference on Learning Representations, 2018. Ziwei Guan, Tengyu Xu, and Yingbin Liang. When will generative adversarial imitation learning algorithms attain global convergence. In International Conference on Artificial Intelligence and Statistics, pp. 1117–1125, 2021. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In International Conference on Machine Learning, pp. 1352–1361, 2017. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861–1870, 2018. Hongrong Huang and Juergen Sturm. Tum simulator. ROS package at http://wiki.ros.org/tum_simulator, 2014.
Dxl0EuFjlf
Similar to question 1, in many cases, the amplitude shifting loss will compete against the standard MSE loss. For some problems, MSE loss will be optimal and, for the other cases, probably the amplitude shifting loss makes sense. However, how to decide which one to use?
TILDE-Q: A TRANSFORMATION INVARIANT LOSS FUNCTION FOR TIME-SERIES FORECASTING

Anonymous authors
Paper under double-blind review

ABSTRACT

Time-series forecasting has gained increasing attention in the field of artificial intelligence due to its potential to address real-world problems across various domains, including energy, weather, traffic, and economy. While time-series forecasting is a well-researched field, predicting complex temporal patterns such as sudden changes in sequential data still poses a challenge for current models. This difficulty stems from minimizing $L_p$-norm distances as loss functions, such as mean absolute error (MAE) or mean squared error (MSE), which are ill-suited both for modeling intricate temporal dynamics and for capturing signal shape. Furthermore, these functions often cause models to behave aberrantly and generate results that are uncorrelated with the original time-series. Consequently, the development of a shape-aware loss function that goes beyond mere point-wise comparison is essential. In this paper, we examine the definitions of shape and distortions, which are crucial for shape-awareness in time-series forecasting, and provide a design rationale for a shape-aware loss function. Based on this design rationale, we propose a novel, compact loss function called TILDE-Q (Transformation Invariant Loss function with Distance EQuilibrium) that not only handles amplitude and phase distortions but also allows models to capture the shape of time-series sequences. Furthermore, TILDE-Q supports the simultaneous modeling of periodic and nonperiodic temporal dynamics. We evaluate the efficacy of TILDE-Q by conducting extensive experiments under both periodic and nonperiodic conditions with various models, ranging from naive to state-of-the-art. The experimental results show that models trained with TILDE-Q surpass those trained with other metrics, such as MSE and DILATE, in various real-world applications, including electricity, traffic, economics, weather, and electricity transformer temperature (ETT).

1 INTRODUCTION

Time-series forecasting has been a core problem across various domains, including the traffic domain (Li et al., 2018; Lee et al., 2020), economy (Zhu & Shasha, 2002), and disease propagation analysis (Matsubara et al., 2014). One of the key challenges in time-series forecasting is the modeling of complex temporal dynamics (e.g., non-stationary signals and periodicity). Temporal dynamics, or intuitively the shape of a signal, are among the most emphasized concepts in the time-series domain, with examples such as the rush hour in traffic data or abnormal usage of electricity (Keogh et al., 2003; Bakshi & Stephanopoulos, 1994; Weigend & Gershenfeld, 1994; Wu et al., 2021; Zhou et al., 2022). Although deep learning methods are an appealing solution for modeling complex non-linear temporal dependencies and nonstationary signals, recent studies have revealed that even deep learning is often inadequate for modeling temporal dynamics. To properly model temporal dynamics, novel deep learning approaches, such as Autoformer (Wu et al., 2021) and FEDformer (Zhou et al., 2022), have proposed input sequence decomposition. Still, they are trained with $L_p$-norm-based loss functions, which cannot properly model temporal dynamics, as shown in Fig. 1 (top). On the other hand, Le Guen & Thome (2019) attempt to model sudden changes in a timely and accurate manner with dynamic time warping (DTW), and Bica et al.
(2020) adopt domain adversarial training to learn balanced representations, i.e., treatment-invariant representations over time. Le Guen & Thome (2019) and Bica et al. (2020) try to capture the shape but still have some limitations, as depicted in Fig. 1 (middle), implying the need for further investigation of shape.

Figure 1: Ground truth and forecasting results of the Informer model with three training metrics: (top) MSE, (middle) a DTW-based loss, and (bottom) the TILDE-Q loss function. In the top and middle rows, the blue boxes indicate the original intention of the loss function (desired) and its misbehaviors.

The identification of shape, denoting the pattern of time-series data within a given time interval, plays an important role in addressing the aforementioned limitations in the time-series forecasting problem. It can provide valuable information, such as rise, drop, trough, peak, and plateau. We refer to a prediction as informative when it can appropriately model the shape. In real-world applications, including economics, informative prediction is invaluable for decision-making. To achieve such informative forecasting, a model should account for shape instead of solely aiming to forecast an accurate value for each time step. However, existing methods inadequately consider the shape (Wu et al., 2021; Zhou et al., 2022; Bica et al., 2020; Le Guen & Thome, 2019). Moreover, deep learning models tend to opt for an easy learning path (Karras et al., 2019), yielding inaccurate and uninformative forecasting results that disregard the characteristics of time-series data.

Fig. 1 illustrates three real forecasting results obtained with Informer (Zhou et al., 2021) and different training metrics. When the mean squared error (MSE) is used as an objective, the model aims to reduce the gap between the prediction and the ground truth at each time step. This "point-wise" distance-based optimization has little ability to model shape, resulting in uninformative predictions regardless of temporal dynamics (Fig. 1 (top)); the model rarely provides information about the time-series. In contrast, if both the gap and the shape of the prediction and the ground truth are taken into account, the model can achieve high accuracy with proper temporal dynamics, as shown in Fig. 1 (bottom). Consequently, time-series forecasting requires a loss function that considers both point-wise distance (i.e., the traditional goal) and shape.

In this work, we aim to design a novel objective function that guides models toward improved forecasting performance by learning shapes in time-series data. To design a shape-aware loss function, we review the existing literature (Esling & Agon, 2012; Bakshi & Stephanopoulos, 1994; Keogh, 2003) and explore the concepts of shapes and distortions that impede appropriate measurement of similarity between two time-series in terms of shape (Sec. 3.1, Sec. 3.2, and Sec. 3.3). Based on our investigation, we propose the necessary conditions for constructing an objective function for shape-aware time-series forecasting (Sec. 4.1). Subsequently, we present a novel loss function, TILDE-Q (Transformation Invariant Loss function with Distance EQuilibrium), which enables shape-aware representation learning by utilizing three loss terms that are invariant to distortions (Sec. 4.2). For evaluation, we conduct extensive experiments with state-of-the-art deep learning models trained with TILDE-Q. The experimental results indicate that TILDE-Q is model-agnostic and outperforms MSE and DILATE on both MSE and shape-related metrics.
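The point-wise limitation discussed above can be reproduced with a tiny numeric example (synthetic signals, not the paper's data): under MSE, an uninformative flat prediction can score better than a prediction that reproduces the shape with a phase shift.

```python
# Tiny illustration: MSE prefers a flat, uninformative forecast over a
# shape-correct but delayed one (synthetic signals).
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)
truth = np.sin(t)
flat = np.zeros_like(t)          # predicts the mean, ignores all dynamics
shifted = np.sin(t - 1.5)        # correct shape, but delayed

print("MSE(flat)   :", np.mean((truth - flat) ** 2))     # about 0.5
print("MSE(shifted):", np.mean((truth - shifted) ** 2))  # about 0.93 here, i.e., worse
```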
Contributions In summary, our study makes the following contributions. (1) We delve into the concepts of shape awareness and distortion invariance in the context of time-series forecasting. By thoroughly investigating these distortions, we enhance the understanding of their impact on time-series forecasting problems. (2) We propose and implement TILDE-Q, which is invariant to three distortions and achieves shape-awareness, empowering informative forecasting in a timely manner. (3) We empirically demonstrate that the proposed TILDE-Q allows models to achieve higher accuracy compared to models trained with other existing metrics, such as MSE and DILATE.

2 RELATED WORK

2.1 TIME-SERIES FORECASTING Many time-series forecasting methods are available, ranging from traditional models, such as the ARIMA model (Box et al., 2015) and the hidden Markov model (Pesaran et al., 2004), to recent deep learning models. In this section, we briefly describe the recent deep learning models for time-series forecasting. Motivated by the huge success of recurrent neural networks (RNNs) (Clevert et al., 2016; Li et al., 2018; Yu et al., 2017), many novel deep learning architectures have been developed to improve forecasting performance. To effectively capture long-term dependencies, which are a limitation of RNNs, convolutional neural networks (CNNs) have been employed (Stoller et al., 2020). However, many identical CNN layers must be stacked to capture long-term dependencies (Zhou et al., 2021). Attention-based models, including Transformer (Vaswani et al., 2017) and Informer (Zhou et al., 2021), have been another popular research direction in time-series forecasting. Although these models effectively capture temporal dependencies, they incur high computational costs and often struggle to obtain appropriate temporal information (Wu et al., 2021). To cope with this problem, Wu et al. (2021); Zhou et al. (2022) have adopted input decomposition, which helps models better encode appropriate information. Other state-of-the-art models adopt neural memory networks (Kaiser et al., 2017; Sukhbaatar et al., 2015; Madotto et al., 2018; Lee et al., 2022), which refer to historical data stored in memory to generate meaningful representations.

2.2 TRAINING METRICS Conventionally, mean squared error (MSE), the $L_p$ norm, and their variants are the mainstream metrics used to optimize forecasting models. However, they are not optimal for training forecasting models (Esling & Agon, 2012) because time-series are temporally continuous. Moreover, the $L_p$ norm provides little information about temporal correlation among time-series data. To better model temporal dynamics in time-series data, researchers have used differentiable, approximated dynamic time warping (DTW) as an alternative to MSE (Cuturi & Blondel, 2017; Abid & Zou, 2018; Mensch & Blondel, 2018). However, using DTW as a loss function causes the temporal localization of changes to be ignored. Recently, Le Guen & Thome (2019) have suggested DILATE, a training metric to catch sudden changes in nonstationary signals in a timely manner with a smooth approximation of DTW and a penalized temporal distortion index (TDI). To guarantee DILATE's operation in a timely manner, the penalized TDI issues a harsh penalty when predictions show high temporal distortion. However, the TDI relies on the DTW path, and DTW often shows misalignment because of noise and scale sensitivity. Thus, DILATE often loses its advantage on complex data, showing disadvantages during training.
In this paper, we discuss distortions and transformation invariances and design a new loss function that enables models to learn shapes in the data and produce noise-robust forecasting results.

3 PRELIMINARY In this section, we investigate common distortions, focusing on the goals of time-series forecasting (i.e., modeling temporal dynamics and accurate forecasting). To clarify the concepts of time-series forecasting and related terms, we first define the notations and terms used (Sec. 3.1). We then discuss common distortions in time-series from the transformation perspective that need to be considered when building a shape-aware loss function (Sec. 3.2) and describe how other loss functions (e.g., dynamic time warping (DTW) and the temporal distortion index (TDI)) handle shapes during learning (Sec. 3.3). We will discuss the conditions for effective time-series forecasting in the next section (Sec. 4.1).

3.1 NOTATIONS AND DEFINITIONS Let $X_t$ denote a data point at a time step $t$. We define a time-series forecasting problem as follows:

**Definition 3.1.** Given a $T$-length historical time-series $X = [X_{t-T+1}, \ldots, X_t], X_i \in \mathbb{R}^F$ at time $i$ and a corresponding $T'$-length future time-series $Y = [Y_{t+1}, \ldots, Y_{t+T'}], Y_i \in \mathbb{R}^C$, time-series forecasting aims to learn the mapping function $f : \mathbb{R}^{T \times F} \rightarrow \mathbb{R}^{T' \times C}$.

To distinguish between the label (i.e., ground truth) and prediction time-series, we denote the label data as $Y$ and the prediction data as $\hat{Y}$. Next, we set up two goals for time-series forecasting, which require not only precise but also informative forecasting (Wu et al., 2021; Zhou et al., 2022; Le Guen & Thome, 2019), as follows: - The mapping function \( f \) should be learned to point-wisely reduce the distance between \( \hat{Y} \) and \( Y \); - The output \( \hat{Y} \) should have temporal dynamics similar to those of \( Y \). Temporal dynamics are informative patterns in a time-series, such as rise, drop, peak, and plateau. Optimization for point-wise distance reduction is the conventional approach in the deep learning domain, achieved using the MAE or MSE. However, in real-world problems, such as traffic speed or stock market prediction, accurate forecasting of temporal dynamics is required. Esling & Agon (2012) also emphasized the measurement of temporal dynamics, as "...allowing the recognition of perceptually similar objects even though they are not mathematically identical." In this paper, we define temporal dynamics as follows:

**Definition 3.2.** Temporal dynamics (or shapes) are informative periodic and nonperiodic patterns in time-series data.

In this work, we aim to design a shape-aware loss function that satisfies both goals. To this end, we first discuss distortions that two time-series with similar shapes can have.

**Definition 3.3.** Given two time-series \( F \) and \( G \) having similar shapes but not being mathematically identical, let \( H \) be a transformation that satisfies \( F = H(G) \). Then, the time-series \( F \) and \( G \) are considered to have a distortion, which can be represented by the transformation \( H \).

A distortion can generally be classified as a temporal distortion (i.e., warping) or an amplitude distortion (i.e., scaling), depending on its dimension (time or amplitude). Existing distortions in the data lead to misbehavior of the model, as they distort the measurements to be inaccurate.
For example, if we have two time-series \( F \) and \( G = F + k \), which have similar shapes but different means, \( G \) preserves many of the temporal dynamics of \( F \). However, measurements often evaluate \( F \) and \( G \) as completely different signals and misguide the model during training (e.g., measuring the distance between \( F \) and \( G \) with MSE). As such, it is important to have measurements that consider similar shapes invariant to distortion. We define a measurement for distortion as:

**Definition 3.4.** Let transformation \( H \) represent a distortion. Then, we call a measurement \( D \) invariant to \( H \) if \( \exists \delta > 0 : D(T, H(T)) < \delta \) for any time-series \( T \).

### 3.2 Time-Series Distortions in Transformation Perspectives

Distortion, a gap between two similar time-series, affects shape capturing in time-series data. Thus, it is important to investigate different distortions and their impacts on representation learning. There are six common time-series distortions that models encounter during learning (Esling & Agon, 2012; Batista et al., 2014; Berkhin, 2006; Warren Liao, 2005; Kerr et al., 2008): Amplitude Shifting, Phase Shifting, Uniform Amplification, Uniform Time Scaling, Dynamic Amplification, and Dynamic Time Scaling. Next, we explain each common time-series distortion in terms of transformation with an \( n \)-length time-series \( F(t) = [f(t_1), f(t_2), \ldots, f(t_n)] \), where \( t = [t_1, t_2, \ldots, t_n] \). Fig. 2 presents example distortions, categorized by the amplitude and time dimensions.

• **Amplitude Shifting** describes how much a time-series shifts against another time-series. This can be described with two time-series and the degree of shifting \( k \): \[ G(t) = F(t) + k = [f(t_1) + k, \ldots, f(t_n) + k], \] where \( k \in \mathbb{R} \) is constant.

• **Phase Shifting** is the same type of transformation (i.e., translation) as amplitude shifting, but it occurs along the temporal dimension. This distortion can be represented by two time-series functions with the degree of shift \( k \): \[ G(t) = F(t + k) = [f(t_1 + k), \ldots, f(t_n + k)], \] where \( k \in \mathbb{R} \) is constant. Cross-correlation (Paparrizos & Gravano, 2015; Vlachos et al., 2005) is the most popular measure that is invariant to this distortion.

• **Uniform Amplification** is a transformation that changes the amplitude by multiplication with \( k \in \mathbb{R} \). This distortion can be described with two functions and a multiplication factor \( k \): \[ G(t) = k \cdot F(t) = [k \cdot f(t_1), \ldots, k \cdot f(t_n)]. \]

• **Uniform Time Scaling** refers to a uniformly shortened or lengthened \( F(t) \) on the temporal axis. This distortion can be represented as \[ G(t) = [g(t_1), \ldots, g(t_m)], \] where \( g(t_i) = f(t_{k \cdot i}) \) and \( k \in \mathbb{R}^+ \). Although Keogh et al. (2004) have proposed uniform time warping methods to handle this distortion, it remains a challenging distortion type to measure because of the difficulty of identifying the scaling factor \( k \) without testing all possible cases (Keogh, 2003).

• **Dynamic Amplification** is any distortion that occurs through non-zero multiplication along the amplitude dimension. This distortion can be described as follows: \[ G(t) = H(t) \cdot F(t) = [h(t_1) \cdot f(t_1), \ldots, h(t_n) \cdot f(t_n)] \] with function \( h(t) \), such that \( \forall t \in T, h(t) \neq 0 \).
Local amplification is representative of such distortions, which still remain challenging to solve.

• **Dynamic Time Scaling** refers to any transformation that dynamically lengthens or shortens signals along the temporal dimension, including local time scaling (Batista et al., 2014) and occlusion (Batista et al., 2014; Vlachos et al., 2003). It can be represented as follows: \[ G(t) = F(h(t)) = [f(h(t_1)), \ldots, f(h(t_n))], \] where \( h(t) \) is a positive, strictly increasing function. DTW (Bellman & Kalaba, 1959; Berndt & Clifford, 1994; Keogh & Ratanamahatana, 2005) is the most popular technique invariant to this distortion. Das et al. (1997) have also introduced the longest common subsequence (LCSS) algorithm to tackle occlusion, noise, and outliers in this distortion.

Clustering (Bellman & Kalaba, 1959; Batista et al., 2014; Paparrizos & Gravano, 2015; Berkhin, 2006; Warren Liao, 2005; Kerr et al., 2008) and classification (Xi et al., 2006; Batista et al., 2014; Srisai & Ratanamahatana, 2009) tasks that consider shapes have been extensively studied. However, only a few studies exist for time-series forecasting, including Le Guen & Thome (2019), which utilizes DTW and TDI for modeling temporal dynamics. Next, we describe MSE and DILATE, the latter proposed by Le Guen & Thome (2019), and discuss their invariance to distortions.

### 3.3 Distortion Handling in Current Time-Series Forecasting Objectives

Many measurement metrics have been used in the time-series forecasting domain, and those based on the \( L_p \) distance, including the Euclidean distance, are widely used to handle time-series data. However, such metrics are not invariant to the aforementioned distortions (Ding et al., 2008; Le Guen & Thome, 2019) because of their point-wise mapping. In particular, since the \( L_p \) distance compares values per time step, it cannot handle temporal distortions appropriately and is vulnerable to data scaling. Le Guen & Thome (2019) have proposed a loss function called DILATE to overcome this inadequacy of the \( L_p \) distance metric by recognizing temporal dynamics with DTW and TDI. In terms of transformation, DILATE handles dynamic time scaling, especially local time scaling, with DTW, and phase shifting with a penalized TDI, defined as follows: \[ L_{DILATE}(\hat{y}_t, y_t) := -\gamma \log \left( \sum_{A \in \mathcal{A}_{k,k}} \exp\left(-\frac{\langle A, \alpha \Delta(\hat{y}_t, y_t) + (1-\alpha)\Omega \rangle}{\gamma}\right) \right), \] where \( A, \Delta(\hat{y}_t, y_t), \Omega \) are the warping path, cost matrix, and penalization matrix, respectively. While DILATE performs better than existing methods, it has a limitation from the perspective of invariance. DILATE highly depends on DTW, which allows for the dynamic alignment of the time-series within a predefined window. In such windows, DTW can align the signal regardless of its information (e.g., periodicity). As a result, the model can misbehave in ways that cheat DTW within the window, as shown in Fig. 1 (middle). DTW's scale and noise sensitivity are also problematic. DTW computes the Euclidean distance of two time-series after their temporal alignment via dynamic programming, and the alignment relies on the distance function. Consequently, the dynamic alignment of DTW can be properly achieved only when the two time-series have the same range (Esling & Agon, 2012; Bellman & Kalaba, 1959). This means that it hardly achieves invariance to amplitude distortions without appropriate pre-processing.
Gong & Chen (2017) also show that DTW poorly matches the prediction and target (i.e., ground truth) time-series under amplitude shifting. Even when the target time-series is aligned with normalization, the appropriate alignment of the prediction and target time-series cannot be guaranteed because of DTW's high sensitivity to noise. As a result, DILATE can generate poor alignment results, which can cause incorrect TDI optimization, producing wrong results and instability during the optimization steps. To design an effective shape-aware loss function, we must understand the measures and the cases in which they have transformation invariances. In the next section, we interpret transformations from a time-series forecasting viewpoint and discuss the types of transformations that should be considered in objective function design.

4 METHODS In this section, we discuss and propose the design rationale for the shape-aware loss function (Sec. 4.1). Based on this design rationale, we implement a novel loss function, TILDE-Q (a Transformation Invariant Loss function with Distance EQuilibrium), which allows models to perform shape-aware time-series forecasting based on three distortion invariances.

4.1 TRANSFORMATION INVARIANCES IN TIME-SERIES FORECASTING In the time-series domain, data often have various distortions; thus, measurements need to satisfy numerous transformation invariances to meaningfully model temporal dynamics. As discussed in Sec. 3.1, we set the goals of time-series forecasting as (1) point-wisely reducing the gap between the prediction and target time-series and (2) preserving the temporal dynamics of the target time-series. To satisfy both, we have to consider (1) a method that does not negatively impact the traditional goal of accurate time-series forecasting and (2) distortions that play a crucial role in capturing the temporal dynamics of the target time-series. In this section, we review all six distortions based on whether their corresponding invariance is feasible for a time-series forecasting loss function, discuss each invariance's benefits and trade-offs, and identify the appropriate distortions to be considered in time-series forecasting.

Amplitude Shifting In a wide range of situations, it is beneficial to capture the trends of time-series sequences despite shifts in amplitude. Thus, a loss function invariant to amplitude shifting is highly advantageous in time-series forecasting, enabling (1) shape awareness regardless of amplitude shifts, (2) accurate modeling of value deviations, and (3) effective on-time prediction of peaks or sudden changes. To guarantee amplitude shifting invariance in the optimization stage, the loss function should induce an equal gap $k$ between the prediction and ground truth data at each step. Specifically, a loss function considering amplitude shifting should satisfy: $$L(Y, \hat{Y}) = 0 \iff \forall i \in [1, \ldots, n], d(y_i, \hat{y}_i) = k, \quad (1)$$ where $k \in \mathbb{R}$ is an arbitrary but equal gap, and $d(y_i, \hat{y}_i)$ is a signed distance (e.g., positive when $y_i > \hat{y}_i$). By allowing tolerance between the prediction and target time-series, models can follow trends in the time-series instead of predicting exact values point-wisely. In short, unlike existing loss functions, which handle only point-wise distances (e.g., DTW), we should deal with both the point-wise distances and their relational values to guarantee amplitude shifting invariance.
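To make this concrete, the following minimal NumPy sketch (our illustration, not code from the paper) contrasts MSE with a gap-proportion measure in the spirit of Eq. 1 on an amplitude-shifted copy of a signal; the softmax construction anticipates Sec. 4.2:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 100)
F = np.sin(t)
G = F + 2.0                          # amplitude-shifted copy: same shape, gap k = 2

mse = np.mean((F - G) ** 2)          # 4.0 -> MSE treats F and G as very different
d = G - F                            # signed per-step gap, constant at k
prop = np.exp(d) / np.exp(d).sum()   # proportions of the gaps (softmax)
shift_inv = len(t) * np.abs(1 / len(t) - prop).sum()  # 0.0 -> constant gap is optimal
print(f"MSE: {mse:.2f}, shift-invariant loss: {shift_inv:.2f}")
```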
Phase Shifting There are some forecasting tasks whose main objectives concern the accurate forecasting of peaks and periodicity in time-series (e.g., heartbeat data and stock price data). For such tasks, phase shifting invariance is an optimal solution for (1) modeling periodicity, regardless of translation on the temporal axis, and (2) obtaining precise statistics of shapes, such as peak and plateau values. To be invariant to phase shifting, the loss function should satisfy $$L(Y, \hat{Y}) = 0 \iff Y \text{ and } \hat{Y} \text{ have the same dominant frequency}. \quad (2)$$ Note that Eq. 2 allows a shape similar to the target time-series in forecasting, not exactly the same shape (e.g., $\sin(x)$ and $2\sin(x + x_0)$ with the same dominant frequency).

Uniform Amplification This proposition can be utilized in the case of sparse data that contain a significant number of zeros. By adopting uniform amplification invariance, models are able to focus on non-zero sequences, as this proposition allows models to receive less penalty on zero sequences. Since it guarantees shape awareness up to a multiplication factor in a timely manner, as shown in Fig. 2, invariance to uniform amplification fits well. For a model to be trained with uniform amplification invariance, the loss function should satisfy the following proposition: \[ L(Y, \hat{Y}) = 0 \iff \forall i \in [1, \ldots, n], \frac{y_i}{\hat{y}_i} = k \;\; (\hat{y}_i \neq 0). \quad (3) \]

Uniform Time Scaling, Dynamic Amplification, and Dynamic Time Scaling After careful consideration, we conclude that uniform time scaling, dynamic amplification, and dynamic time scaling are unsuitable for optimization. The reasons are described below. To achieve invariance to uniform time scaling, the loss function should satisfy: \[ L(Y, \hat{Y}) = 0 \iff \exists c \in \mathbb{Z}^+ : \forall i \in [0, 1, \ldots, T'], \; y_i = \hat{y}_{ci} \text{ or } y_{ci} = \hat{y}_i. \] This proposition would negatively influence the original temporal dynamics, considering that it creates tolerance for mispredicting periodicity (e.g., daily periodic signals) and cannot identify events (e.g., abruptly changing values) in a timely manner. In summary, it hinders models from capturing shapes and corrupts periodic information. For both dynamic amplification and dynamic time scaling, the loss functions are zero for all pairs when there is no limit on the tolerance. Formally, the proposition for dynamic amplification invariance is as follows: \[ L(Y, \hat{Y}) = 0 \iff \forall c_i \in \mathbb{R} : y_i = c_i \hat{y}_i. \] If a loss function satisfies this proposition without a bound on \( c_i \), it is always zero because there always exists \( c_i = y_i / \hat{y}_i \), except when \( \hat{y}_i = 0 \). Therefore, it cannot provide any information, because any random values could be an optimal solution. The same situation happens for dynamic time scaling if we do not limit the window. Consequently, all three (uniform time scaling, dynamic amplification, and dynamic time scaling) are unsuitable as objectives in time-series forecasting.

4.2 TILDE-Q: TRANSFORMATION INVARIANT LOSS FUNCTION WITH DISTANCE EQUILIBRIUM To build a transformation invariant loss function, we need to design a loss function that satisfies the propositions for amplitude shifting (Eq. 1), phase shifting (Eq. 2), and uniform amplification invariance (Eq. 3), as discussed in Sec. 4.1.
Furthermore, the loss function should guarantee a small \( L_p \) norm between the prediction and the label, which is the traditional goal of forecasting. Both conditions are hard to satisfy simultaneously with existing loss functions, such as MSE or DILATE. To handle all three distortions while considering the traditional goal, we build three objective functions (the a.shift, phase, and amp losses), each achieving one or more invariances, using the softmax function, Fourier coefficients, and autocorrelation.

Amplitude Shifting Invariance with Softmax (Amplitude Shifting) To achieve amplitude shifting invariance, we design a loss function that satisfies Eq. 1. This means that \( d(y_i, \hat{y}_i) \) must have the same value for all \( i \). To satisfy this condition, we utilize the softmax function: \[ L_{a.shift}(Y, \hat{Y}) = T' \sum_{i=1}^{T'} \left| \frac{1}{T'} - \text{Softmax}(d(y_i, \hat{y}_i)) \right|, \quad \text{Softmax}(d(y_i, \hat{y}_i)) = \frac{e^{d(y_i, \hat{y}_i)}}{\sum_{j=1}^{T'} e^{d(y_j, \hat{y}_j)}} \quad (4) \] where \( T' \), Softmax, and \( d(\cdot, \cdot) \) are the sequence length, softmax function, and signed distance function, respectively. Because the softmax produces the proportion of each value, the loss attains its optimum only when Eq. 1 is satisfied. Since the softmax outputs relative values, it can handle any gap \( k \).

Invariances with Fourier Coefficients (Phase Shifting) As discussed in Sec. 4.1, a potential method for obtaining phase shifting invariance is the use of Fourier coefficients. According to the literature (Ng & Goldberger, 2007), the original time-series can be reconstructed from a few dominant frequencies. Thus, we utilize the gap between the dominant Fourier coefficients of the ground truth and the prediction as our objective function for achieving phase shifting invariance.

Table 1: Experimental results on six real-world datasets (four cases) with four state-of-the-art models and three training metrics. For all experiments, we set the input sequence length $T = 96$.
| Model | N-Beats | Informer | Autoformer | FEDformer |
|-------|---------|----------|------------|-----------|
| | MSE | LCSS | MSE | LCSS | MSE | LCSS | MSE | LCSS | MSE | LCSS | MSE | LCSS |
| Metric| | | | | | | | | | | | |
| F1@2 | 96 | 0.187 | 0.468 | 0.310 | 0.487 | 0.155 | 0.586 | 0.246 | 0.463 | 0.328 | 0.505 | 0.176 | 0.537 |
| | 192 | 0.289 | 0.544 | 0.410 | 0.538 | 0.213 | 0.537 | 0.308 | 0.443 | 0.416 | 0.586 | 0.295 | 0.416 |
| | 336 | 0.361 | 0.545 | 0.458 | 0.531 | 0.283 | 0.537 | 0.308 | 0.443 | 0.416 | 0.586 | 0.295 | 0.416 |
| | 720 | 0.434 | 0.453 | 0.671 | 0.457 | 0.304 | 0.532 | 0.287 | 0.442 | 0.420 | 0.643 | 0.248 | 0.287 |
| F1@1 | 96 | 0.122 | 0.576 | 0.205 | 0.510 | 0.128 | 0.616 | 0.115 | 0.670 | 0.226 | 0.526 | 0.131 | 0.698 |
| | 192 | 0.182 | 0.458 | 0.250 | 0.481 | 0.170 | 0.619 | 0.136 | 0.656 | 0.280 | 0.500 | 0.176 | 0.655 |
| | 336 | 0.240 | 0.458 | 0.325 | 0.510 | 0.228 | 0.619 | 0.136 | 0.656 | 0.280 | 0.500 | 0.176 | 0.655 |
| | 720 | 0.306 | 0.458 | 0.384 | 0.510 | 0.228 | 0.619 | 0.136 | 0.656 | 0.280 | 0.500 | 0.176 | 0.655 |
| Exchange | 96 | 0.366 | 0.658 | 1.115 | 0.507 | 0.318 | 0.722 | 0.279 | 0.703 | 0.1085 | 0.645 | 0.280 | 0.727 |
| | 192 | 0.430 | 0.621 | 1.185 | 0.497 | 0.338 | 0.718 | 0.279 | 0.706 | 1.120 | 0.605 | 0.307 | 0.733 |
| | 336 | 0.504 | 0.621 | 1.255 | 0.497 | 0.338 | 0.718 | 0.279 | 0.706 | 1.120 | 0.605 | 0.307 | 0.733 |
| | 720 | 0.574 | 0.571 | 1.306 | 0.533 | 0.454 | 0.696 | 0.641 | 0.456 | 1.370 | 0.550 | 0.467 | 0.629 |
| Weather | 96 | 0.234 | 0.830 | 1.332 | 0.525 | 0.229 | 0.837 | 0.261 | 0.833 | 0.261 | 0.833 | 0.258 | 0.836 |
| | 192 | 0.345 | 0.792 | 1.460 | 0.521 | 0.339 | 0.821 | 0.331 | 0.811 | 0.297 | 0.812 | 0.299 | 0.817 |
| | 336 | 0.430 | 0.792 | 1.532 | 0.518 | 0.347 | 0.815 | 0.331 | 0.811 | 0.297 | 0.812 | 0.299 | 0.817 |
| | 720 | 0.506 | 0.421 | 1.003 | 0.431 | 0.002 | 0.508 | 0.005 | 0.452 | 0.010 | 0.479 | 0.003 | 0.552 |

For the other frequencies, we use the norm of the prediction sequence to reduce the value of the Fourier coefficient. Consequently, this loss function keeps the temporal dynamics of the original time-series (i.e., its dominant frequencies) and achieves noise robustness by suppressing white noise in the non-dominant frequencies. We achieve phase shifting invariance by optimizing the following loss function:

$$L_{\text{phase}}(Y, \hat{Y}) = \begin{cases} ||\mathcal{F}(Y) - \mathcal{F}(\hat{Y})||_p, & \text{dominant freq.} \\ ||\mathcal{F}(\hat{Y})||_p, & \text{otherwise} \end{cases} \quad (5)$$

where $\mathcal{F}$ denotes the Fourier transform and $|| \cdot ||_p$ is the $L_p$ norm. To obtain the dominant frequency terms, we calculate the norm of the Fourier coefficient for each frequency and filter them by the square root of the sequence length, $\sqrt{T'}$. We also guarantee a minimum of $\sqrt{T'}$ dominant frequencies. This loss function obtains uniform amplification invariance through the application of a normalization technique to the Fourier coefficients. For example, $\sin x$ and $c \cdot \sin x$ have the same Fourier coefficients if appropriately normalized. In summary, from Eq. 5, we obtain (1) invariance to phase shifting, (2) invariance to uniform amplification, and (3) robustness to noise.
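A minimal PyTorch sketch of the two losses introduced so far, assuming a univariate horizon of length $T'$; the dominant-frequency selection below keeps the $\sqrt{T'}$ largest magnitudes, a simplification of the filtering rule described above:

```python
import torch

def ashift_loss(y, y_hat):
    # Eq. 4: softmax over signed gaps; a constant gap k yields uniform
    # proportions 1/T' and therefore zero loss (amplitude shifting invariance)
    T = y.shape[-1]
    d = y - y_hat
    return T * torch.sum(torch.abs(1.0 / T - torch.softmax(d, dim=-1)), dim=-1).mean()

def phase_loss(y, y_hat, p=2):
    # Eq. 5: match dominant Fourier coefficients, shrink the rest toward zero
    T = y.shape[-1]
    Fy, Fy_hat = torch.fft.rfft(y, dim=-1), torch.fft.rfft(y_hat, dim=-1)
    k = max(int(T ** 0.5), 1)                       # keep at least sqrt(T') freqs
    thresh = torch.topk(Fy.abs(), k, dim=-1).values[..., -1:]
    dominant = Fy.abs() >= thresh
    gap = torch.where(dominant, (Fy - Fy_hat).abs(), Fy_hat.abs())
    return gap.norm(p=p, dim=-1).mean()
```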
Invariances with Autocorrelation (Uniform Amplification) Although Fourier coefficients can be considered a reasonable solution for determining the periodicity of the target time-series, they are not completely invariant to phase shifting for three reasons: (1) the data statistics (e.g., mean and variance) keep changing, (2) such changing statistics also cause changes in the Fourier coefficients even at the same frequency, and (3) objectives based only on the norm of Fourier coefficients cannot fully represent the original time-series. Thus, we introduce an objective based on normalized cross-correlation, which satisfies Eq. 2 for a periodic signal:

$$L_{\text{amp}}(Y, \hat{Y}) = ||R(Y, Y) - R(Y, \hat{Y})||_p, \quad (6)$$

where $R(\cdot, \cdot)$ is a normalized cross-correlation function. This loss function helps predicted sequences mimic label sequences by penalizing the difference between the autocorrelation of the label sequence and the cross-correlation between the label and predicted sequences. As a result, the label and prediction have similar temporal dynamics, regardless of phase shifting or uniform amplification. In summary, we introduce TILDE-Q, combining Eq. 4, Eq. 5, and Eq. 6 as follows:

$$L_{\text{TILDE-Q}}(Y, \hat{Y}) = \alpha L_{\text{a.shift}}(Y, \hat{Y}) + (1 - \alpha)L_{\text{phase}}(Y, \hat{Y}) + \gamma L_{\text{amp}}(Y, \hat{Y}), \quad (7)$$

where $\alpha \in [0, 1]$ and $\gamma$ are hyperparameters.

5 EXPERIMENTS In this section, we present the results of our comprehensive experiments, demonstrating the effectiveness of TILDE-Q and the importance of transformation invariance.

Table 2: Experimental results of short-term time-series forecasting on three datasets with a sequence-to-sequence GRU model.

| Methods | GRU + MSE | GRU + DILATE | GRU + TILDE-Q |
|---------|-----------|-------------|--------------|
| Eval | MSE | DTW | TDI | LCSS | MSE | DTW | TDI | LCSS | MSE | DTW | TDI | LCSS |
| Synthetic | 0.0107 | 3.5080 | 1.0392 | 0.3523 | 0.0130 | 3.4005 | 1.1242 | 0.3825 | 0.0119 | 3.2873 | 1.1564 | 0.3811 |
| ECG5000 | 0.2152 | 1.9718 | 0.8442 | 0.7743 | 0.8270 | 3.9579 | 2.0281 | 0.4356 | 0.2141 | 1.9575 | 0.7714 | 0.7773 |
| Traffic | 0.0070 | 1.4628 | 0.2343 | 0.7209 | 0.0095 | 1.6929 | 0.2814 | 0.6806 | 0.0072 | 1.4600 | 0.2276 | 0.7220 |

**Experimental Setup** We conduct the experiments with four state-of-the-art models, namely Informer (Zhou et al., 2021), N-Beats (Oreshkin et al., 2020), Autoformer (Wu et al., 2021), and FEDformer (Zhou et al., 2022), and a simple sequence-to-sequence gated recurrent unit (GRU) model. For model training, we use seven real-world datasets (ECG5000, Traffic, ETTh2, ETTm2, ECL, Exchange, and Weather) and one synthetic dataset, Synthetic. We repeat each experiment with each model and dataset 10 times in combination with three different objective functions. Appendix A provides detailed explanations of the datasets, hyperparameter settings, models, and source code. We also provide experimental results with NSFormer (Liu et al., 2022) in the Appendix.

**Evaluation Metrics** In this experiment, we evaluate TILDE-Q with three evaluation metrics: mean squared error (MSE), dynamic time warping (DTW), and the corresponding temporal distortion index (TDI), all of which are adopted from Le Guen & Thome (2019). As DTW is sensitive to noise and generates incorrect paths when one of the time-series is noisy (as discussed in Sec. 3.3), we additionally use the longest common subsequence (LCSS) for comparison, which is more robust to outliers and noise (Esling & Agon, 2012).
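For reference, a minimal NumPy sketch of an LCSS score for real-valued series; the matching threshold eps (and any temporal window) is an evaluation hyperparameter we assume here, as the text does not specify one:

```python
import numpy as np

def lcss(a, b, eps=0.1):
    # Longest common subsequence for real-valued series: two points match
    # when they differ by less than eps; returns a score normalized to [0, 1]
    n, m = len(a), len(b)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:
                L[i, j] = L[i - 1, j - 1] + 1
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    return L[n, m] / min(n, m)
```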
The longer the matched subsequences, the higher the LCSS score, indicating better shape modeling. For the state-of-the-art models, we report the MSE and LCSS. For detailed results, including those for DTW and TDI, please refer to Appendix B.

**Experimental Results and Analysis** Table 2 shows the short-term forecasting performance of the GRU model optimized with the MSE, DILATE, and TILDE-Q metrics. On the Synthetic dataset, each metric shows its own benefits. This result indicates that shape-similarity losses and MSE specialize in shape and exact values, respectively. It also means that a better MSE does not guarantee a better solution for temporal dynamics. Moreover, when the model is evaluated on real-world datasets, TILDE-Q outperforms the other objective functions on most evaluation metrics. These results indicate that our approach of learning shapes in time-series data achieves better forecasting results than existing methods. DILATE does not show impressive performance on ECG5000 due to its high sensitivity to noise, as discussed in Sec. 3.3. Table 1 summarizes the experimental results obtained with the four state-of-the-art models, N-Beats, Informer, Autoformer, and FEDformer. The models make predictions for both short-term ($L=96$) and long-term ($L$ up to 720) horizons. Thus, we can investigate their performance under different forecasting difficulties. On most datasets, the models with TILDE-Q outperform those with other training metrics. Especially for long-term forecasting, N-Beats and Informer with TILDE-Q show significantly improved performance compared to those with the other metrics. Appendix B presents more detailed analysis, qualitative experiments with example visualizations, and ablation study results. These results imply that TILDE-Q improves the models' ability to learn temporal dynamics, including the LCSS of N-Beats (improved by over 10%).

**6 CONCLUSION AND FUTURE WORK** We propose TILDE-Q, which allows shape-aware time-series forecasting in a timely manner. To design TILDE-Q, we review existing transformations in time-series data and discuss the conditions that ensure transformation invariance during optimization. The designed TILDE-Q is invariant to amplitude shifting, phase shifting, and uniform amplification, ensuring that a model better captures shapes in time-series data. To prove the effectiveness of TILDE-Q, we conduct comprehensive experiments with state-of-the-art models and real-world datasets. The results indicate that models trained with TILDE-Q generate more timely, robust, accurate, and shape-aware forecasts in both short-term and long-term forecasting tasks. We conjecture that this work can facilitate future research on transformation invariances and shape-aware forecasting.

REFERENCES Abid, A. and Zou, J. Y. Learning a warping distance from unlabeled time series using sequence autoencoders. In *Advances in Neural Information Processing Systems*, volume 31, pp. 10568–10578, 2018. Bakshi, B. and Stephanopoulos, G. Representation of process trends—IV. Induction of real-time patterns from operating data for diagnosis and supervisory control. *Computers & Chemical Engineering*, 18(4):303–332, 1994. Batista, G. E. A. P. A., Keogh, E. J., Tataw, O. M., and de Souza, V. M. A. CID: an efficient complexity-invariant distance for time series. *Data Mining and Knowledge Discovery*, 28(3):634–669, 2014. doi: 10.1007/s10618-013-0312-3. Bellman, R. and Kalaba, R.
On adaptive control processes. *IRE Transactions on Automatic Control*, 4(2):1–9, 1959. Berkhin, P. A survey of clustering data mining techniques. In *Grouping Multidimensional Data - Recent Advances in Clustering*, pp. 25–71. Springer, 2006. Berndt, D. J. and Clifford, J. Using dynamic time warping to find patterns in time series. In *Proceedings of the International Conference on Knowledge Discovery and Data Mining*, AAAIWS'94, pp. 359–370. AAAI Press, 1994. Bica, I., Alaa, A. M., Jordon, J., and van der Schaar, M. Estimating counterfactual treatment outcomes over time through adversarially balanced representations. In *International Conference on Learning Representations*, 2020. Box, G. E. P., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. *Time series analysis: forecasting and control*. John Wiley, 2015. Clevert, D., Unterthiner, T., and Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). In *Proceedings of the International Conference on Learning Representations*, 2016. Cuturi, M. and Blondel, M. Soft-DTW: a differentiable loss function for time-series. In *Proceedings of the 34th International Conference on Machine Learning*, ICML'17, pp. 894–903, 2017. Das, G., Gunopulos, D., and Mannila, H. Finding similar time series. In *Principles of Data Mining and Knowledge Discovery*, pp. 88–100, 1997. Dau, H. A., Bagnall, A., Kamgar, K., Yeh, C.-C. M., Zhu, Y., Gharghabi, S., Ratanamahatana, C. A., and Keogh, E. The UCR time series archive. *IEEE/CAA Journal of Automatica Sinica*, 6(6):1293–1305, 2019. Ding, H., Trajcevski, G., Scheuermann, P., Wang, X., and Keogh, E. Querying and mining of time series data: Experimental comparison of representations and distance measures. *Proceedings of the VLDB Endowment*, 1(2):1542–1552, 2008. Esling, P. and Agon, C. Time-series data mining. *ACM Computing Surveys*, 45(1), 2012. Gong, Z. and Chen, H. Dynamic state warping. *CoRR*, abs/1703.01141, 2017. Kaiser, L., Nachum, O., Roy, A., and Bengio, S. Learning to remember rare events. In *5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings*, 2017. Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019. Keogh, E. J. Efficiently finding arbitrarily scaled patterns in massive time series databases. In *Knowledge Discovery in Databases: PKDD 2003*, volume 2838 of *Lecture Notes in Computer Science*, pp. 253–265, 2003.
Fq8tKtjACC
Another point of contention is the authors' assertion that phi-1 consumed less compute for training. They overlook the computational resources expended in creating their training data, and more importantly, the compute required to train the foundational LLMs.
TEXTBOOKS ARE ALL YOU NEED Anonymous authors Paper under double-blind review ABSTRACT We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of "textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains a pass@1 accuracy of 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent properties compared to phi-1-base, our model before our fine-tuning stage on a coding exercises dataset, and phi-1-small, a model with 350M parameters trained with the same pipeline that still achieves 45% on HumanEval.

1 INTRODUCTION The art of training large artificial neural networks has made extraordinary progress in the last decade, especially after the discovery of the Transformer architecture [Vaswani et al., 2017], yet the science behind this success remains limited. Amidst a vast and confusing array of results, a semblance of order emerged around the same time as Transformers were introduced, namely that performance improves somewhat predictably as one scales up either the amount of compute or the size of the network [Hestness et al., 2017], a phenomenon which is now referred to as scaling laws [Kaplan et al., 2020]. The subsequent exploration of scale in deep learning was guided by these scaling laws [Brown et al., 2020], and discoveries of variants of these laws led to rapid jumps in performance [Hoffmann et al., 2022]. In this work, following the footsteps of Eldan and Li [Eldan & Li, 2023], we explore the improvement that can be obtained along a different axis: the quality of the data. It has long been known that higher quality data leads to better results, e.g., data cleaning is an important part of modern dataset creation [Raffel et al., 2020], and it can yield other side benefits such as somewhat smaller datasets [Longpre et al., 2023; Yu et al., 2023] or allowing for more passes on the data [Muennighoff et al., 2023]. The recent work of Eldan and Li on TinyStories (a high quality dataset synthetically generated to teach English to neural networks) showed that in fact the effect of high quality data extends well past this: improving data quality can dramatically change the shape of the scaling laws, potentially allowing one to match the performance of large-scale models with much leaner training/models. In this work we go beyond the initial foray of Eldan and Li to show that high quality data can even improve the SOTA of large language models (LLMs), while dramatically reducing the dataset size and training compute. Importantly, smaller models requiring less training can significantly reduce the environmental cost of LLMs [Bender et al., 2021]. We focus our attention on LLMs trained for code, and specifically on writing simple Python functions from their docstrings as in [Chen et al., 2021]. The evaluation benchmark proposed in the latter work, HumanEval, has been widely adopted for comparing LLMs' performance on code. We demonstrate the power of high quality data in breaking existing scaling laws by training a 1.3B-parameter model, which we call phi-1, for roughly 8 passes over 7B tokens (slightly over 50B total tokens seen) followed by finetuning on less than 200M tokens.
Roughly speaking, we pretrain on "textbook quality" data, both synthetically generated (with GPT-3.5) and filtered from web sources, and we finetune on "textbook-exercise-like" data. Despite being several orders of magnitude smaller than competing models, both in terms of dataset and model size (see Table 1), we attain 50.6% pass@1 accuracy on HumanEval and 55.5% pass@1 accuracy on MBPP (Mostly Basic Python Programs), which are among the best self-reported numbers using only a single LLM generation. In Section 2, we give some details of our training process, and we discuss evidence for the importance of our data selection process in achieving this result. Moreover, despite being trained on far fewer tokens than existing models, phi-1 still displays emergent properties. In Section 3 we discuss these emergent properties, and in particular we confirm the hypothesis that the number of parameters plays a key role in emergence (see e.g., [Wei et al., 2022]), by comparing the outputs of phi-1 with those of phi-1-small, a model trained with the same pipeline but with only 350M parameters. The methodology used in that section is reminiscent of the Sparks of AGI paper [Bubeck et al., 2023] for beyond-benchmark evaluation. Finally, in Section 4 we discuss alternative benchmarks to evaluate the model, and in Section 5 we study possible contamination of our training data with respect to HumanEval.

| Date | Model | Model size (Parameters) | Dataset size (Tokens) | HumanEval (Pass@1) | MBPP (Pass@1) |
|----------|------------------------|-------------------------|-----------------------|--------------------|---------------|
| 2021 Jul | Codex-300M [Chen et al., 2021] | 300M | 100B | 13.2% | - |
| 2021 Jul | Codex-12B [Chen et al., 2021] | 12B | 100B | 28.8% | - |
| 2022 Mar | CodeGen-Mono-350M [Nijkamp et al., 2023b] | 350M | 577B | 12.8% | - |
| 2022 Mar | CodeGen-Mono-16.1B [Nijkamp et al., 2023b] | 16.1B | 577B | 29.3% | 35.3% |
| 2022 Apr | PaLM-Coder [Chowdhery et al., 2022] | 540B | 780B | 35.9% | 47.0% |
| 2022 Sep | CodeGeeX [Zheng et al., 2023] | 13B | 850B | 22.9% | 24.4% |
| 2022 Nov | GPT-3.5 [OpenAI, 2023] | 175B | N.A. | 47% | - |
| 2022 Dec | SantaCoder [Allal et al., 2023] | 1.1B | 236B | 14.0% | 35.0% |
| 2023 Mar | GPT-4 [OpenAI, 2023] | N.A. | N.A. | 67% | - |
| 2023 Apr | Replit [Replit, 2023] | 2.7B | 525B | 21.9% | - |
| 2023 Apr | Replit-Finetuned [Replit, 2023] | 2.7B | 525B | 30.5% | - |
| 2023 May | CodeGen2-1B [Nijkamp et al., 2023a] | 1B | N.A. | 10% | - |
| 2023 May | CodeGen2-7B [Nijkamp et al., 2023a] | 7B | N.A. | 19.1% | - |
| 2023 May | StarCoder [Li et al., 2023] | 15.5B | 1T | 33.6% | 52.7% |
| 2023 May | StarCoder-Prompted [Li et al., 2023] | 15.5B | 1T | 40.8% | 49.5% |
| 2023 May | PaLM-2 [Anil et al., 2023] | N.A. | N.A. | 37.6% | 50.0% |
| 2023 May | CodeT5+ [Wang et al., 2023] | 2B | 52B | 24.2% | - |
| 2023 May | InstructCodeT5+ [Wang et al., 2023] | 16B | 52B | 35.0% | - |
| 2023 Jun | WizardCoder [Luo et al., 2023] | 16B | 1T | 57.3% | 51.8% |
| 2023 Jun | phi-1 | 1.3B | 7B | 50.6% | 55.5% |

Table 1: We use self-reported scores whenever available. Despite being trained at a vastly smaller scale, phi-1 outperforms several competing models on HumanEval and MBPP.
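For context, HumanEval and MBPP scores are computed with the standard unbiased pass@k estimator of Chen et al. [2021]; a short sketch (with a single sample per problem, pass@1 reduces to the raw pass rate):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # n generated samples for a problem, c of which pass the unit tests
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=10, c=3, k=1))  # 0.3 -> expected pass@1 estimated from 10 samples
```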
We release the model for usage and evaluation by the broader community, but omit some details of the synthetic data generation, for proprietary reasons.\footnote{In the recent past, other highly influential papers like [Brown et al., 2020] and [Lewkowycz et al., 2022] have similarly withheld dataset details for competitive advantage.}

More related works. Our work is part of the recent program of using LLMs for program synthesis; see [Chen et al., 2021; Nijkamp et al., 2022] for more references. Our approach is also part of the emerging trend of using existing LLMs to synthesize data for the training of new generations of LLMs [Wang et al., 2022; Taori et al., 2023; Mukherjee et al., 2023; Lin et al., 2023; Jung et al., 2023]. There is an ongoing debate about whether such "recursive training" might lead to a narrower scope for the resulting LLM [Shumailov et al., 2023; Gudibande et al., 2023]; see [Mukherjee et al., 2023] for a counterviewpoint. Note that in this paper we focus on a narrow task, similarly to [Jung et al., 2023], where it is plausible to improve upon the teacher LLM (as is argued in the latter paper).

2 TRAINING DETAILS AND THE IMPORTANCE OF HIGH-QUALITY DATA As alluded to in the title of the paper, the central ingredient of our model is textbook-quality training data. We devote this section primarily to our data curation ideas.\footnote{Our model architecture and training methods are largely conventional and discussed in Appendix D.} Previous work used standard sources of text and code data for code generation, such as The Stack [Kocetkov et al., 2022] and other web-based datasets (e.g., StackOverflow). While these form a large and diverse corpus covering a broad range of topics and use cases, we argue that these sources are not optimal for teaching the model how to reason and plan algorithmically. Based on manual inspection, we observe that many of these snippets are not very instructive for learning the basics of coding: - Many samples are not self-contained, meaning that they depend on other modules or files that are external to the snippet, making them hard to understand without additional context. - Typical examples do not involve any meaningful computation, but rather consist of trivial or boilerplate code, such as defining constants, setting parameters, or configuring GUI elements. - Samples that do contain algorithmic logic are often buried inside complex or poorly documented functions, making them difficult to follow or learn from. - The examples are skewed towards certain topics or use cases, resulting in an unbalanced distribution of coding concepts and skills across the dataset.

Figure 1: Pass@1 accuracy (%) on HumanEval. The grouping of bar plots corresponds to the usual scaling dimensions of either increasing the compute time (more passes on the data, here from 26B tokens seen to 76B) or increasing the number of parameters of the model (here from 350M to 1.3B). Each column within a group corresponds to a different training dataset: (A) The first (orange) column represents the performance of models trained on the standard datasets of deduplicated Python files from The Stack and StackOverflow; (B) The second (light green) column represents the performance of models trained with our new dataset composition CodeTextbook; (C) Finally, the third (dark green) column corresponds to the respective second-column models finetuned on our new CodeExercises dataset.
For the 1.3B models, phi-1 and phi-1-base are checkpoints after training on 51B tokens, and The Stack+ model was trained for 76B tokens. We highlight that even without any finetuning, our phi-1-base model trained on the CodeTextbook dataset achieves 29% HumanEval performance with a mere 1.3B-parameter model. The previous smallest model that achieved close to 30% performance on HumanEval was Replit-Finetuned at 2.7B parameters, which was trained with 100 times more training tokens than us [Replit, 2023]. On top of this, finetuning on our CodeExercises dataset to obtain phi-1 not only gives us our top performance of 51% on HumanEval, but also unlocks unexpected coding capabilities (see Section 3). One can only imagine how frustrating and inefficient it would be for a human learner to try to acquire coding skills from these datasets, as they would have to deal with a lot of noise, ambiguity, and incompleteness in the data. We hypothesize that these issues also affect the performance of language models, as they reduce the quality and quantity of the signal that maps natural language to code. We conjecture that language models would benefit from a training set that has the same qualities as a good "textbook": it should be clear, self-contained, instructive, and balanced. In this work, we address this challenge directly and show that by intentionally selecting and generating high-quality data, we can achieve state-of-the-art results on code-generation tasks with a much smaller model and less compute than existing approaches. Our training relies on three main datasets: - A filtered code-language dataset, which is a subset of The Stack and StackOverflow, obtained by using a language-model-based classifier (consisting of about 6B tokens). - A synthetic textbook dataset of <1B tokens of GPT-3.5-generated Python textbooks. - A small synthetic exercises dataset of ~180M tokens of Python exercises and solutions. We describe those datasets in more detail in the next subsections. Taken together, the above datasets contain less than 7B tokens. We refer to the combination of the filtered code-language and synthetic textbook datasets as "CodeTextbook" and use it in the pretraining phase to obtain our base model phi-1-base; this model already achieves a competitive HumanEval performance of 29%. Then we use the 180M-token synthetic exercises dataset, referred to as "CodeExercises", to finetune our phi-1-base model to obtain phi-1. Despite the small size of the "CodeExercises" dataset, finetuning with this dataset is crucial not only for the large improvements in generating simple Python functions shown in Figure 1, but more broadly for unlocking many interesting emergent capabilities in our phi-1 model that are not observed in phi-1-base (see Section 3).

2.1 Filtering of Existing Code Datasets Using a Transformer-Based Classifier We begin with publicly available Python code datasets: we use the Python subset of the deduplicated version of The Stack and StackOverflow, which together contain over 35 million files/samples, totaling over 35B tokens. We annotate the quality of a small subset of these files (about 100k samples) using GPT-4: given a code snippet, the model is prompted to "determine its educational value for a student whose goal is to learn basic coding concepts". We then use this annotated dataset to train a random forest classifier that predicts the quality of a file/sample using its output embedding from a pretrained CodeGen model as features.
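A minimal sketch of this filtering pipeline; the embed() stub and the classifier hyperparameters are our assumptions, as the paper specifies only a random forest over embeddings from a pretrained code model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def embed(snippet: str) -> np.ndarray:
    # Stand-in for the output embedding of a pretrained code model;
    # in practice this would come from a pretrained CodeGen encoder
    return rng.normal(size=256)

def train_quality_filter(snippets, gpt4_labels):
    # Fit on the ~100k GPT-4-annotated samples (1 = educationally valuable),
    # then score the remaining files and keep the high-value ones
    X = np.stack([embed(s) for s in snippets])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, gpt4_labels)
    return clf
```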
We note that unlike GPT-3.5, which we use extensively to generate synthetic content (discussed below), we use GPT-4 minimally, only for annotations on the quality of a small subset of The Stack and StackOverflow samples. We thus view our usage of GPT-4 as merely a way to avoid tedious human-annotation efforts [Dubois et al., 2023]. Our filtering boosts model performance significantly even without the synthetic datasets discussed below: for 350M-parameter models trained on the unfiltered Stack (deduplicated Python) and StackOverflow, the HumanEval performance saturates at 12.19% even after training for 96k steps (200B tokens), while training on the filtered subset achieves 17.68% on HumanEval after 36k steps. We further improve this to 20.12% (reported in Figure 1) by training on a combination of the filtered dataset and the synthetic textbooks dataset discussed below.

### 2.2 Creation of Synthetic Textbook-Quality Datasets

One of the main challenges in creating a high-quality dataset for code generation is ensuring that the examples are diverse and non-repetitive. By diversity, we mean that the examples should cover a wide range of coding concepts, skills, and scenarios, and that they should vary in their level of difficulty, complexity, and style. Diversity is important for several reasons: it exposes the language model to different ways of expressing and solving problems in code, it reduces the risk of overfitting or memorizing specific patterns or solutions, and it increases the generalization and robustness of the model to unseen or novel tasks. However, achieving diversity is not trivial, especially when using synthetic data generated by another language model. Simply prompting the model to produce a coding textbook or a set of exercises, even with some variation in the instructions or the parameters, will likely result in a very homogeneous and redundant dataset, where the same concepts and solutions are repeated over and over with minor changes. This is because language models tend to follow the most probable or common paths given their training data and their priors, and they lack the creativity or the incentive to explore alternative or novel ways of generating code. Therefore, one needs to find the right "trick" that will induce the language model to be more creative and diverse in its output, while still maintaining the quality and the coherence of the examples. Inspired by Eldan & Li [2023], where a diverse set of short stories were created by including a random subset of words chosen from some fixed vocabulary in the prompt and requiring that they would be somehow combined in the generated text, we look for ways to inject randomness into the prompt in a way that gives rise to the generation of a diverse dataset.

THE SYNTHETIC TEXTBOOK DATASET This dataset consists of less than 1B tokens of GPT-3.5-generated Python textbooks, synthesized to provide a high-quality source of natural-language-heavy text interleaved with relevant code snippets. We further targeted the content of these textbooks to cover topics that promote reasoning and basic algorithmic skills. Here, diversity is obtained by providing constraints on the topics and target audience of the generated textbook. The following is an example text from the synthetic textbook:

```
To begin, let us define singular and nonsingular matrices. A matrix is said to be singular if its determinant is zero. On the other hand, a matrix is said to be nonsingular if its determinant is not zero. Now, let's explore these concepts through examples.
```
Example 1: Consider the matrix \( A = \text{np.array}([[1, 2], [2, 4]]) \). We can check if this matrix is singular or nonsingular using the determinant function. We can define a Python function, `is_singular(A)`, which returns true if the determinant of \( A \) is zero, and false otherwise.

```python
import numpy as np

def is_singular(A):
    det = np.linalg.det(A)
    if det == 0:
        return True
    else:
        return False

A = np.array([[1, 2], [2, 4]])
print(is_singular(A))  # True
```

THE CODEEXERCISES DATASET This is a small synthetic exercises dataset consisting of less than 180M tokens of Python exercises and solutions. Each exercise is a docstring of a function that needs to be completed. The goal of this dataset is to align the model to perform function completion tasks based on natural language instructions. This dataset was also generated by GPT-3.5, where the main means of eliciting diversity is by constraining the function names. For this dataset in particular, we conduct explicit decontamination and alternative evaluations in the following sections to ensure that problems similar to those from the HumanEval benchmark are not seen during finetuning. Example exercise:

```python
def valid_guessing_letters(word: str, guesses: List[str]) -> List[str]:
    """
    Returns a list of valid guessing letters, which are letters that have
    not been guessed yet and are present in the word.

    Parameters:
    word (str): The word to guess.
    guesses (List[str]): A list of letters that have already been guessed.

    Returns:
    List[str]: A list of valid guessing letters.
    """
    valid_letters = []
    for letter in word:
        if letter not in guesses and letter not in valid_letters:
            valid_letters.append(letter)
    return valid_letters
```

3 SPIKES OF MODEL CAPABILITY AFTER FINETUNING ON CODEEXERCISES Figure 1 showed that the largest improvement in HumanEval resulted from finetuning on the small CodeExercises dataset (<200M tokens). CodeExercises consists exclusively of short Python tasks using only basic Python libraries. In this section, we demonstrate that, quite remarkably, the model after finetuning also exhibits a substantial improvement in executing tasks that are not featured in the finetuning dataset. This includes managing intricate algorithmic tasks and using external libraries. This suggests that our finetuning process might have helped the model in reorganizing and consolidating the knowledge acquired during pretraining, even if such knowledge is not explicitly present in our CodeExercises dataset. In this section we will focus on qualitatively comparing and contrasting the capabilities of our finetuned model phi-1 and its pretrained base model phi-1-base.

3.1 Finetuning Improves the Model's Understanding Using a simple Python function that we created ourselves, we observe in Figure 2 that the model shows a much higher level of understanding and compliance with instructions after finetuning. In particular, phi-1-base struggles with the logical relationships in the prompts, while phi-1 can interpret the question and generate the answer correctly. In this example, even our 350M phi-1-small model shows some level of understanding of the problem, even though the final solution is wrong. **Prompt:** "*** Alice, Bob and Charles are playing games. Alice first pick a random number from 333 to 678, Bob then pick a number starting from Alice's number to 888, Charles then pick a number starting from 123 to Bob's number.
Alice gets one point if Alice's number minus Charles's number is divisible by the floor of the square root of Bob's number, otherwise Bob gets one point. Simulate Alice's and Bob's points in 30 iterations."***

| phi-1 | phi-1-base | phi-1-small |
|-------|------------|-------------|
| points = [0, 0] | self.points = 0 | indices = {} |
| for i in range(30): | self.alice = 333 | import random |
| alice_num = random. | self.bob = 678 | alice_points = 0 |
| randint(333, 678) | self.charles = 888 | bob_points = 0 |
| bob_num = random. | self.bob_points = 0 | for i in range(30): |
| randint(alice_num, 888) | self.alice_points = 0 | alice_num = random.randint |
| charles_num = random. | self.iterations = 30 | (333, 678) |
| randint(123, bob_num) | self.alice_turn = True | bob_num = random.randint |
| if (alice_num - | self.bob_turn = True | (123, bob_num) |
| charles_num) % int( | self.bob_turn_points = 0 | if (alice_num - bob_num) % |
| bob_num ** 0.5) == 0: | self.alice_turn_points = 0 | math.floor(math.sqrt(bob_num)) == 0: |
| points[0] += 1 | _history = [] | alice_points += 1 |
| else: | self.bob_turn_points | if (i % 2) == 0: |
| points[1] += 1 | _history = [] | bob_points += 1 |
| return f"Alice: {points[0]}, Bob: {points[1]}" | ... | print(f"Alice gets {alice_points} points, while Bob gets {bob_points} points.") |

Figure 2: Model performance with a multi-step algorithmic prompt, comparing the effects of finetuning and scale. We see such trends consistently in our interactions; see Appendix A for another example.

3.2 Finetuning Improves the Model's Ability to Use External Libraries We demonstrate here that finetuning on CodeExercises unexpectedly improves the model's ability to use external libraries such as Pygame, Tkinter, and PyTorch, even though our exercises do not contain these libraries. This suggests that our finetuning not only improves the tasks we targeted, but also makes unrelated tasks easier to distill from pretraining. As an example, Figure 3 shows a Pygame example that asks the model to generate code to move a ball, where we see that phi-1 shows a phenomenal improvement over the phi-1-base model. See Appendix A for additional examples.

4 Evaluation on Unconventional Problems with LLM Grading A potential concern with the surprisingly good performance of phi-1 on HumanEval (see Table 1 and Figure 1) is that there might be memorization stemming from contamination of the synthetic CodeExercises dataset. We study this potential contamination directly in Section 5, while this section addresses the concern with a new evaluation that is designed to be unconventional enough to be unlikely to appear in our training data. To minimize bias and leakage, the new evaluation problems were created by a dedicated team that did not access the CodeExercises dataset or the final model. They created 50 new problems in the same format as HumanEval, with instructions to design problems that are unlikely to appear in real-world code bases or as coding exercises. Here is an example:

```python
def sort_concat_square_deduplicate(list1, list2, my_threshold):
    """This function takes two lists of integers, sorts each of them in
    ascending order, concatenates them, squares the entries at even
    indices, filters out entries smaller than my_threshold and then
    removes duplicates. The resulting list is returned."""
```

One of the challenges of evaluating language models on coding tasks is that the output of the model is often binary: either the code passes all the unit tests or it fails.
One of the challenges of evaluating language models on coding tasks is that the output of the model is often binary: either the code passes all the unit tests or it fails. However, this does not capture the nuances of the model's performance: it might have produced code that is almost correct but has a minor error, or code that is completely wrong but coincidentally passes some tests. Arguably, a more informative way of assessing the model's coding skills is to compare its output with the correct solution and grade it based on how well it matches the expected logic. This is similar to how humans are evaluated in coding interviews, where the interviewer does not only run the code but also examines the reasoning and the quality of the solution.

To evaluate candidate solutions, we therefore adopt the approach of using GPT-4 to grade the solution (as in Eldan & Li (2023)). This approach has two distinct advantages: (1) by using GPT-4 as a grader, we can leverage its knowledge and generative abilities to obtain a more fine-grained and meaningful signal of the student model's coding capabilities, and (2) it obviates the need for tests.³ Our prompt instructs the LLM to evaluate a student's solution first with a short verbal evaluation, followed by a grade from 0 to 10.

See Table 2 for our results with phi-1 and competing models. The grades on our new unconventional problems give the same ranking as HumanEval (see Table 1). phi-1 again achieves a score significantly higher than StarCoder, as it did on HumanEval. Given that the new problems have had no chance to contaminate the training data and, furthermore, were designed to be outside the training distribution, these results greatly increase our confidence in the validity of phi-1's performance.

³ Developing rigorous sets of tests can be a significant undertaking, as demonstrated by Liu et al. (2023).

| Model | Size | Train tokens | Score | HumanEval |
|------------------------|--------|--------------|-------|-----------|
| CodeGen-Mono-350M | 350M | 577B | 19% | 13% |
| CodeGen-Mono-16.1B | 16.1B | 577B | 38% | 29% |
| Replit | 2.7B | 525B | 37% | 22% |
| StarCoder | 15.5B | 1T | 51% | 34% |
| phi-1-base | 1.3B | 7B | 37% | 29% |
| phi-1-small | 350M | 7B | 45% | 45% |
| phi-1 | 1.3B | 7B | 52% | 51% |

Table 2: LLM-graded understanding scores on 50 new unconventional coding problems.

5 DATA PRUNING FOR UNBIASED PERFORMANCE EVALUATION

In Figure 1, we see that training on CodeExercises leads to a substantial boost in the performance of the model on the HumanEval benchmark. To investigate this boost, we propose to prune the CodeExercises dataset by removing files that are "similar" to those in HumanEval. This process can be viewed as a "strong form" of data decontamination. We then retrain our model on such pruned data, and still observe strong performance on HumanEval. In particular, even after aggressively pruning more than 40% of the CodeExercises dataset (this even prunes files that are only vaguely similar to HumanEval, see Appendix C), the retrained phi-1 still outperforms StarCoder.

We believe that such a data pruning experiment is a fair way to evaluate performance, and is more insightful than standard "contamination" studies in the literature that are usually based on measures of overlap between training and test data (e.g., Section 4.8 of Austin et al. (2021)). For the sake of completeness, we start this section by conducting a standard contamination experiment, which shows that CodeExercises is not contaminated by HumanEval in this standard sense.

5.1 N-GRAM OVERLAP

N-gram overlap measures the similarity of text segments based on shared n-word sequences; a minimal sketch of such an overlap check is shown below.
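The following is our own illustrative sketch of a word-level n-gram overlap check, not the paper's actual decontamination code; the whitespace tokenization and the default n = 13 are assumptions chosen to mirror the 13-gram analysis that follows.

```python
def ngrams(text: str, n: int) -> set:
    """Return the set of word-level n-grams occurring in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def has_ngram_overlap(doc_a: str, doc_b: str, n: int = 13) -> bool:
    """True if the two documents share at least one word-level n-gram."""
    return bool(ngrams(doc_a, n) & ngrams(doc_b, n))
```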
We calculate the n-gram overlap between the docstrings of each HumanEval question and each exercise in the generated CodeExercises dataset. We found 4 HumanEval questions with a 13-gram overlap with at least one entry in our dataset. Upon further investigation, we found that all 4 of these 13-gram overlap cases are false positives (see examples shown in Appendix C).

5.2 EMBEDDING AND SYNTAX-BASED SIMILARITY ANALYSIS

As we just saw, n-gram overlap is not refined enough to find similar code snippets between HumanEval and CodeExercises. Instead, we use a combination of embedding and syntax-based distances. For the embedding distance, we compute the L2 distance between the embeddings of the code snippets, where the embedding is derived from a pre-trained CodeGen-Mono 350M model (Nijkamp et al., 2023b). We observe that the embedding distance is successful in capturing code pairs where the overall code semantics are similar, which can be inferred via the Python docstring, function/class names, as well as the code structure. For the syntax-based distance, we calculate the (string) edit distance between the abstract syntax trees (ASTs) of two given code snippets (a minimal sketch of such an AST-based match rate is given after Table 3). The AST distance successfully identifies overlapping sections between code pairs while being agnostic to non-syntax text such as variable/function naming, comments, and Python docstrings. See Appendix C for examples of code pairs that are captured at various $\tau$ and embedding distances.

For our pruning experiments on CodeExercises, we fix a threshold for the embedding distance, and we test several match rates $\tau$ for the AST distance. We vary $\tau$ between 0.95 and 0.8, which corresponds to pruning 4% to 40% of the problems in CodeExercises, respectively. Table 3 summarizes the performance of our retrained phi-1 on the pruned datasets (with $\tau = 0.95, 0.9, 0.85$ and $0.8$) versus the original phi-1 trained on full CodeExercises and the 15.5B-parameter StarCoder-Prompted. We divide the HumanEval problems into two subsets ("similar" and "non-similar") based on whether or not they have at least one close match (for this given $\tau$) inside the original CodeExercises dataset. We then report the accuracy of the models on each subset of HumanEval separately. As one can see, even after heavily pruning our dataset, phi-1 still outperforms StarCoder-Prompted by a large margin, which validates that our performance boost is not due to dataset "contamination", even when the latter term is understood loosely.

| $\tau$ | Subset | Problem count | phi-1 | phi-1 retrained on pruned data | StarCoder-Prompted (Li et al., 2023) |
|-------|-------------|---------------|-------|-------------------------------|----------------------------------|
| 0.95 | similar | 71 | 81.7% | 74.6% | 57.7% |
| | non-similar | 93 | 26.9% | 32.3% | 29.0% |
| | total | 164 | 50.6% | 50.6% | 41.5% |
| 0.9 | similar | 93 | 63.4% | 51.6% | 48.4% |
| | non-similar | 71 | 33.8% | 36.6% | 32.4% |
| | total | 164 | 50.6% | 45.1% | 41.5% |
| 0.85 | similar | 106 | 62.3% | 52.8% | 47.2% |
| | non-similar | 58 | 29.3% | 34.5% | 31.0% |
| | total | 164 | 50.6% | 46.3% | 41.5% |
| 0.8 | similar | 116 | 59.5% | 52.6% | 45.7% |
| | non-similar | 48 | 29.2% | 27.1% | 31.2% |
| | total | 164 | 50.6% | 45.1% | 41.5% |

Table 3: Percentage of similar versus non-similar HumanEval problems correctly solved by different models. Similarity is determined based on whether or not the corresponding HumanEval problem has any close matches inside the CodeExercises dataset (for a given $\tau$). The problem count denotes the number of HumanEval problems within each subset. Here, $\tau$ is the threshold on the AST-based match rate between codes for the similarity check.
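As a rough illustration of the syntax-based similarity described above, here is a sketch of an AST match rate; it is our own simplification (identifiers are normalized away and a `difflib` ratio stands in for the edit distance), not the paper's exact implementation.

```python
import ast
import difflib

class NormalizeNames(ast.NodeTransformer):
    """Replace identifiers so the comparison ignores naming choices."""
    def visit_Name(self, node: ast.Name) -> ast.Name:
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

    def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
        self.generic_visit(node)
        node.name = "_"
        return node

def ast_match_rate(code_a: str, code_b: str) -> float:
    """Similarity ratio in [0, 1] between the normalized ASTs of two snippets."""
    dumps = [ast.dump(NormalizeNames().visit(ast.parse(src))) for src in (code_a, code_b)]
    return difflib.SequenceMatcher(None, dumps[0], dumps[1]).ratio()
```

Under this reading, a HumanEval/CodeExercises pair would count as "similar" whenever `ast_match_rate` exceeds the chosen $\tau$.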
6 CONCLUSION

Just as a comprehensive, well-crafted textbook can provide a student with the necessary knowledge to master a new subject, our work demonstrates the remarkable impact of high-quality data in honing a language model's proficiency in code-generation tasks. By crafting "textbook quality" data, we were able to train a model that surpasses almost all open-source models on coding benchmarks such as HumanEval and MBPP despite being 10x smaller in model size and 100x smaller in dataset size. We hypothesize that such high-quality data dramatically improves the learning efficiency of language models for code, as it provides clear, self-contained, instructive, and balanced examples.

A number of limitations of our model remain compared to larger models for code. Firstly, phi-1 is specialized in Python coding, which restricts its versatility compared to multi-language models. Secondly, phi-1 lacks the domain-specific knowledge of larger models, such as programming with specific APIs or using less common packages. Lastly, due to the structured nature of the datasets and the lack of diversity in terms of language and style, phi-1 is less robust to stylistic variations or errors in the prompt (for instance, its performance substantially degrades with grammatical mistakes in the prompt). We expand on these limitations and other failure modes of phi-1 in Appendix B. None of these limitations seem fundamental, and with more work our approach could be used to tackle each one of them, although it is unclear what scaling might be necessary to overcome them (for both the model size and the dataset size). We also believe that significant gains could be achieved by using GPT-4 to generate the synthetic data instead of GPT-3.5, as we noticed that GPT-3.5 data has a high error rate. It is interesting that phi-1 is able to achieve such high coding proficiency despite those errors (a similar phenomenon was observed in Allen-Zhu & Li (2023), where a language model can be trained on data with a 100% error rate and still generate correct answers at test time).

More generally, our work provides evidence that developing a good methodology for creating high-quality datasets is a central direction of research for advancing natural language processing and related fields (see also Jung et al. (2023) for further evidence). However, creating high-quality datasets is not a trivial task, and it poses several challenges that need to be addressed. One challenge is to ensure that the dataset covers all the relevant content and concepts that one wants the model to learn, and that it does so in a balanced and representative way. Another challenge is to ensure that the dataset is truly diverse and non-repetitive, so that the model does not simply overfit to the data or memorize specific patterns or solutions. This requires finding ways to inject randomness and creativity into the data generation process, while still maintaining the quality and the coherence of the examples. Moreover, even after creating such datasets, we lack a good methodology to measure and evaluate the amount of diversity and redundancy in the data. For example, if we have a dataset with coding exercises, it is hard to determine how many different variations of each exercise exist, and how they are distributed across the dataset.
Finally, as language models themselves will be used to curate data for future language models, this further increases the urgency of addressing the ethical and social implications of training such models, such as the accountability, the transparency, and the bias of the data and the models involved in this process.

REFERENCES

Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. SantaCoder: don't reach for the stars! arXiv preprint arXiv:2301.03988, 2023.

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 1, context-free grammar. arXiv preprint arXiv:2305.13673, 2023.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Mohammad Bavarian, Heewoo Jun, Nikolas Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255, 2022.

Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623, 2021.

Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models, 2022. URL https://arxiv.org/abs/2204.06745.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901, 2020.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré.
FlashAttention: Fast and memory-efficient exact attention with IO-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.

Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. AlpacaFarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023.

Ronen Eldan and Yuanzhi Li. TinyStories: How small can language models be and still speak coherent English? arXiv preprint arXiv:2305.07759, 2023.
WRxCuhTMB2
The paper has an entire section dedicated to training epistemic injections, but this is never mentioned before as a part of their contribution or utility. The beginning of the paper makes it seem like they offer an evaluation of modern methods, but this along with the meta-model seem to be proposing training procedures?
EVALUATION METHODOLOGY FOR DISENTANGLED UNCERTAINTY QUANTIFICATION ON REGRESSION MODELS

Anonymous authors
Paper under double-blind review

ABSTRACT

The lack of an acceptable confidence level associated with the predictions of Machine Learning (ML) models may inhibit their deployment and usage. A practical way to avoid this drawback is to enhance these predictions with trustworthiness and risk-aware add-ons such as Uncertainty Quantification (UQ). Typically, the quantified uncertainty mainly captures two intertwined parts: an epistemic uncertainty component linked to a lack of observed data and an aleatoric uncertainty component due to irreducible variability. Several existing UQ paradigms aim to disentangle the total quantified uncertainty into these two parts, with the aim of distinguishing model irrelevance from high uncertainty-level decisions. However, few of them delve deeper into evaluating the disentanglement result, and even fewer do so on real-world data. In this paper, we propose and implement a methodology to assess the effectiveness of uncertainty disentanglement through benchmarking of various UQ approaches. We introduce indicators that allow us to robustly assess the decomposition feasibility in the absence of ground truth. The evaluation is done using an epistemic variability injection mechanism on four state-of-the-art UQ approaches based on ML models, on both synthetic and real-world gas demand datasets. The obtained results show the effectiveness of the proposed methodology for better understanding and selection of the relevant UQ approach. The corresponding code and data can be found in the GitHub repository.

1 INTRODUCTION AND RELATED WORK

**Introduction** Complex systems (such as factories [16], transport [27], or electricity networks [5]) are now equipped with multiple sensors allowing subsystem characterization and global operational monitoring. To process massive amounts of dynamic data, the deployment of AI-based monitoring models becomes mandatory, raising a critical question: can we trust them? This work takes place in the context of trustworthy AI and risk-aware Machine Learning (ML) predictions based on Uncertainty Quantification (UQ). In this respect, AI-based systems must be enhanced by an uncertainty management framework [2, 12], paving the way for their certification through risk-based decision-making [9].

In the field of ML-UQ, several paradigms claim to produce models able to separate and quantify two distinct components contributing to the total uncertainty [3, 18, 10, 8]. Epistemic uncertainty expresses the irrelevance of a model facing an atypical input, and aleatoric uncertainty expresses irreducible intrinsic variability in a model decision [15]. These two components are defined for and in a given modeling scope, resulting from the methodological choice of both observed features and model type, which depends on the predictive task's characteristics. Setting up this modeling scope draws the line between upstream sources of irreducible uncertainty (e.g., stemming from the studied phenomena, measurement imprecision, unobserved hidden variables, or limiting model hypotheses) and reducible sources of uncertainty (e.g., out-of-distribution data or lack of observations). According to a common view in the ML-UQ community [15, 18, 8], disentangled Uncertainty Quantification (dUQ) approaches are often composed of two parts. The first is an explicit or implicit ensemble of submodels, each one providing a prediction and an aleatoric estimation.
The second is a metamodel synthesizing the ensemble outputs and producing an epistemic confidence level based on the variability of ensemble decisions. To highlight the meaning of this distinction, we ask the following question: "for a given modeling scope, can we reduce the variability of predictions by enhancing the quantity and/or quality of observed data?"

However, dUQ faces both technical and methodological difficulties. On real data with noisy and limited observations, the epistemic and aleatoric uncertainty components are strongly entangled [15, 9]. Moreover, there is no ground truth allowing evaluation of the quantified uncertainty, let alone of its decomposition. In this regard, our contribution addresses this methodological challenge, related to the robust evaluation of model epistemic confidence despite the absence of ground truth in real data. We first compare recent works on UQ regression in ML, then propose a dUQ evaluation methodology based on a novel epistemic variability injection mechanism, aiming to exhibit or disprove the effectiveness of aleatoric and epistemic uncertainty disentanglement.

**Related Works** Multiple survey papers are dedicated to UQ in ML [2, 12] and focus on three main UQ paradigms. The Bayesian formalism [6, 28] is widely used to develop probabilistic methods for UQ. As exact Bayesian inference is intractable, multiple approximations have been proposed in the literature; Monte Carlo Dropout (MCDP) is one of the recent attempts to estimate forecast uncertainty using dropout in neural networks [11, 28]. Ensemble models [17] are widely used for uncertainty estimation due to their simple implementation; the uncertainty can be measured through the prediction confidence of the ensemble members. The well-known ensemble approach Random Forests (RF) can be used to estimate uncertainty indicators based on the total variance theorem [19, 26]. Finally, Evidential Deep Learning (EDL) [25] learns a distribution over the parametric space of model outcomes and collects evidence regarding the model predictions. Recently, the evidential formalism has been adapted for regression problems in [3] (see Section B of the Appendix).

We then summarize the recent literature through three main categories of characteristics: features, problem support, and environment configuration (see Table 1). Each of these categories has its own set of criteria, allowing a better analysis of each approach. The last row illustrates our contribution through the proposed dUQ evaluation and the benchmark carried out with it.
**Table 1: Summary of UQ approaches and the characteristics covered by the proposed benchmark framework.**

| Methods | UQ paradigm | Prior | UQ decomposition | Regression | Classification | Dataset | Evaluation criteria | Interpretation | Baselines |
|------------------|-------------|-------|-----------------|------------|----------------|---------|---------------------|---------------|----------|
| MCDP | Drop-out | No | No | Yes | Yes | Public / Synthetic (diverse) | NLL / RMSE | Local | Yes (diverse) |
| BNN-IV | Bayesian | Yes | Yes | Yes | No | Public / Synthetic (diverse) | NLL | Local | Variants |
| DeepSTUQ | Variational | Yes | Yes | Yes | No | Public | NLL / Coverage | Partial | Yes (diverse) |
| DeepEnsemble | Ensemble | No | No | Yes | Yes | Public / Synthetic (stationary) | Sharpness / NLL / RMSE | Qualitative | Yes (diverse) |
| AutoDEUQ | Ensemble | No | Yes | Yes | No | Public / Synthetic (stationary) | NLL / RMSE | No | Yes (diverse) |
| PI | Ensemble | No | No | Yes | No | Public / Synthetic (stationary) | Coverage | No | Variants |
| Rahman | Covariance | Yes | Yes | Yes | No | Public / Synthetic (time series) | Relative error | No | Variants |
| EDL | Evidential | Yes | No | Yes | No | Public / Synthetic (stationary) | NLL / RMSE | Partial | Yes (diverse) |

Color codes (green: satisfying, orange: partial, red: ignorance)

From Table 1, we can observe a diverse range of paradigms (from drop-out to ensemble and evidential), each having its own characteristics. Some consider the distinction between two sources of uncertainty in their modeling, whereas others output a unique quantity representing the total uncertainty. The regression problem is mostly addressed, and the adaptation to classification is straightforward. Furthermore, most approaches are evaluated on synthetic or small public datasets, with some limitations in the evaluation methodology. Firstly, those datasets do not reflect the underlying complexity of real-world data due to inherent noise from heterogeneous sources (human, intrinsic, missing knowledge, etc.). Additionally, there is a lack of interpretation and behavioral analysis of uncertainty components during the evaluation (in particular regarding the epistemic part), which often focuses on a global Negative Log-Likelihood (NLL) metric. The influence and appropriateness of evaluation metrics for UQ models are also studied in the literature [13, 4, 20], based on the use of reliability diagrams (or reliability based on the local fit and density principles) and calibrated measures for comparison of various approaches. However, these uncertainty metrics are not compared across various UQ paradigms.

In this article, we seek to bridge this gap by introducing a dUQ evaluation methodology providing a comprehensive benchmark of various typologies of UQ models. To this end, we apply our methodology to four standard ML approaches, namely Monte Carlo Dropout (MCDP), Deep Ensemble, Random Forest disentangled Uncertainty Quantification (RF-dUQ), and EDL, using both real and synthetic datasets.

2 DISENTANGLED UNCERTAINTY QUANTIFICATION MODELING FRAMEWORK

Before presenting how dUQ approaches work, we provide technical details concerning uncertainty sources in the context of a regression task, and how a model captures these as aleatoric or epistemic. Let us consider a modeling framework in which random variables (denoted \( \varepsilon \)) are linked to specific uncertainty sources.
Here, time series forecasting (the type of data on which this work was undertaken) is treated as a regression problem based on time-dependent features. In this context, a model \( \hat{f} \) aims to predict the nominal behavior of variables of interest represented by univariate/multivariate time series \( Y = (y_1, \ldots, y_t, \ldots, y_T) \). The forecast at time step \( t \) for the variable \( y_t \) is based on a vector of observed variables \( x_t \) composed of both exogenous variables \( c_t \) and lagged response variables \( Y_{t-1}^{past} = (y_{t-1}, y_{t-2}, \ldots) \), as well as some latent variables \( h_t \):
\[
y_t = f(x_t) + \varepsilon_t^u \quad \text{with} \quad \varepsilon_t^u \sim \mathcal{N}\big(0, \sigma_t^u(x_t, h_t)\big); \qquad x_t = [c_t, Y_{t-1}^{past}],
\]
with \( f(x_t) \) the average explainable signal, and \( \varepsilon_t^u \) a time-dependent Gaussian noise (local homogeneity assumption). The latter is associated with upstream irreducible variability, encompassing intrinsic and measurement noises as well as pre-modeling noise arising from the limits of the modeling scope (e.g., due to the influence of hidden variables \( h_t \) that cannot be captured through the lagged temporal variables \( Y_{t-1}^{past} \)).

The ML model \( \hat{f}_\theta \) aims to approximate the explainable part \( f \) of the target \( y \) using the observed variables \( x \) from a training set \( D_\theta = (x_1, y_1), \ldots, (x_n, y_n) \), a subset of the dataset \( \mathcal{D} \). Here \( \theta \) is the set of parameters obtained using the training set \( D_\theta \), and \( \Theta \) denotes the whole set of parameters linked to all subsets of the dataset \( \mathcal{D} \). According to the bias-variance trade-off (Eq. (1)), we decompose all error sources between \( y_t \) and \( \hat{f}_\theta(x_t) \):
\[
E_\Theta\big[(y_t - \hat{f}_\theta(x_t))^2\big] = \big(E_\Theta[\hat{f}_\theta(x_t)] - f(x_t)\big)^2 + E_\Theta\Big[\big(E_\Theta[\hat{f}_\theta(x_t)] - \hat{f}_\theta(x_t)\big)^2\Big] + E_y\big[(y_t - f(x_t))^2\big]
\]
\[
= \underbrace{\big(f_\Theta^*(x_t) - f(x_t)\big)^2}_{\text{Bias}^2} + \underbrace{E_\Theta\Big[\big(f_\Theta^*(x_t) - \hat{f}_\theta(x_t)\big)^2\Big]}_{\text{Variance}} + \underbrace{\sigma_t^u}_{\text{Intrinsic variability}},
\tag{1}
\]
with \( f_\Theta^* = E_\Theta[\hat{f}_\theta(x_t)] \) the average function given the distribution \( \Theta \).

Among the three above-mentioned error sources, the variance can be explained by a noise \( \varepsilon_t^\theta \) corresponding to the gap between the average function over \( \Theta \) and the ML model: \( \varepsilon_t^\theta = f_\Theta^*(x_t) - \hat{f}_\theta(x_t) \). This epistemic noise is related to insufficient observations and could be reduced by gathering more data. The bias requires another random variable \( \varepsilon_t^\Theta \), linked to the gap between the average explainable signal and the average function over \( \Theta \): \( \varepsilon_t^\Theta = f(x_t) - f_\Theta^*(x_t) \). This noise, due to the modeling constraint over \( \Theta \), is irreducible in the modeling scope. Finally, the intrinsic variability is related to the irreducible noise \( \varepsilon_t^u \) that appears upstream of the modeling scope. It quantifies a lower bound for the expected error on test data with both infinite data and unconstrained modeling. To show the relation between the introduced random variables and the epistemic/aleatoric concepts, we inject them into the total uncertainty law [8] in Eq. (2).
After simplification, allowed by strong independence and zero-mean assumptions:
\[
y_t = \hat{f}_\theta(x_t) + \varepsilon_t^\theta + \varepsilon_t^\Theta + \varepsilon_t^u \quad \text{and} \quad \sigma(y_t|x_t;\Theta) = \sigma_\Theta\big[E_y(y_t|x_t;\theta)\big] + E_\Theta\big[\sigma_y(y_t|x_t;\theta)\big] = \sigma_t^E + \sigma_t^A.
\]
We obtain
\[
\sigma_t^E = \sigma_\Theta\big[\hat{f}_\theta(x_t)\big] = E_\Theta\big[(\varepsilon_t^\theta)^2\big], \qquad \sigma_t^A = \sigma_y(\varepsilon_t^u) + \sigma_y(\varepsilon_t^\Theta).
\tag{2}
\]

From these equations, we can see that the decomposition into epistemic and aleatoric components (denoted by the \( E \) and \( A \) superscripts) requires the manipulation of the whole set of parameters \( \Theta \). As expected, the epistemic part is essentially made up of the variance error caused by the sampling of the training set. However, the aleatoric part contains several quantities that are all irreducible in the modeling scope but may be associated with different sources: the upstream modeling scope (intrinsic, measurement, and pre-modeling noise), and model constraints, which also cause bias. When we move slightly outside the domain of validity of the assumptions (due to limited training data and approximate manipulation of \( \Theta \)), the previously negligible terms can then blur the uncertainty decomposition.

Figure 1: Illustration of a metamodel using Gaussian aleatoric and epistemic assumptions.

**Proposed unified dUQ framework:** The functional scheme of the proposed dUQ framework incorporating various UQ paradigms is shown in Fig. 1. It is based on a metamodel \( M^\Theta \) that learns and manipulates diverse submodels \( \hat{f}_\theta \) to combine their inferences. The learning phase aims to capture the explainable variability and estimate the irreducible variability while exploring a diversity of submodel candidates \( \Theta \). To ensure diversity and avoid submodel redundancy, a variability infusion mechanism (depending on the UQ paradigm) is needed during the learning phase. At the inference step, the estimated submodels produce a local regression \( \hat{f}_\theta(x_t) \) and an estimation of the aleatoric variability \( \hat{\sigma}^a(\hat{f}_\theta(x_t)) \). Furthermore, an epistemic variability \( \hat{\sigma}_t^e \) is produced by computing the variability of the submodel regressions \( \hat{y}_t \) (using, for example, a Gaussian assumption). Finally, the metamodel provides a *risk-aware forecast* comprising three indicators, \( \mu_t, \sigma_t^a, \sigma_t^e \), expressing the forecast and its aleatoric and epistemic uncertainties. As can be seen in Fig. 2, these indicators correspond to three independent axes describing how the model perceives the data with respect to the forecast and the sources of uncertainty. We can use them to design confidence intervals, error margins, or warnings highlighting a lack of model confidence; a minimal sketch of such a combination step is given below.
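As an illustration only (our own sketch, not the paper's implementation), the following shows how a generic ensemble-based metamodel could combine submodel outputs into the three indicators, following the total-variance decomposition of Eq. (2) under Gaussian assumptions:

```python
import numpy as np

def metamodel_forecast(mus: np.ndarray, sigmas_a: np.ndarray):
    """Combine submodel outputs into risk-aware forecast indicators.

    mus:      submodel means, shape (n_submodels, n_queries)
    sigmas_a: submodel aleatoric std devs, same shape

    Per Eq. (2): the epistemic variance is the variance of the submodel
    means over the ensemble, and the aleatoric variance is the ensemble
    average of the submodel variances.
    """
    mu = mus.mean(axis=0)                             # forecast mu_t
    sigma_e = mus.std(axis=0)                         # epistemic sigma_e
    sigma_a = np.sqrt((sigmas_a ** 2).mean(axis=0))   # aleatoric sigma_a
    return mu, sigma_a, sigma_e
```

For PNN-DE the submodel outputs would come from the Gaussian heads of the ensemble members, and for MCDP from stochastic forward passes; this usage pattern is our assumption rather than a detail given in the text.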
3 PROPOSED DISENTANGLED UQ EVALUATION METHODOLOGY

Our dUQ evaluation methodology aims to perform a robust evaluation of model epistemic confidence, despite the absence of ground truth in real datasets. As an evaluation criterion that accounts for the dUQ limits due to modeling approximations, we propose the *disentangled Epistemic indicator* (dE-Ind), corresponding to a negative epistemic ratio under total log-likelihoods. It is computed, under the aleatoric and epistemic Gaussian assumptions, from the dUQ model output \( (\mu_t, \sigma_t^a, \sigma_t^e) \) as
$$t_t^e = -\ln\left(1 + \frac{\sigma_t^a}{\sigma_t^e}\right) \quad (\text{dE-Ind})$$

The experimental goal is to highlight an epistemic confidence gap in model predictions between nominal and altered queries (i.e., queries affected by injections of *epistemic uncertainty*). Epistemic variability injections are designed to force the metamodel to extrapolate predictions on altered queries, corresponding to potentially unseen or even inconsistent feature-space locations. In the latter case, the temporal correlation between features holding complex dependences on each other is potentially broken [14], which corresponds to out-of-distribution data. A distinction can then be made between altered queries close to the training domain boundary (almost-normal instances) and altered queries far outside the training domain boundary (abnormal instances). In what follows, we describe our methodology, which consists of two types of epistemic variability injections (Fig. 3), one at the *inference step* and another at the *training step*, leading to different levels of experimental complexity and realism.

Figure 3: Scheme of the proposed dUQ evaluation methodology.

**Inference step injection** uses data replacement to form a kind of robustness attack. To produce altered queries, the most important features are identified (according to the SHAP and SAGE libraries [7]) and their values are replaced by outliers belonging to distribution tails. Data replacements are done using quantile feature distributions on the whole dataset (global) or on a subset of data (local). The number and type of replacements determine the characteristics of the variability injection. Hence, computing the predictions of a pre-trained metamodel \( M^\Theta \) on nominal (\( X_n \)) and altered (\( X_a \)) queries allows us to statistically quantify the epistemic confidence gap due to performing inference on naive synthetic outliers.

**Training step injection** uses a data withdrawal approach. In addition to the pre-trained metamodel (called control), a second instance of the same metamodel (called degraded) is trained on a slightly modified dataset: a selected subset (of neighboring data in the feature space) is ablated from the training data (called the altered subset) by a large portion (98% or 100%). Therefore, a distinction is made between nominal queries \( X_n \) (which belong to unaltered subsets) and altered queries \( X_a \). Here, the epistemic confidence gap is quantified by comparing the control (\( M^c \)) and degraded (\( M^d \)) model predictions on the test parts of the nominal and altered queries. As this is a more complex setup generating more realistic outliers, we designed a robust methodology based on statistical tests. These tests are corrected by a control mechanism accounting for both the shift between the control and degraded models (due to divergence during learning) and the original shift between subsets (due to training set heterogeneity).

Both experimental setups aim to investigate whether the injection of epistemic variability induces a significant shift between the distributions of the dE-Indicator \( \mathcal{D}_I \) predicted by a metamodel for nominal queries \( F(x_n) \sim \mathcal{D}_{In} \) and for altered queries \( F(x_a) \sim \mathcal{D}_{Ia} \); a minimal sketch of the indicator and of such a shift test is given below.
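To make the test concrete, here is a minimal sketch of the dE-Indicator and of a one-sided shift test between nominal and altered queries; it is our own illustration, and the full framework additionally applies the control-model corrections described next.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def de_indicator(sigma_a: np.ndarray, sigma_e: np.ndarray) -> np.ndarray:
    """dE-Ind: negative epistemic ratio under total log-likelihoods.
    Increases (toward 0) as epistemic uncertainty dominates."""
    return -np.log(1.0 + sigma_a / sigma_e)

def epistemic_shift_pvalue(ind_nominal: np.ndarray, ind_altered: np.ndarray) -> float:
    """Wilcoxon-Mann-Whitney test of H0 (identical distributions) against the
    alternative that the altered-query indicators stochastically dominate."""
    return mannwhitneyu(ind_altered, ind_nominal, alternative="greater").pvalue
```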
For the training injection setup, two statistical tests allow quantifying the significance of the dE-Indicator distribution shift with two distinct control measures. The first test, *Model deviation due to injection corrected by the control model deviation* (test 1b in Fig. 3), highlights an epistemic confidence gap between the control and the degraded model for the altered subset, with a more substantial magnitude than for the nominal subset. The second test, *Sample shift due to injection corrected by the control subset shift* (test 2), highlights an epistemic confidence gap between the altered and nominal subsets for the degraded model, with a stronger magnitude than the original gap for the control model. Our statistical framework (see Section D of the Appendix) is based on the Wilcoxon-Mann-Whitney and Wilcoxon signed-rank tests, with the hypothesis \( H_0 \) that the two distributions are identical and the alternative hypothesis that \( \mathcal{D}_{Ia} \) stochastically dominates \( \mathcal{D}_{In} \).

4 EXPERIMENTAL SETTINGS AND RESULTS

The benchmark aims to compare the performances of four metamodel-based approaches from different UQ paradigms applied to univariate time series: Random Forest disentangled Uncertainty Quantification (RF-dUQ) [26], Probabilistic Neural Network Monte Carlo Dropout (PNN-MCDP) [11], Probabilistic Neural Network Deep Ensemble (PNN-DE) [17], and Evidential Deep Learning regression (EDL) [3]. Implementations and datasets are available in the GitHub repository. The developments are performed using standard ML libraries (Scikit-Learn [22] and TensorFlow [1]) on CPUs (see Section E of the Appendix).

Table 2: Test set performance of the nominal setup using our two forecasting datasets.

| Metric | Dataset | MLP | RF-dUQ | PNN-MCDP | PNN-DE | EDL |
|---|---|---|---|---|---|---|
| RMSE (lower is better) | real | 0.22±0.02 | 0.23±0.02 | 0.22±0.02 | 0.22±0.02 | 0.22±0.01 |
| RMSE (lower is better) | synthetic | 0.43±0.01 | 0.43±0.01 | 0.44±0.01 | 0.43±0.01 | 0.44±0.01 |
| NLL* (lower is better) | real | Ø | -0.51±0.06 | -0.53±0.08 | -0.57±0.07 | -0.55±0.08 |
| NLL* (lower is better) | synthetic | Ø | 0.43±0.01 | 0.46±0.01 | 0.40±0.02 | 0.44±0.01 |
| Sharpness* | real | Ø | 0.82±0.01 | 0.81±0.02 | 0.73±0.01 | 0.75±0.02 |
| Sharpness* | synthetic | Ø | 1.78±0.01 | 1.86±0.05 | 1.56±0.03 | 1.80±0.03 |
| Coverage (target: 95.65%) | real | Ø | 94.9±0.8 | 94.9±1.3 | 95.1±1.4 | 94.4±1.7 |
| Coverage (target: 95.65%) | synthetic | Ø | 96.7±0.1 | 96.3±0.1 | 95.0±0.01 | 96.5±0.2 |

*NLL, coverage, and sharpness are meaningless for the MLP model.

Firstly, we compare the performances of the mentioned methods in terms of regression accuracy (Root-Mean-Square Error, RMSE) and UQ relevance (Negative Log-Likelihood, NLL) with six literature approaches over four public datasets (see Table 3).¹ We observe good overall performances of all methods, with slightly better results obtained by AutoDEUQ [10] (an improved PNN-DE using AutoML). Hereafter, we perform our dUQ evaluation on two new forecasting datasets: a real dataset related to gas demand prediction and a synthetic one based on local time-dependent Gaussian distributions.

¹ More information about the methodology and additional results are provided in Section E of the Appendix.

The evaluation process takes place in a standard ML framework with a sequential cross-validation scheme (2 folds and 2 repetitions) to ensure the robustness of the results. It is divided into 3 steps:

1. **UQ regression evaluation on two datasets.** We evaluated our four approaches on a nominal setup (without any data alteration) to ensure comparable accuracy and calibrated variance.
The evaluation is done globally and locally (on several homogeneous subsets) to better account for data heterogeneity effects. We consider two additional metrics to evaluate the relevance of UQ: Sharpness and Coverage (i.e., the size of the confidence interval and the percentage of data falling inside it, respectively). Table 2 shows the competitive performance, in terms of regression accuracy, obtained by the UQ-based approaches compared to a simple Multi-Layer Perceptron (MLP) model without UQ. Each approach obtains a coverage close to the theoretical one, although PNN-DE seems to provide narrower confidence intervals for similar coverage.

Table 3: Comparison of UQ regression performances using RMSE and NLL metrics on public datasets.

RMSE:
| Dataset | PB | MC Dropout | Deep Ens | hyper Ens | DF Ens | AutoDEUQ | RF-dUQ | PNN-MCDP | PNN-DE | EDL |
|---|---|---|---|---|---|---|---|---|---|---|
| Kin8nm* | 0.1 | 0.1 | 0.09 | 0.26±0.0 | 0.09 | 0.06±0.0 | 0.142±0.0 | 0.069±0.0 | 0.067±0.0 | 0.068±0.0 |
| powerplant* | 4.12 | 4.02 | 4.11 | 4.38±0.02 | 4.10 | 3.43±0.08 | 3.69±0.13 | 3.75±0.12 | 3.44±0.12 | 3.56±0.15 |
| protein* | 4.73 | 4.36 | 4.71 | 5.09±0.01 | 4.98 | 3.52±0.02 | 3.60±0.03 | 3.77±0.08 | 3.48±0.08 | 3.57±0.05 |
| yearprediction** | 8.88 | 8.85 | 8.89 | 16.84±0.08 | 9.30 | 7.91±0.04 | 9.25 | 8.75 | 8.71±0.0 | 8.9 |

NLL:
| Dataset | PB | MC Dropout | Deep Ens | hyper Ens | DF Ens | AutoDEUQ | RF-dUQ | PNN-MCDP | PNN-DE | EDL |
|---|---|---|---|---|---|---|---|---|---|---|
| Kin8nm* | -0.9 | -0.95 | -1.2 | 6.89±2.85 | -1.14 | -1.40±0.01 | -0.538±0.02 | -1.293±0.02 | -1.33±0.01 | -1.303±0.02 |
| powerplant* | 2.84 | 2.8 | 2.79 | 5.24±0.72 | 2.83 | 2.66±0.05 | 2.69±0.01 | 2.64±0.01 | 2.55±0.02 | 2.55±0.02 |
| protein* | 2.97 | 2.89 | 2.83 | 21.12±2.52 | 3.12 | 2.46±0.03 | 2.50±0.01 | 2.35±0.05 | 2.06±0.06 | 3.23±0.10 |
| yearprediction** | 3.6 | 3.59 | 3.35 | 7.44±0.08 | 3.58 | 3.22±0.00 | 3.64 | 3.31 | 3.22 | 3.30 |

*5-fold cross-validation. **No cross-validation due to the size of the dataset.

2. **Detailed dUQ evaluation on a training injection experiment on real data.** We propose an experiment based on three subsets of the real dataset sharing homogeneous characteristics in terms of their variance (see Appendix E for details): a low-variability subset, a mid-variability subset, and a high-variability subset. We present the detailed results in Fig. 4 for a training variability injection with the withdrawal of 98% of the mid-var subset. For each approach, the performances of the control and degraded models (denoted by c and d, respectively) are shown for each subset. The control metamodels of each of the four approaches display similar behaviors across all the metrics and subsets. As expected, the models make more errors on the high-var subset and their predictions are less confident (higher NLL), but they still offer satisfying coverage thanks to the local uncertainty estimation.

Figure 4: Local performances for one learning injection setup on real data. Control and degraded models are denoted by c and d. Data are partitioned into three subsets of low, mid, and high variability.

By comparing the control and degraded models, we observe their equivalent performances on the nominal subsets (proof of injection locality). On the contrary, for the altered subset, the injection of variability leads to a loss of accuracy (arrows 1) and an increase in NLL (arrows 2), reflecting a loss of confidence of the degraded model relative to the control model. We observe a decrease in coverage (arrows 3) with a slight increase in sharpness for PNN-MCDP, PNN-DE, and EDL, meaning that the altered subset deviates from the nominal distribution.
However, for RF-dUQ (arrows 4), we observe instead a sharpness increase. The dUQ evaluation is performed through the aleatoric/epistemic sharpness and the dE-Indicator (Fig. 4). Again, the control and degraded models show equivalent performances on the nominal subsets. However, for the altered subset, all the approaches (except EDL) display a significant increase in epistemic sharpness (arrows 5), while there are only slight variations in aleatoric sharpness. The dE-Indicator logically increases (arrows 6), expressing a loss of epistemic confidence due to the epistemic uncertainty injection. However, EDL shows no dE-Indicator increase, suggesting dUQ ineffectiveness in this case.

Finally, we represent the degraded PNN-DE model outputs in the UQ indicator space (introduced in Fig. 2), where each sample is a point whose coordinates are its three predictive indicators. In the left plot (colored by subset), the green points (corresponding to the mid-var altered subset) are positioned higher on the epistemic axis, expressing the degraded model's lack of confidence at the inference step. The right plot (colored by the epistemic confidence difference $\Delta dE$ between the control and degraded models) shows that, contrary to the degraded model, the control model does not express any lack of confidence on the mid-var subset. Indeed, the mid-var region involves higher $\Delta dE$ values, showing that the observed lack of epistemic confidence is caused by the mid-var data withdrawal.

Figure 5: UQ space for the PNN-DE degraded model on real data with mid-var data as the altered subset, colored by variability subset (left) and confidence gap $\Delta dE$ (right).

3. **Synthesis of the evaluation of dUQ effectiveness on all experiments.** In total, including cross-validations, 64 variants of experiments were performed using both injection setups on the real and synthetic data. Using the statistical framework based on dE-Ind distribution shifts, the objective is to determine whether epistemic injections affect the epistemic component and whether their impact on the aleatoric component is significant. For inference step injection experiments (test T1a of Fig. 6), all the approaches successfully expressed a lack of epistemic confidence (large margin above the red dotted line) in the presence of naive outliers. We observe a small impact of the injection strength for all approaches, while the type of injection (local vs. global) does not seem to have a significant impact either.

For training injection experiments, where a positive result must be observed for both tests (T1b & T2 in Fig. 6) to prove dUQ effectiveness, the results are more contrasted between the approaches. PNN-DE and PNN-MCDP show successful results in almost all configurations. RF-dUQ fails on the high-var setup. EDL fails in almost all configurations, illustrating that dUQ is not effective, either due to the intrinsic behavior of the approach or to parameterization issues in spite of hyperparameter optimization. The perturbation of the low-variability subset (low-var-98 and low-var-100) leads to small test scores for all approaches, suggesting difficulties in expressing low confidence on low-variability data, even with few observations. However, some approaches (e.g., PNN-MCDP) still manage to express a loss of confidence even on low-magnitude injections. We also note that the training injection strength (98% vs. 100% removal) does not have a significant impact on dUQ effectiveness.
A potential explanation is that the withdrawn samples have non-removed neighbors that retain part of the supporting information for prediction.

Figure 6: Results of the statistical tests for all experiments. Panels: Test T1a — subset shift due to inference variability injection; Test T1b — subset shift due to training variability injection; Test T2 — model deviation due to training variability injection.

5 CONCLUSION AND PERSPECTIVES

We propose a dUQ evaluation methodology based on epistemic injections at the training or inference step. These two mechanisms are designed to address methodological issues concerning the assessment of epistemic confidence without ground truth on real data. Experiments, performed using four state-of-the-art models and two datasets, demonstrate dUQ relevance and effectiveness on heterogeneous and heteroscedastic data. We show that some models (RF-dUQ, PNN-MCDP, PNN-DE) produce relevant local aleatoric and epistemic indicators on both datasets and succeed in handling naive altered queries. In contrast, others (EDL) show limitations when dealing with trickier outliers, resulting in ineffective dUQ.

**Limitations and perspectives** The current study only considers the regression task on time series, with a Gaussian assumption on the aleatoric and epistemic uncertainties. However, the extension to classification and other types of data is straightforward and within our perspectives. Future work will consider more complex and massive datasets arising from dynamic systems. Moreover, we aim to include more complex architectures (e.g., LSTMs and Transformers) in our framework, go beyond the Gaussian assumption of the UQ formalism, and compare more UQ paradigms using our framework (e.g., Bayesian and variational).

**Broader impact** To implement trustworthy AI in operational conditions, the risk-aware UQ framework will be a crucial part of a reliable chain combining control and certification mechanisms. It could be used along with data-qualification frameworks to ensure dataset viability and meet operational needs, such as complex systems monitoring, or anomaly and distribution drift detection.

REFERENCES

[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: a system for large-scale machine learning. In OSDI, volume 16, pp. 265–283. Savannah, GA, USA, 2016.

[2] Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 2021.

[3] Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. Advances in Neural Information Processing Systems, 33:14927–14937, 2020.

[4] Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. arXiv preprint arXiv:2002.06470, 2020.

[5] Sheraz Aslam, Herodotos Herodotou, Syed Muhammad Mohsin, Nadeem Javaid, Nouman Ashraf, and Shahzad Aslam. A survey on deep learning methods for power load and renewable energy forecasting in smart microgrids. Renewable and Sustainable Energy Reviews, 144:110992, 2021.

[6] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International Conference on Machine Learning, pp. 1613–1622. PMLR, 2015.
[7] Ian C. Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: a unified framework for model explanation. The Journal of Machine Learning Research, 22(1):9477–9566, January 2021. ISSN 1532-4435.

[8] Stefan Depeweg, Jose-Miguel Hernandez-Lobato, Finale Doshi-Velez, and Steffen Udluft. Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning. In International Conference on Machine Learning, pp. 1184–1193. PMLR, 2018.

[9] Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105–112, 2009.

[10] Romain Egele, Romit Maulik, Krishnan Raghavan, Bethany Lusch, Isabelle Guyon, and Prasanna Balaprakash. AutoDEUQ: Automated deep ensemble with uncertainty quantification. In 2022 26th International Conference on Pattern Recognition (ICPR), pp. 1908–1914. IEEE, 2022.

[11] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059. PMLR, 2016.

[12] Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. A survey of uncertainty in deep neural networks. arXiv preprint arXiv:2107.03342, 2021.

[13] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pp. 1321–1330. PMLR, 2017.

[14] Giles Hooker, Lucas Mentch, and Siyu Zhou. Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance. Statistics and Computing, 31(6):82, October 2021. ISSN 1573-1375. doi: 10.1007/s11222-021-10057-z. URL https://doi.org/10.1007/s11222-021-10057-z.
SzV37yefM4
If contrastive decoding improves overall generation quality, it should ideally exhibit some improvement in the results without the presence of CoT in the prompts. Do you have any insights on why this is not happening?
CONTRASTIVE DECODING IMPROVES REASONING IN LARGE LANGUAGE MODELS

Anonymous authors
Paper under double-blind review

ABSTRACT

We demonstrate that Contrastive Decoding – a simple, computationally light, and training-free text generation method proposed by Li et al. (2022) – achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generation, Contrastive Decoding searches for strings that maximize a weighted difference in likelihood between strong and weak models. We show that Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in addition to improvements on a collection of other tasks. Analysis suggests that Contrastive Decoding improves over existing methods by preventing some abstract reasoning errors, as well as by avoiding simpler modes such as copying sections of the input during chain-of-thought. Overall, Contrastive Decoding outperforms nucleus sampling for long-form generation and greedy decoding for reasoning tasks, making it a powerful general-purpose method for generating text from language models.

Figure 1: Contrastive decoding improves reasoning across model scales and reasoning tasks.

Figure 2: Contrastive scoring significantly improves performance on HellaSwag, a standard commonsense reasoning benchmark.

1 INTRODUCTION

Text is generated from large language models (LLMs) in different ways for different tasks. For open-ended text generation tasks, truncated sampling is normally used, as the most likely strings under a model tend to be short and uninteresting (Holtzman et al., 2020). For reasoning problems, greedy decoding is normally preferred, to avoid risking sampling errors. This bifurcation is undesirable; for example, it increases the likelihood of reasoning errors during open-ended generation.

We explore the use of Contrastive Decoding (Li et al., 2022) for solving reasoning problems with LLMs. Contrastive Decoding (CD) searches for strings that maximize a weighted difference in likelihood between a stronger *expert* and a weaker *amateur* model, and was shown to outperform existing methods for open-ended text generation. It achieves this by avoiding undesirable modes of the expert model's distribution, such as short or generic strings, which tend to be the most likely under any model, including the amateur.

Figure 3: CD accentuates what the expert model has learned that the amateur model has not. Results are taken from greedy decoding with a 65B-parameter expert, using $\alpha = 0.1$, $\beta = 0.5$ for CD.

We show that Contrastive Decoding outperforms greedy decoding on reasoning problems. On GSM8K, a widely used benchmark consisting of grade-school word math problems, contrastive decoding improves the performance of various LLaMA models by up to 8 absolute percentage points. This result outperforms LLaMA 2, which has 5 billion more parameters and is trained on 40% more data. On HellaSwag, using the CD objective to rank answers leads LLaMA to outperform all existing models except GPT-4. We find general improvement on arithmetic reasoning and multiple-choice ranking tasks, including on models as large as LLaMA-65B, suggesting that Contrastive Decoding could bring such widespread improvements to much larger models. We also analyze the cause of the improvement from Contrastive Decoding.
Empirically, we find that Contrastive Decoding performs less surface-level copying from the prompt than greedy decoding and misses fewer reasoning steps. This result suggests that, similarly to findings in Li et al. (2022), Contrastive Decoding works by reducing repetitive or other undesirable modes of the model distribution. Our current method yields mixed results for commonsense reasoning tasks and slightly degrades factual retrieval, both trends that encourage further refinement of the method.

Overall, we show that Contrastive Decoding not only substantially improves LLM accuracies on a range of benchmarks, but is also the first generation algorithm to achieve state-of-the-art results in both reasoning and text generation problems. These results allow a more unified method for improving generation from language models across tasks.

2 CONTRASTIVE DECODING

2.1 SIMPLIFIED FORMULATION

The original Contrastive Decoding formulation from Li et al. (2022) explicitly chooses two parameters: $\alpha$ and the intermediate temperature of the amateur distribution $\tau_a$, with the intermediate temperature of the expert fixed at $\tau_e = 1$. We slightly refactor the hyperparameter choice to be more interpretable and simplify the algorithm by working directly in logit space. Let $s^{(i)}_a$ and $s^{(i)}_e$ be the unnormalized scores (logits) assigned to token $i$ by the amateur and expert models, respectively. $\alpha$ is the same hyperparameter as in the original paper: a proportion of the maximum probability assigned by the expert model, with any tokens assigned a lower probability masked out. $\beta$ is a hyperparameter corresponding to the strength of the amateur penalty. We include a leading $(1 + \beta)$ coefficient on the expert logits to decouple the strength of the contrastive penalty from the expected scale of the output logits, cleanly delineating between the contrastive tradeoff and the final sampling temperature. This matches the formulation of DExperts (Liu et al., 2021), with the expert model serving as both the base prior and the steering expert.
\[
V_{\text{valid}} = \{ j \in V : s_e^{(j)} \geq \log \alpha + \max_{k \in V} s_e^{(k)} \}
\]
\[
s_{CD}^{(i)} = \begin{cases} (1 + \beta)s_e^{(i)} - \beta s_a^{(i)} & i \in V_{\text{valid}} \\ -\infty & i \not\in V_{\text{valid}} \end{cases}
\]
A PyTorch implementation for this formulation, as well as the original, can be found in subsection A.1 of the appendix. Our implementation takes three lines of readable code.

2.2 PROBABILISTIC INTERPRETATION

Our implementation of $\alpha$-masking has the same interpretation as in Li et al. (2022), given that the expert temperature is fixed to $\tau_e = 1$. We show the equivalence in Appendix A.2. Further, we can consider the post-softmax probabilities produced by CD as a perturbation of the probabilities predicted by the expert model. Not including $\alpha$-masking, the probability assigned to token $i$ by CD is a normalized adjustment of the probability assigned by the expert model:
\[
p_{CD}^{(i)} \propto p_e^{(i)} \left( \frac{p_e^{(i)}}{p_a^{(i)}} \right)^\beta
\tag{1}
\]
It is therefore clear that as $\beta \to 0$ the contrastive penalty disappears, and as $\beta \to \infty$ the distribution collapses to the argmax of $p_e^{(i)}/p_a^{(i)}$, which is the original formulation from Li et al. (2022). A sketch of the resulting scoring rule is given below.
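For illustration, here is a minimal PyTorch sketch consistent with the equations above; it is our own rendering and not necessarily identical to the implementation in the paper's Appendix A.1.

```python
import math
import torch

def contrastive_scores(expert_logits: torch.Tensor,
                       amateur_logits: torch.Tensor,
                       alpha: float = 0.1,
                       beta: float = 0.5) -> torch.Tensor:
    """CD scores over the vocabulary for one decoding step.

    Tokens whose expert probability is below alpha times the maximum expert
    probability are masked to -inf; the remaining tokens are scored with
    (1 + beta) * expert_logit - beta * amateur_logit.
    """
    cutoff = math.log(alpha) + expert_logits.max(dim=-1, keepdim=True).values
    scores = (1 + beta) * expert_logits - beta * amateur_logits
    return scores.masked_fill(expert_logits < cutoff, float("-inf"))
```

Greedy CD then takes the argmax of these scores at each step; for the self-consistency experiments below, one would instead softmax and temperature-sample them.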
2.2 Probabilistic Interpretation

Our implementation of $\alpha$-masking has the same interpretation as in Li et al. (2022), given that the expert temperature is fixed to $\tau_e = 1$. We show the equivalence in Appendix A.2. Further, we can consider the post-softmax probabilities produced by CD as a perturbation of the probabilities predicted by the expert model. Not including $\alpha$-masking, the probability assigned to token $i$ by CD is a normalized adjustment of the probability assigned by the expert model:

\[ p_{CD}^{(i)} \propto p_e^{(i)} \left( \frac{p_e^{(i)}}{p_a^{(i)}} \right)^\beta \] (1)

It is therefore clear that as $\beta \to 0$ the contrastive penalty disappears, and as $\beta \to \infty$ the distribution collapses to the argmax of $p_e^{(i)}/p_a^{(i)}$, which is the original formulation from Li et al. (2022).

3 Experiments

3.1 Experimental Setup

**Models.** We use untuned models from the LLaMA 1 family (Touvron et al., 2023) at all scales. Unless otherwise stated, we use an untuned LLaMA-65B as the expert and an untuned, LLaMA-architecture model with 1.5B parameters trained on the same data as the other LLaMA 1 models as an amateur. For one ablation study, we use models from the FLAN-T5 family (Chung et al., 2022).

**Decoding Parameters.** We set $\beta = 0.5$ and $\alpha = 0.1$ for all experiments unless otherwise stated. We use greedy decoding, except for self-consistency experiments for which we sample at $\tau = 0.7$ following Touvron et al. (2023).

**Prompting.** For generation tasks, we use 8-shot chain-of-thought prompting, in line with Touvron et al. (2023). The examples are the same as in LLaMA for tasks contained in that paper, and taken from Wei et al. (2023) for other mathematical tasks.

**Datasets.** Following prior works, we evaluate on a number of datasets. The following tasks measure performance on algebraic word problems: AQuA (Ling et al., 2017), ASDiv (Miao et al., 2021), GSM8K (Cobbe et al., 2021), and SVAMP (Patel et al., 2021). We also evaluate on MATH (Hendrycks et al., 2021b), a larger and more challenging benchmark. For commonsense reasoning, we measure open-ended performance on CommonsenseQA (Talmor et al., 2019) and StrategyQA (Geva et al., 2021). We also evaluate on a battery of multiple-choice reasoning benchmarks: both the easy and challenge splits of the AI2 Reasoning Challenge dataset (Clark et al., 2018), BoolQ (Clark et al., 2019), HellaSwag (Zellers et al., 2019), MMLU (Hendrycks et al., 2021a), PIQA (Bisk et al., 2019), SIQA (Sap et al., 2019), and WinoGrande (Sakaguchi et al., 2019).

3.2 Hyperparameter Selection

Contrastive decoding has three major hyperparameters: the masking ratio $\alpha$, the contrastive strength $\beta$ and the size of the amateur model. We find that results are fairly insensitive to $\alpha$ as long as $\beta$ is reasonably small (below 1); unless otherwise stated we use $\alpha = 0.1$ across experiments.

Next we consider the size of the amateur model. In agreement with Li et al. (2022), we find that performance benefits from smaller amateur models (Figure 4): while a 1.5B-parameter amateur helps reasoning performance, a 7B-parameter amateur harms it. We also examine different types of amateurs; ablation studies show that a partially-trained amateur performs better than a fully-trained one, and that a poorly-prompted expert can be successfully used as an amateur as well (see subsection 4.2).

Finally, we examine the effect of $\beta$. The optimal value depends on the task, but for both generation tasks like GSM8K and multiple-choice ranking tasks like PIQA we find that $\beta = 0.5$ performs well. Setting $\beta$ too high can place too much weight on the contrastive penalty and harm performance, especially with a larger gap between amateur and expert models. $\beta = 0$ corresponds to standard greedy decoding with no contrastive penalty. Results of $\beta$ hyperparameter sweeps can be found in Table 1, Figure 4, Figure 5 and Appendix B. The best result on GSM8K, with LLaMA-65B and $\beta = 0.25$, is 57.7 (Table 1), outperforming PaLM-540B (56.5), LLaMA-2 (56.8) and GPT-3.5 (57.1) (Anil et al., 2023; OpenAI, 2023).

Figure 4: Results on GSM8K with LLaMA-65B as the expert. While a 7B amateur harms performance, a 1.5B amateur helps.
| Expert | $\beta = 0$ | $\beta = 0.25$ | $\beta = 0.5$ | $\beta = 1$ |
|--------|-------------|----------------|---------------|--------------|
| 7B | 10.7 | 11.5 | **13.6** | 11.0 |
| 13B | 17.0 | 21.0 | **22.9** | 20.4 |
| 30B | 35.2 | 40.0 | **43.4** | 42.0 |
| 65B | 51.0 | **57.7** | 56.8 | 44.6 |

Table 1: Results on GSM8K. $\beta = 0.5$ tends to give good results across expert sizes.

3.3 Arithmetic Reasoning

We find that contrastive decoding tends to help on arithmetic reasoning tasks with chain-of-thought prompting; see Table 2 for all results. One exception to this is the MATH dataset, which proves to be challenging for both standard and contrastive decoding. We conjecture that because contrastive decoding amplifies skills that the expert has learned better than the amateur, it cannot help on tasks that are well beyond the expert’s ability.

We also experiment with normalizing the $\alpha$-masked CD scores via softmax, then temperature sampling from the resulting distribution. This permits CD to generate multiple candidate reasoning chains to be used for self-consistency (taking the majority answer) (Wang et al., 2023b). We show that, across both mathematical and commonsense reasoning, CD improves self-consistency performance. OpenAI (2023) evaluates GPT-3.5 5-shot; all others are 8-shot.

Figure 5: Two examples of sweeping through $\beta$ values on multiple-choice reasoning tasks across model scales. Dashed horizontal lines mark performance without contrastive decoding.

Table 2: Results on math generation tasks. Contrastive decoding generally improves performance.

| Model | CD | AQuA | ASDiv | GSM8K | MATH | SVAMP | Average |
|---------|--------|-------|-------|-------|-------|-------|---------|
| 7B | X | 21.0* | 40.2 | 10.7 | 3.0 | 27.3 | 20.4 |
| 13B | X | 18.1* | 49.0 | 17.4 | 4.2 | 39.4 | 25.6 |
| 30B | X | 23.8 | 60.1 | 35.3 | 6.9 | 55.9 | 36.4 |
| 65B | X | 33.3 | 67.2 | 51.0 | 10.6 | 69.1 | 46.2 |
| 65B maj@20 | X | 38.2 | 73.6 | 68.0 | –† | 77.3 | 64.3 |
| 7B | ✓ | 19.0* | 39.7 | 14.3 | 2.9 | 31.5 | 21.5 (+1.1) |
| 13B | ✓ | 16.0* | 52.0 | 22.7 | 3.8 | 43.1 | 27.5 (+1.9) |
| 30B | ✓ | 29.8 | 62.5 | 43.1 | 8.1 | 59.3 | 40.6 (+4.2) |
| 65B | ✓ | 36.9 | 71.9 | 56.8 | 10.3 | 67.8 | 48.7 (+2.5) |
| 65B maj@20 | ✓ | 39.4 | 77.4 | 74.0 | –† | 79.0 | 67.5 (+3.2) |

3.4 Commonsense Reasoning

Results are more mixed for CommonsenseQA and StrategyQA. For both of these tasks, we 8-shot prompt our model and compute the exact match score against the ground-truth answers. We find that contrastive decoding harms performance for smaller models, but that this gap narrows somewhat for the 65B model and evens out when using self-consistency. See Table 3 for full results.

Table 3: CD harms commonsense reasoning with a smaller expert, but performance evens out with a larger expert-amateur gap.

| Model | CD | CSQA | StrategyQA | Average |
|---------|--------|-------|------------|---------|
| 7B | X | 40.0 | 59.2 | 49.6 |
| 13B | X | 60.4 | 64.5 | 62.5 |
| 30B | X | 66.4 | 68.7 | 67.6 |
| 65B | X | 77.5 | 69.5 | 73.5 |
| 65B maj@20 | X | 77.0 | 79.3 | 78.2 |
| 7B | ✓ | 37.3 | 58.3 | 47.8 (-1.8) |
| 13B | ✓ | 58.5 | 65.5 | 62.0 (-0.5) |
| 30B | ✓ | 62.8 | 67.6 | 65.2 (-2.4) |
| 65B | ✓ | 77.1 | 71.5 | 74.3 (+0.8) |
| 65B maj@20 | ✓ | 77.9 | 79.3 | 78.6 (+0.4) |

*In the AQuA task, the model selects one out of five given options. Thus the random baseline is 20%, and results below that threshold are not meaningful.
†Given the size of the dataset and length of generations, we do not evaluate maj@20 on MATH.

3.5 Contrastive Ranking

We further evaluate a contrastive objective as a scoring function to rank answers to multiple-choice questions. These tasks are zero-shot, multiple-choice cloze tasks; instead of open-ended generation, the model scores each potential completion, length-normalizing following Touvron et al. (2023). We find comparable performance across most tasks, with more substantive gains on HellaSwag and ARC-Challenge. Notably, on HellaSwag CD leads LLaMA-65B to score 88.0, which outperforms LLaMA-2 (85.3), GPT-3.5 (85.5) (OpenAI, 2023) and PaLM 2-Large (86.8) (Anil et al., 2023).

Table 4: Results on multiple-choice reasoning tasks. CD generally provides a modest boost.

| $\beta$ | ARC-E | ARC-C | BoolQ | HSwag | PIQA | SIQA | WGrande | MMLU | Avg |
|---------|-------|-------|-------|-------|------|------|---------|------|-----|
| 0.0 | **79.1** | 56.1 | 84.2 | 84.2 | 82.6 | 52.3 | 77.3 | **63.5** | 72.4 |
| 0.5 | 79.0 | 59.5 | **84.3** | 87.4 | **83.1** | 53.3 | 77.8 | 63.4 | **74.9** |
| 1.0 | 76.9 | **59.7** | 84.1 | **88.0** | 82.9 | **53.3** | 76.5 | 63.2 | 74.5 |

4 Additional Studies

4.1 Effects of Contrastive Decoding

**CD is worse at arithmetic but better at logical reasoning.** We conduct a manual error analysis of 100 randomly selected examples from the GSM8K set, comparing continuations from greedy decoding and CD ($\beta = 0.5$, $\alpha = 0.1$). We follow Wang et al. (2023a) and categorize wrong answers as primarily being due to an arithmetic error, a missing step or a semantic misunderstanding. We add one category of “degeneration,” chosen when the model lapses into excessive repetition. Our small-scale analysis finds that CD makes more arithmetic errors, but that this is offset by better semantic reasoning and fewer missing steps (see Table 5).

Table 5: Proportion of errors in a set of 100 GSM8K questions. CD makes more arithmetic errors, but omits fewer steps and avoids semantic misunderstandings.

| CD | Arithmetic | Missing Step | Semantic | Degeneration | Total Errors |
|----|------------|--------------|----------|--------------|--------------|
| ✗ | 4% | 22% | 24% | 4% | 54% |
| ✔ | 8% | 20% | 21% | 3% | 52% |

To further explore the claim that the benefit of CD does not stem from arithmetic evaluation, we generate a toy dataset of 10,000 multiplication and subtraction equations with operands up to four digits and then 8-shot prompt models to complete the expression, measuring exact match accuracy. We find that CD does not improve performance on this task, and in fact may degrade it slightly. Results are shown in Table 6.

Table 6: High-level generation statistics from sampled generations on GSM8K. Responses are similar lengths, despite the performance improvement from CD.

| | Standard | CD |
|------------------|----------|------|
| Correct % | 44.6 | **51.1** |
| Parseable % | 95.2 | **95.6** |
| Average # chars | 215.2 | 217.2 |

Figure 6: CD reduces copying from the question in the generated Chain of Thought, as measured by n-gram overlap on GSM8K generations.

**CD reduces copying from the prompt.** We analyze 26,000 sampled generations from CD-sampling on GSM8K against the corresponding set from temperature sampling; both of these sets of generations are used in our self-consistency study. We find that responses are roughly the same length and follow the few-shot template roughly the same proportion of the time.
This rules out the hypothesis that contrastive decoding simply leads the model to follow the template better, prevents degeneration or induces longer answers with more reasoning steps. Further, we run an automatic evaluation of greedy generations using ROSCOE (Golovneva et al., 2022) but do not find significant differences in any of these metrics. However, we measure the precision and recall of the tokens in the prompt by the sampled generations and find that CD systematically reduces token-level copying from the prompt. This may be related to increased reasoning ability, as surface-level copying from the prompt does not provide new information to the problem.

**CD can harm factual recall.** Our primary claim is that contrastive decoding improves chain-of-thought reasoning. However, we also test CD on two pure factual-recall tests that do not utilize chain-of-thought: OpenBookQA (Mihaylov et al., 2018) and TriviaQA (Joshi et al., 2017). OpenBookQA (“OBQA”) is a multiple-choice completion task, while TriviaQA is a 5-shot generation task. Reusing the same setup from reasoning leads to a slight degradation of performance, as seen in Table 7.

Table 7: CD slightly degrades performance on factual-recall tasks.

| CD | OBQA | TriviaQA* |
|----|------|-----------|
| ✗ | 60.0 | 72.2 |
| ✔ | 57.8 ±(2.4) | 69.9 ±(2.1) |

**CD outperforms other reasoning enhancements in FLOP efficiency.** We note that contrastive decoding introduces relatively little overhead in comparison to other reasoning-enhancing methods. We estimate that with a 1.5B amateur and 65.2B expert, contrastive decoding increases the total number of FLOPs by 3.25% (see section C of the appendix). This compares favorably to self-consistency, which requires several extra full generation loops. We show in Figure 9 that CD is significantly more efficient than self-consistency.

4.2 Ablation Studies

**α-masking alone is not enough.** When sampling and performing self-consistency, α-masking prevents the sampling of tokens the expert finds to be unlikely. It is natural to ask what portion of the benefit comes purely from α-masking and not the contrastive objective itself. To answer this, we set β = 0 but α = 0.1; that is, we mask out candidates based on the expert but do not apply the contrastive objective. When sampling one path, we expect α-masking to improve over temperature sampling alone as it eliminates unlikely results and thus provides a closer approximation to greedy sampling. This holds, but as we increase the number of paths we find no benefit from α-masking alone. This suggests that the contrastive objective, and not α-masking, is the primary source of improved self-consistency results. See Figure 7 for results of this ablation.

**CD requires chain-of-thought prompting to improve results.** We next study whether contrastive decoding provides an advantage in the absence of chain-of-thought prompting. We remove the chains of thought from the GSM8K fewshot prompt, and find that, as expected, performance drops for both standard and contrastive decoding (Figure 8); further, without chains of thought, contrastive decoding provides no consistent improvement. As with the MATH dataset, solving problems without explicit reasoning steps may be too challenging a task for the expert model, and thus leaves too small a gap between the expert and amateur to contrastively exploit.

**CD can benefit non-LLaMA models.** We conduct a short study to show that CD can benefit models outside of the LLaMA family.
For this study, we choose the FLAN-T5 family as it is open-source, has a wide range of model sizes that share a single tokenizer, and obtains good performance on chain-of-thought reasoning tasks. We use FLAN-T5-XXL (11B) as the expert model and FLAN-T5-Small (80M) as the amateur. We evaluate on GSM8K using the 8-shot random prompts from Fu et al. (2023).

---

*On manual examination, we find the set of correct answers provided by TriviaQA to be insufficient. Randomly sampling 100 supposedly incorrect answers generated by CD and standard decoding, we find roughly half are in fact correct (46/100 with CD and 49/100 without). A rough linear extrapolation gives us estimates for non-CD and CD scores of 85.8 and 83.7, respectively.

Figure 7: GSM8K scores via temperature sampling and maj@k with various values of k. α-masking alone does not yield significant improvement, while full CD does.

Figure 8: Comparison of GSM8K scores with LLaMA-65B, both with and without chain-of-thought prompts. CD only helps when using CoT.

We note that GSM8K is within the set of tasks that FLAN-T5 is finetuned on. CD provides a slight boost in performance, as seen in Table 9. We leave more extensive experiments on other families of models to future work.

Figure 9: FLOP increases, with increasing compute from using more samples for self-consistency. CD achieves similar or better performance with a smaller increase in FLOPs.

Table 9: FLAN-T5 performance on GSM8K. CD provides a boost to performance.

**Small-scale amateurs beat “negative prompting.”** We experiment to determine if there is a more effective weak amateur model to use for contrastive decoding. We define a set of “negative prompts” by sampling 7B model outputs on the fewshot prompts and collecting the incorrect responses. We use these responses as fewshot prompts to mimic the failure modes of the family of models. These negative prompts should harm the performance of models they are prompted with, and specifically bias results towards the error distribution of the 65B model.

We find that contrasting with a negative prompt does not harm performance, but does not improve it as much as contrasting with a small amateur (see Table 10). In an ablation study, we find that negative prompting does not harm performance that much; prompting a 65B model with incorrect fewshot examples on GSM8K gives a score of 41.3, which underperforms prompting with correct examples (51.2) but significantly beats non-chain-of-thought prompting (13.5). This supports Wang et al. (2023a), who find that even incorrect chain-of-thought rationales improve reasoning. A prompting strategy that more effectively incapacitates the negatively prompted model might yield better results.

**Mid-training checkpoints make for good amateurs.** We experiment with checkpoints of a mid-training 7B-parameter LLaMA model taken 10% and 23% of the way through the full training run. Even while a fully-trained 7B amateur harms performance on GSM8K, we find that a partially-trained amateur improves performance. We do not perform extensive hyperparameter sweeps here, instead reusing \( \alpha = 0.1, \beta = 0.5 \) as before. We do not pursue partially-trained amateurs for our main results as results may vary based on the order of training data, but this result allows us to interpret contrastive decoding as a first-order optimization step over the output of a model, highlighting the high-level behaviors that it learns later on in the course of training. See Table 11 for full results.
Table 10: On GSM8K, negative prompting outperforms greedy decoding but weakens CD.

| Expert | Greedy | NP | CD | CD + NP |
|--------|--------|----|----|---------|
| 7B | 10.7 | 11.4 | 14.3 | 12.7 |
| 13B | 17.4 | 17.5 | 22.7 | 20.7 |
| 30B | 35.3 | 36.9 | 43.1 | 42.9 |
| 65B | 51.0 | 52.0 | 56.8 | 54.7 |

Table 11: Early-training checkpoints can be good amateurs, even when late-stage checkpoints harm performance.

| Amateur | Amateur Tokens | GSM8K |
|---------|----------------|-------|
| 7B | 130B | 57.0 |
| 7B | 300B | 56.8 |
| 7B | 1.3T | 49.9 |

5 RELATED WORK

**Steering methods for reasoning.** Other works more explicitly model the error distribution of reasoning steps and use this to steer decoding. For example, GRACE (Khalifa et al., 2023) uses a contrastive loss to train an external step-level discriminator, which it then uses to select between candidate steps sampled from a base model. Using the interpretation of contrastive decoding as mutual distinguishability between amateur and expert, we see that our method is close to FUDGE (Yang & Klein, 2021), where the binary predictor is an estimate of the probability that the generated token has come from the expert rather than the amateur.

**Prompting methods for reasoning.** There are many recent prompting methods to improve language model reasoning; see Qiao et al. (2023) for a survey. We perform our experiments with chain-of-thought prompting (Wei et al., 2023).

**Sampling methods.** Several decoding methods exist to improve the quality of generations from large language models. For open-ended generation, truncated sampling schemes like top-\(k\) sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020) and typical sampling (Meister et al., 2023) have been shown to reduce repetition in comparison to greedy decoding and beam search while producing more coherent generations than standard temperature sampling. However, sampling can still introduce errors into logical chains, and so greedy decoding is used to more effectively solve reasoning tasks (Wei et al., 2023; Anil et al., 2023).

**Contrastive generation methods.** Our formulation’s objective can be interpreted as a special case of DExperts (Liu et al., 2021), using the larger model as both an expert and base LM prior. Yona et al. (2023) identify model biases with Contrastive Input Decoding, a contrastive-decoding-style technique similar to negative prompting that operates on perturbed text inputs. Concurrently with our work, Chuang et al. (2023) propose DoLA, which improves factuality and reasoning through contrastive decoding between the predictions of later layers and earlier layers in a language model. We study a wider array of reasoning tasks and demonstrate that a 7B amateur is too large, finding greater gains in reasoning just by scaling down the amateur to 1.5B parameters. Our paper differentiates itself from Li et al. (2022), which initially proposed Contrastive Decoding, in several ways: by testing on standard reasoning benchmarks, by our exploration of \(\beta\) as a hyperparameter, by ablations with various types of amateurs, and by a careful analysis of the combination of Contrastive Decoding with chain-of-thought prompting and self-consistency.

6 LIMITATIONS

Our investigation is limited mainly to the LLaMA family of models. While the method continues to provide benefit to larger LLaMA models, further work is required to definitively establish the effect of contrastive decoding on larger, tuned models.
7 CONCLUSION

Our study shows that contrastive decoding can improve chain-of-thought reasoning in large language models. While challenges like factual recall remain, this strengthens the case for contrastive decoding as a simple, general-purpose method to elicit more desirable behavior from large language models.

REPRODUCIBILITY STATEMENT

The training process and model architecture for the 1.5B-parameter LLaMA model used as the amateur in several results is publicly available, but the weights are not, which limits public reproducibility of results relying on that model. The results on FLAN-T5, as well as the negative-prompting study and examination of 7B-LLaMA as an amateur, are all built on entirely open-source models and data.

REFERENCES

Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, and Zhifeng Chen et al. Palm 2 technical report, 2023.

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language, 2019.

Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. Dola: Decoding by contrasting layers improves factuality in large language models, 2023.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions, 2019.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021.

Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation, 2018.

Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Chain-of-thought hub: A continuous effort to measure large language models’ reasoning performance, 2023.

Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies, 2021.

Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz. Roscoe: A suite of metrics for scoring step-by-step reasoning, 2022.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021a.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset, 2021b.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration, 2020.

Mandar Joshi, Eunsol Choi, Daniel S.
Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension, 2017.
dRel8fuUK4
While reference records are not always a direct input in other membership inference attacks, is there any way to assess whether the competing methods would depend more strongly or weakly on the number of reference records, compared to the new attack?
LOW-COST HIGH-POWER MEMBERSHIP INFERENCE BY BOOSTING RELATIVITY

Anonymous authors
Paper under double-blind review

ABSTRACT

We present a robust membership inference attack (RMIA) that amplifies the distinction between population data and the training data on any target model, by effectively leveraging both reference models and reference data in our likelihood ratio test. Our algorithm exhibits superior test power (true-positive rate) when compared to prior methods, even at extremely low false-positive error rates (as low as 0). Also, under computation constraints, where only a limited number of reference models (as few as 1) are available, our method performs exceptionally well, unlike some prior attacks that approach random guessing in such scenarios. Our method lays the groundwork for cost-effective and practical yet powerful and robust privacy risk analysis of machine learning algorithms.

1 INTRODUCTION

Membership inference attacks (MIA) determine whether a specific data point has been used in training of a model (Shokri et al., 2017). These attacks represent a foundational tool in evaluating the privacy risks of unintentional exposure of information due to training machine learning models on different types of data in a wide range of scenarios. These scenarios encompass diverse settings such as statistical models (Homer et al., 2008; Backes et al., 2016; Sankararaman et al., 2009; Murakonda et al., 2021), machine learning as a service (Shokri et al., 2017), federated learning (Nasr et al., 2019; Li et al., 2023; Jagielski et al., 2023), generative models (Carlini et al., 2021), and privacy-preserving machine learning (Steinke et al., 2023; Nasr et al., 2021; Jagielski et al., 2020).

Membership inference attacks originated within the realm of summary statistics on high-dimensional data (Homer et al., 2008). In this context, multiple hypothesis testing methods were developed to optimize the trade-off between test power and associated errors for relatively straightforward computations (Sankararaman et al., 2009; Dwork et al., 2015; Murakonda et al., 2021). For deep learning algorithms, these tests evolved from using machine learning itself to perform the membership inference test (Shokri et al., 2017) to using various approximations of the original statistical tests (Sablayrolles et al., 2019; Ye et al., 2022; Carlini et al., 2022; Watson et al., 2022b). They also vary based on the assumptions about threat models, as well as the amount of computation needed to tailor the attacks to specific data points and models (e.g., global attacks (Shokri et al., 2017; Yeom et al., 2018) versus per-sample tailored attacks (Ye et al., 2022; Carlini et al., 2022; Sablayrolles et al., 2019; Watson et al., 2022b)), which necessitate training a large number of reference models.

Even though there have been substantial improvements in the effectiveness of attacks, their computational expense has rendered them useless for practical privacy auditing. This is because the auditor would have to dedicate significantly more resources to performing the privacy test than they would to training the model itself. As we demonstrate in this paper, with a constrained computation budget, some of these attacks, e.g., (Carlini et al., 2022), verge on random guessing for well-generalized models. Furthermore, prior state-of-the-art attacks (Ye et al., 2022; Carlini et al., 2022) use seemingly distinct likelihood ratio tests, and beyond empirical assessments, they do not provide a clear, interpretable means of comparison.
Also, as evidenced both in their papers and reproduced in ours, these attacks exhibit mutual dominance: each dominates the other depending on the test scenario, such as variations in the number of reference models. This calls for more robust yet efficient attacks. We design robust attack algorithms that consistently achieve a high TPR on a limited computation budget (specifically, given a few reference models), while maintaining effectiveness across all FPRs, even as small as 0. Our attack (RMIA) dominates prior work in all scenarios.

Figure 1: The performance comparison between our attack (RMIA) and the prior works (including Attack-R (Ye et al., 2022) and LiRA (Carlini et al., 2022)), under computation constraints, with the restriction of using only 1 reference model, for attacking one single model. LiRA approaches random guessing, and RMIA outperforms other attacks throughout the TPR-FPR trade-off curve.

Our method outperforms the prior state-of-the-art attacks Attack-R (Ye et al., 2022) and LiRA (Carlini et al., 2022) across all datasets, by achieving $5-10\%$ higher AUC and a remarkable $2\times$ to $4\times$ higher TPR at low FPRs, when using 2 reference models. When considering just a single reference model, the improvement in AUC reaches $26\%$, compared with LiRA; see Figure 1. When dealing with a few reference models, Attack-R mainly suffers from low TPR at low FPR, while LiRA fails to get a competitive AUC score beyond random guessing. In an offline scenario where the adversary exclusively uses pre-trained reference models, RMIA demonstrates an impressive $28\%$ higher AUC and $3\times$ better TPR at zero FPR compared to LiRA. The offline version of our attack shows performance comparable to online attacks, as we aim to avoid the huge cost associated with training online reference models. We also explore the effects of increasing available resources up to the levels used in the prior works. Even though the other methods show reasonable performance when using a large number (over 250) of reference models, our method still dominates them on benchmark datasets.

Our key innovation lies in designing a novel membership inference attack that effectively incorporates both population data and models trained on it as reference points in our hypothesis test. When modeling the null hypothesis, the prior work mostly computes the average likelihood of the target data point not being included in the training set. To improve the power of the test, we distinguish between the worlds in which the target data point could have been replaced with any random sample from the population. Thus, the optimal adversary strategy calculates the likelihood ratio over the null hypothesis tied to random samples from the population. The computation of the likelihood ratio itself requires comparison with reference models. Essentially, our membership inference test efficiently gauges the multiplicative distance between the probability of the target data point and a random sample from the population, when computed on the target model versus when it is computed from reference models. Thus, our attack is finely calibrated to the interplay between data points and their relative probabilities in relation to models. This enhances the differentiation between member and non-member data points, enabling more precise estimation of test statistics and yielding a more resilient test. A significant aspect of our framework and attack is its foundational nature; prior attacks can be framed as simplifications of ours.
Our interpretation of the prior work, using our framework, reveals the implicit assumptions and approximations in other methods, shedding light on their reduced performance and instability. Through extensive empirical analysis on benchmark datasets, we investigate the impact of varying the number of models, the number of required inference queries, the similarity of reference models to the target model, and the parameters in the attack construction. In all these scenarios, we notice instabilities in the effectiveness of the prior attacks, depending on the settings. For example, in the offline setting or when using a few reference models, Attack-R dominates LiRA, but when using many reference models in the online setting, LiRA outperforms Attack-R. However, even when considering worst-case scenarios, RMIA consistently outperforms prior attacks in all settings.

2 Our Method

Membership inference attack (MIA) algorithms aim to determine whether a specific data point \( x \) was used in the training of a given machine learning model \( \theta \). The concept of a membership inference attack is modeled as an indistinguishability game between a challenger (the algorithm) and an adversary (the privacy auditor) (Ye et al., 2022; Carlini et al., 2022; Yeom et al., 2018). We present the standard MIA game in Definition 1. For a comprehensive understanding of membership inference games, see (Ye et al., 2022), and for their connection to other inference attack games, see (Salem et al., 2023). Essentially, there are two scenarios or worlds. In one world, the model \( \theta \) is trained including \( x \) in the training set, whereas in the other, it excludes \( x \). The adversary is randomly positioned in one of these worlds and tasked with inferring which world he is in, using only the data point \( x \), the trained model \( \theta \), and his background knowledge about the data distribution.

**Definition 1 (Membership Inference Game)** (Shokri et al., 2017; Yeom et al., 2018; Carlini et al., 2022; Ye et al., 2022) Let \( \pi \) be the data distribution, and let \( T \) be the training algorithm.

- The challenger samples a training dataset \( S \sim \pi \), and trains a model \( \theta \sim T(S) \).
- The challenger flips a fair coin \( b \). If \( b = 1 \), it randomly samples a data point \( x \) from \( S \). Otherwise, it samples \( x \sim \pi \), such that \( x \notin S \). The challenger sends the target model \( \theta \) and the target data point \( x \) to the adversary.
- The adversary, having access to the distribution over the population data \( \pi \), outputs a membership prediction bit \( \hat{b} \leftarrow \text{MIA}(x; \theta) \).

A membership inference attack is a hypothesis testing problem that assigns a membership score \( \text{Score}_{\text{MIA}}(x; \theta) \) to every pair of \((x, \theta)\), and outputs a membership bit by comparing the score with a threshold \( \beta \) (Yeom et al., 2018; Carlini et al., 2022; Ye et al., 2022):

\[ \text{MIA}(x; \theta) = 1_{\text{Score}_{\text{MIA}}(x; \theta) \geq \beta} \] (1)

For any given threshold \( \beta \), the adversary’s power, defined as the true positive rate of the attack, and error, or the false positive rate, are quantified over numerous repetitions of this experiment. The threshold \( \beta \) controls how much error the adversary is willing to tolerate. The universal goal for designing membership inference attacks is to maximize the adversary’s power for any false-positive error rate.
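To make the game and the thresholded test concrete, the following is a minimal simulation sketch; `train_fn` and `score_fn` are placeholders for the training algorithm \( T \) and an attack's scoring rule, and all names here are illustrative assumptions rather than any implementation from the paper:

```python
import numpy as np

def play_mia_game(score_fn, train_fn, population, n_trials=1000,
                  train_size=100, seed=0):
    """Simulate the game in Definition 1, collecting (score, secret bit) pairs.

    `population` is an array of record indices sampled from pi; `train_fn`
    maps a training set to a model and `score_fn(x, theta)` returns
    Score_MIA(x; theta). Both are placeholders in this sketch.
    """
    rng = np.random.default_rng(seed)
    scores, bits = [], []
    for _ in range(n_trials):
        S = rng.choice(population, size=train_size, replace=False)
        theta = train_fn(S)                             # theta ~ T(S)
        b = int(rng.integers(2))                        # fair coin
        x = rng.choice(S) if b else rng.choice(np.setdiff1d(population, S))
        scores.append(score_fn(x, theta))
        bits.append(b)
    return np.array(scores), np.array(bits)

def power_and_error(scores, bits, beta):
    """Power (TPR) and error (FPR) of the test Score >= beta (equation 1)."""
    guesses = scores >= beta
    return guesses[bits == 1].mean(), guesses[bits == 0].mean()
```

Sweeping \( \beta \) over the collected scores traces out the power-error trade-off discussed next.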
The (lower-bound) leakage of the algorithm is defined as the power-error trade-off curve (the ROC curve), which is derived from the outcome of game experiments across all values of \( \beta \).

2.1 Designing Our Membership Inference Attack

The key to the design of membership inference attacks is the formulation of the hypothesis test, the construction of the two types of worlds (where \( x \) is a member of the training set of \( \theta \) in one, and is a non-member in the other), and the evaluation of the corresponding likelihood ratio tests. Prior works simplify the construction of the worlds, which results in low-power, unstable, and average-case tests. Here, we design a fine-grained construction of the worlds in the following way. We compose the null hypothesis as the worlds in which the target data point \( x \) is replaced with a random data point \( z \) from the population. Thus, we design many pairwise likelihood ratio tests to test the membership of a data point \( x \) relative to another data point \( z \). To reject the null hypothesis, we need to collect a significant amount of evidence that the probability of \( \theta \) for \( x \) being in the training set is larger than the probability of \( \theta \) when instead a random \( z \) is in the training set. This approach provides a much more fine-grained analysis of leakage, and differentiates between the worlds in which \( x \) is not a member. The likelihood ratio corresponding to the pair of \( x \) and \( z \) is:

\[ LR_\theta(x, z) = \frac{\Pr(\theta|x)}{\Pr(\theta|z)}, \] (2)

where \( \Pr(\theta|.) \) is computed over the randomness of the training algorithm (e.g., SGD). The term \( \Pr(\theta|x) \) is the probability that the algorithm produces the model \( \theta \) given that \( x \) was in the training set, while the rest of the training set is randomly sampled from the population distribution \( \pi \). In the next subsection, we explain the process for computing \( LR_\theta(x, z) \), which requires having access to reference models. Given \( LR_\theta(x, z) \), we formulate the hypothesis test for our novel membership inference attack, which essentially is a test for violation of privacy, as follows:

\[ \text{Score}_{\text{MIA}}(x; \theta) = \Pr_{z \sim \pi} \left( LR_\theta(x, z) \geq \gamma \right) \] (3)

We measure the probability that \( x \) can \( \gamma \)-dominate a random sample \( z \) from the population. The threshold \( \gamma \geq 1 \) determines how much larger the probability of learning \( \theta \) with \( x \) as a training point should be, relative to a random alternative point \( z \), to pass our test.

---

1 Note that both \( S \) and \( x \) can also be selected by the adversary to model the worst-case scenarios as described in the construction of MIA games (Ye et al., 2022) to reflect the maximum leakage corresponding to the differential privacy bound (Dwork et al., 2006). Our problem formulation and the tests can also apply to the cases where the adversary controls sources of randomness in data sampling, but in the main text we focus on data points randomly sampled from the population.

2 Power is the fraction of times \( \hat{b} = 1 \), given \( b = 1 \). Error is the fraction of times \( \hat{b} = 1 \), given \( b = 0 \).

3 The likelihood ratio test is the best technique that the adversary can choose (Sankararaman et al., 2009; Murakonda et al., 2021; Ye et al., 2022; Carlini et al., 2022).
As \( \gamma \) increases, the test looks for evidence of high leakage, but the chance of finding such dominated \( z \) samples decreases (especially when the model has generalized). The standard threshold \( \beta \) in equation 1 specifies that there should be a sufficient fraction of the randomly sampled population data to affirm that \( x \) is a member. By performing the test over \( \beta \in [0, 1] \), we can compute the ROC power-error trade-off curve corresponding to the membership inference attack.

2.2 Computing the Likelihood Ratio

We can apply Bayes' rule to compute the likelihood ratio in equation 2:

\[ LR_\theta(x, z) = \left( \frac{\Pr(x|\theta) \Pr(\theta)}{\Pr(x)} \right) \cdot \left( \frac{\Pr(z|\theta) \Pr(\theta)}{\Pr(z)} \right)^{-1} = \left( \frac{\Pr(x|\theta)}{\Pr(x)} \right) \cdot \left( \frac{\Pr(z|\theta)}{\Pr(z)} \right)^{-1} \] (4)

Here, \( \Pr(x|\theta) \) is the likelihood function of model \( \theta \) evaluated on data point \( x \). In the case of classification models, the loss function is usually the negative log likelihood, so \( \Pr(x|\theta) \) is equivalent to the normalized (softmax) output of the model \( f_\theta(x_{\text{features}}) \) at class \( x_{\text{label}} \) (MacKay, 2003; Blundell et al., 2015). See Appendix A.10 for better alternatives to softmax for computing \( \Pr(x|\theta) \). It is important to note that \( \Pr(x) \) is not the same as \( \pi(x) \), which is rather the prior distribution over \( x \). The term \( \Pr(x) \) is defined as the normalizing constant in Bayes' rule, and has to be computed by integrating over all models \( \theta' \) with the same structure and training data distribution as \( \theta \):

\[ \Pr(x) = \sum_{\theta'} \Pr(x|\theta') \Pr(\theta') = \sum_{D,\theta'} \Pr(x|\theta') \Pr(\theta'|D) \Pr(D) \] (5)

We compute \( \Pr(x) \) as the empirical mean of \( \Pr(x|\theta') \) by sampling reference models \( \theta' \), each trained on random datasets \( D \) drawn from the population distribution \( \pi \). Note that the reference models must be sampled in an unbiased way with respect to whether \( x \) is part of their training data. This is because the summation in equation 5 is over all \( \theta' \), which can be partitioned into the set of models trained on \( x \) (IN models) and the set of models that are not trained on \( x \) (OUT models). See Appendix A.10.3 for the details of computing \( \Pr(x) \) from IN and OUT models in the online attack setting, as well as its approximation in the offline setting where we only have OUT models. The same reasoning and computation process applies to \( \Pr(z) \).

2.2.1 Membership Inference Attack

Given the \( LR_\theta(x, z) \) computation in equation 4, we compute \( \text{Score}_{\text{MIA}}(x; \theta) \) as in equation 3 and finally perform the membership inference test as in equation 1. Definition 2 presents our attack procedure corresponding to the MIA game in Definition 1 (we provide detailed pseudo-code in Appendix A.1). We assume the adversary has access to random samples from the population, and also some reference models.

Definition 2 (Relative Membership Inference Attack – RMIA)

- **Input:** model $\theta$, data point $x$, and test parameters $\gamma$, $\beta$.
- Sample many $z \sim \pi$, and compute $\text{Score}_{\text{MIA}}(x; \theta)$ as the fraction of $z$ samples that pass the relative membership inference likelihood ratio test $\text{LR}_\theta(x, z) \geq \gamma$
(see equation 3).
- Return MEMBER if $\text{Score}_{\text{MIA}}(x; \theta) \geq \beta$, and NON-MEMBER otherwise (see equation 1). A minimal sketch of this procedure appears below.

We can further enhance the effectiveness of our attack by augmenting the MIA query with multiple data samples that are similar to $x$ (Carlini et al., 2022; Choquette-Choo et al., 2021). These data samples can be simple transformations of $x$ (for example, using shift or rotation in case of image data). To consolidate the results in our multi-query setting, we use majority voting on the hypothesis test: $x$ is considered to dominate $z$ if more than half of all generated transformations of $x$ dominate $z$.

2.3 Boosted Relativity of RMIA, and Design Improvements over Prior Attacks

Membership inference attacks, framed as hypothesis tests, essentially compute the relative likelihood of observing $\theta$ given $x$’s membership in the training set of $\theta$ versus observing $\theta$ under $x$’s non-membership (null hypothesis). The key to a robust test is accounting for all information sources that distinguish these possible worlds. Membership inference attacks use references as anchors from the null hypothesis worlds, comparing the pair $(x, \theta)$ against them. Effectively designing the test involves leveraging all possible informative references, which could be either population data or models trained on it. Homer et al. (2008) and its follow-up methods use population data as a reference, while Sankararaman et al. (2009) and its follow-up methods use reference models trained on such data. Recent MIA methods have predominantly focused on using reference models. The way that such reference models are used matters a lot. As we show in our empirical evaluation, prior state-of-the-art attacks (Carlini et al., 2022; Ye et al., 2022) exhibit different behavior depending on the reference models (i.e., in different scenarios, they dominate each other in opposing ways). Also, even though they outperform attacks that are based on population data (e.g., the Attack-P formulation of Homer et al. (2008)) by a large margin, they do not strictly dominate them on all membership inference queries (Ye et al., 2022). They thus fall short due to overlooking some type of relativity. Table 1 summarizes the MIA scores of various attacks.

Our method offers a fresh perspective on the problem. This approach leverages both population data and reference models, enhancing attack power and robustness against changes in the adversary’s background knowledge. Our likelihood ratio test, as defined in equation 3 and equation 4, effectively measures the distinguishability between $x$ and any $z$ based on the shifts in their probabilities when conditioned on $\theta$, through contrasting $\Pr(x|\theta)/\Pr(x)$ and $\Pr(z|\theta)/\Pr(z)$. Notably, a class of strong prior attacks utilizing reference models, especially as seen in (Ye et al., 2022) and similar attacks, essentially mimics this test but neglects the $\Pr(z|\theta)/\Pr(z)$ component. Their dependence on the uncalibrated magnitude of $\Pr(x|\theta)/\Pr(x)$ results in our attack surpassing them throughout the power-error (TPR-FPR) curve. Calibration by $z$ tells us if the magnitude of $\Pr(x|\theta)/\Pr(x)$ is significant (compared to non-members). Another category of attacks (Carlini et al., 2022) also falters, missing the essential calibration of its test with population data. But the weakness of these attacks is not limited to this.
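As a concrete illustration of the procedure in Definition 2 and the score in Table 1, the following is a minimal sketch of the RMIA computation in equations 3-5; it assumes the probabilities have already been extracted from the models (e.g., as softmax scores at the true class), and all names are illustrative:

```python
import numpy as np

def rmia_score(pr_x_theta, pr_x_refs, pr_z_theta, pr_z_refs, gamma=2.0):
    """Score_MIA(x; theta): the fraction of z samples gamma-dominated by x.

    pr_x_theta: Pr(x | theta) under the target model (scalar).
    pr_x_refs:  Pr(x | theta') over m reference models, shape (m,).
    pr_z_theta: Pr(z | theta) for n population samples z, shape (n,).
    pr_z_refs:  Pr(z | theta') over reference models, shape (n, m).
    """
    pr_x = pr_x_refs.mean()            # empirical Pr(x), equation 5
    pr_z = pr_z_refs.mean(axis=1)      # empirical Pr(z), per z sample
    # Pairwise likelihood ratio of equation 4, for every z at once.
    lr = (pr_x_theta / pr_x) / (pr_z_theta / pr_z)
    return (lr >= gamma).mean()        # equation 3

def rmia_test(score, beta):
    """Definition 2: output MEMBER iff the score reaches the threshold."""
    return "MEMBER" if score >= beta else "NON-MEMBER"
```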
To provide a better comparison, let us first introduce an alternative method to compute our likelihood ratio in equation 2. In the black-box setting, the divergence between the two (numerator and denominator) distributions in a MIA likelihood ratio is maximized when the model is queried on the differing points (Ye et al., 2023). This has been the practice in MIA attacks. So, the best strategy to maximize the likelihood ratio in the black-box setting is to evaluate $f_\theta(x_{\text{features}})$ and $f_\theta(z_{\text{features}})$, where $f_\theta(.)$ is a classification model with parameters $\theta$. Thus, a direct way to compute equation 2 is the following:

| Method | RMIA | LiRA | Attack-R | Attack-P | Global |
|--------|------|------|----------|----------|--------|
| MIA Score | $\Pr_z \left( \frac{\Pr(\theta|x)}{\Pr(\theta|z)} \geq \gamma \right)$ | $\frac{\Pr(\theta|x)}{\Pr(\theta|\overline{x})}$ | $\Pr_{\theta'} \left( \frac{\Pr(x|\theta)}{\Pr(x|\theta')} \geq 1 \right)$ | $\Pr_z \left( \frac{\Pr(x|\theta)}{\Pr(z|\theta)} \geq 1 \right)$ | $\Pr(x|\theta)$ |

Table 1: Computation of $\text{Score}_{\text{MIA}}(x; \theta)$ in different membership inference attacks (RMIA, this paper, versus LiRA (Carlini et al., 2022), Attack-R and Attack-P (Ye et al., 2022), and Global (Yeom et al., 2018)), where the notation $\overline{x}$ (for LiRA) represents the case where $x$ is not in the training set. The attack in all methods is $\text{MIA}(x; \theta) = 1_{\text{Score}_{\text{MIA}}(x; \theta) \geq \beta}$ based on the game in Definition 1.

\[ LR_\theta(x, z) = \frac{\Pr(\theta|x)}{\Pr(\theta|z)} \approx \frac{\Pr(f_\theta(x), f_\theta(z)|x)}{\Pr(f_\theta(x), f_\theta(z)|z)}, \quad \text{"direct computation of LR"} \] (6)

where the numerator $\Pr(f_\theta(x), f_\theta(z)|x)$ can be computed empirically by training many reference models $\theta'_x$ that are trained on $x$, with the rest of the training data randomly sampled from the population. Similarly, for the denominator we need to train many reference models $\theta'_z$. Following Ye et al. (2023) and Carlini et al. (2022), we can compute these probabilities using a Gaussian distribution on the output (logit) of the model at class $x_{\text{label}}$ when evaluated on $x_{\text{features}}$, as detailed in Appendix A.12. We provide an empirical comparison between the attack performance of our main computation (equation 4, using Bayes' rule) and direct computations of the likelihood ratio in Appendix A.12. The results show that both computations of our method match when we use a large number of reference models (Figure 20). However, our main computation using Bayes' rule (equation 4) dominates the direct computation of LR (equation 6) when a few reference models are used (Figure 21).

Given our equation 6, the LiRA test in Carlini et al. (2022) can be viewed as an average case of our test: observe that the denominator of the LiRA LR is our pairwise LR denominator averaged over all $z$. This reduces the power of LiRA's test. Also, as we show in Appendix A.12, a direct computation of LR requires a large number of reference models. Their LR numerator necessitates online training of IN reference models specific to each target query $x$; otherwise the attack performance is very low. Thus, our attack strictly dominates Carlini et al. (2022) throughout the power-error (TPR-FPR) curve, and the gap increases significantly when we reduce the computation budget for reference models (see Figure 6).
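To make the direct computation of equation 6 concrete, a sketch might fit one-dimensional Gaussians to the true-class logits observed under reference models trained with and without each point; for simplicity this sketch assumes the joint density of \( (f_\theta(x), f_\theta(z)) \) factorizes into independent Gaussians, which, together with all names, is our own assumption rather than the exact procedure of Appendix A.12:

```python
import numpy as np
from scipy.stats import norm

def _gauss_pdf(value, ref_logits):
    # Fit a Gaussian to reference-model logits; evaluate its density at `value`.
    return norm.pdf(value, loc=ref_logits.mean(), scale=ref_logits.std() + 1e-12)

def direct_lr(fx, fz, fx_refs_x, fz_refs_x, fx_refs_z, fz_refs_z):
    """Direct likelihood ratio of equation 6 for one pair (x, z).

    fx, fz: target-model logits f_theta(x), f_theta(z) at the true class.
    *_refs_x: the same logits measured on reference models trained on x.
    *_refs_z: the same logits measured on reference models trained on z.
    """
    num = _gauss_pdf(fx, fx_refs_x) * _gauss_pdf(fz, fz_refs_x)
    den = _gauss_pdf(fx, fx_refs_z) * _gauss_pdf(fz, fz_refs_z)
    return num / den
```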
The combination of a pairwise LR and its computation using the Bayesian approach results in our robust, high-power, and low-cost attack.

Another way to interpret our test is by examining the relative distinguishability between $x$ and $z$ by comparing their probability ratios when evaluated using reference models versus their probability ratios under the target model. In other words, we contrast $\Pr(x|\theta)/\Pr(z|\theta)$ with $\Pr(z)/\Pr(x)$. If both points, $x$ and $z$, exhibit identical probability ratios when assessed against the target and reference models, they become indistinguishable, and any deviation from this is detected by the test. The strength of our test lies in its ability to detect subtle differences in these probability ratios stemming from the inclusion of $x$ in the target training set. By repeatedly applying this RMIA test for numerous $z$ samples from the population, our membership inference attack gains strong confidence in distinguishing members from non-members. In contrast, the prior work based on Homer et al. (2008) that solely depends on the probability (or error) of the population data, as characterized by Attack-P in (Ye et al., 2022), lacks power (low TPR) due to its neglect of the $\Pr(z)/\Pr(x)$ component. Essentially, it operates under the inaccurate assumption that all data points are alike, i.e., $\Pr(z) \approx \Pr(x)$.

3 Empirical Evaluation

3.1 Experimental Setup

Our evaluation is aimed at comparing the proposed attack with prior state-of-the-art membership inference attacks. For a better comparison, we use the same setup as Carlini et al. (2022), in which, for a given dataset, the adversary trains $k$ reference (shadow) models on training sets such that each sample $x \in \pi$ is contained in exactly half of the reference models' training sets. Here, we report the attack results on models trained with four different datasets, traditionally used for membership inference attack evaluations. For CIFAR-10 (a traditional image classification dataset), we train a wide ResNet (He et al., 2016) (with depth 28 and width 2) to 92% test accuracy (for 100 epochs) on half of the dataset (25000 samples, chosen at random). For CIFAR-100 and CINIC-10 (as other image datasets), we follow the same process as for CIFAR-10 and train a wide ResNet on half of the dataset to get 67% and 77% test accuracy, respectively, surpassing the accuracy of models utilized in prior studies. We set the batch size to 256. We also include the results of attacks on the Purchase-100 dataset (a tabular dataset of shopping records) (Shokri et al., 2017), where models are 4-layer MLPs with layer units [512, 256, 128, 64], trained on 25k samples for 50 epochs to obtain 83% test accuracy. We train our models using standard techniques to reduce over-fitting, including train-time augmentations, weight decay and early stopping. Exactly like Carlini et al. (2022), there are a number of simple augmentations for each training sample in image models, computed by horizontally flipping and/or shifting the image by a few pixels. As a result, the train-test accuracy gap of our models is small (e.g. below 7% for CIFAR-10 models).

We measure the performance of each attack using two underlying metrics: its true positive rate (TPR), and its false positive rate (FPR), over all member and non-member records of random target models.
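Concretely, the aggregate metrics used below can be obtained by sweeping the decision threshold over the per-record MIA scores; the following sketch (with illustrative names) computes the ROC points and the TPR at zero FPR reported in our tables:

```python
import numpy as np

def roc_points(scores, is_member):
    """ROC from MIA scores: sweep the threshold from high to low."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(is_member)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return fpr, tpr  # AUC is then, e.g., np.trapz(tpr, fpr)

def tpr_at_zero_fpr(scores, is_member):
    """Highest TPR with no false positives: members scoring strictly
    above the best-scoring non-member."""
    scores, labels = np.asarray(scores), np.asarray(is_member)
    return (scores[labels == 1] > scores[labels == 0].max()).mean()
```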
We use the ROC curve to reflect the trade-off between the TPR and FPR of an attack, sweeping over all possible values of the threshold $\beta$ to obtain different FPR tolerances. The AUC (area under the ROC curve) score gives us the average success across all target samples and measures the overall strength of an attack. Inspired by previous discussions in (Carlini et al., 2022), we also consider TPR at very low FPRs. More precisely, we focus on TPR at 0% FPR, a metric that has seen limited usage in the literature. All samples in the population data are used as input queries. Hence, for each target model, half of the queries are members and the other half are non-members.

3.2 Comparison of Different Attacks

We compare the performance of RMIA with three recent effective attacks, namely Attack-R and Attack-P introduced in (Ye et al., 2022) and LiRA in (Carlini et al., 2022). They have been shown to exhibit the best performance compared to previous attacks, hence we do not report the results of other attacks here. We can improve attacks by making multiple queries to the model. Therefore, in addition to querying the target model with query $x$, we also query on augmentations of $x$, obtained via simple mirror and shift operations. We apply the idea of augmented queries only to LiRA and RMIA, as the other two attacks do not originally support multiple queries.

We evaluate attacks under two general settings: 1) the offline setting, where the adversary only trains reference models (or OUT models) that do not contain any input query in their training set, and 2) the online setting, where the adversary is given enough time and resources to train reference models (or IN models) using input queries. Attack-R is inherently an offline attack and Attack-P is independent of reference models. In (Carlini et al., 2022), the authors described how to convert LiRA to an offline attack that only works with OUT models. Likewise, we can restrict our attack to utilize OUT models for computing $\Pr(x)$ in the likelihood ratio of equation 4, rendering it an offline attack. In Appendix A.10.3, we also elaborate on improving the approximation of $\Pr(x)$ when dealing with a limited number of OUT models.

The main questions we are going to answer in this section are the following:

1. How well do attacks perform when the adversary is able to train and use only a couple of reference models? This is an essential issue when working with models that need a lot of data and a huge amount of computation and memory for their training.
2. Which attack provides the best performance in an offline scenario where the adversary cannot train any model on receiving an input query? This scenario plays an important role in making an MIA algorithm useful for a practical privacy risk analysis task, as the cost and the time of training new models for each query is highly restrictive in many real cases.
3. Which attack gives the best result when there is no limitation on training reference models?

**Performance of Attacks under a Limited Number of Reference Models.** Table 2 compares the results of attacks when we train a limited number of reference models, with the CIFAR-10, CIFAR-100 and CINIC-10 datasets. We fix the total number of models used throughout the whole experiment such that the same set of models is used to infer the membership of all target samples. We believe such a constrained setting is more practical for privacy auditing tasks.
When evaluating the offline version of LiRA and RMIA, and also Attack-R, all reference models are OUT for each target sample. When the number of reference models is 1, we are only able to assess offline attacks. For online attacks (i.e. LiRA and RMIA), we train models in a way that half of them are IN and half are OUT for each target sample. We omit the results of Attack-P for now, as it does not make use of reference models.

The proposed RMIA demonstrates strict dominance using a few reference models across all datasets. For instance, with 2 CIFAR-10 models, it achieves around 10% higher AUC than both Attack-R and LiRA and still gains at least 110% better TPR at zero FPR. Surprisingly, the offline RMIA demonstrates an even greater level of superiority over other attacks, including the online LiRA algorithm. For example, with 4 CIFAR-10 models, it has at least 6% higher AUC and a stunning 3x improvement in TPR at zero FPR over online LiRA. In the extreme case of using one reference model, RMIA shows at least 26% higher AUC and 100% more TPR at low FPRs than LiRA over all datasets. On the other hand, although Attack-R shows a relatively high AUC, it can never get a good TPR at lower FPRs. The reason is that it tries to predict the membership of a target sample just by comparing its loss across various reference models, which is restrictive when we do not have enough models. Since RMIA takes advantage of two information sources (reference records and models), it better tolerates having fewer models.

Table 2: Performance of attacks when a limited number of reference models are used. Separate models are trained with the CIFAR-10, CIFAR-100 and CINIC-10 datasets. For LiRA (Carlini et al., 2022) and RMIA, we use 18 augmented queries, and for Attack-R (Ye et al., 2022), we use 1 query. For RMIA, we use $\gamma = 2$. Results are averaged over 10 random target models.

We illustrate the ROC of these attacks on a random target model in Figure 1, in which the number of reference models is limited to 1. Figures 1a and 1b are obtained with models trained on CIFAR-10 and CIFAR-100, respectively. In both ROCs, Attack-R and LiRA lag behind RMIA across nearly all FPR values. In Appendix A.2, we show the ROC of attacks using various numbers of reference models trained on different datasets. Furthermore, for a deeper understanding of attacks' behaviour, we examine the variation in MIA scores among different attacks in Appendix A.11.

**Evaluation of Attacks with Offline Reference Models.** Now, we compare the performance of offline attacks and also examine how they work with more augmented queries. Note that Attack-P and Attack-R have no results for multiple queries. In this experiment, we use 127 OUT models for each sample. As shown in the Offline column of Table 3, RMIA outperforms LiRA by 28% higher AUC and a remarkable 3 times better TPR at zero FPR (when comparing the best results of the two attacks). As we increase queries, RMIA gets better results (for example, a 4x improvement in TPR at zero FPR and also about 4.6% higher AUC as queries go from 1 to 50), while LiRA cannot benefit from the advantage of more queries to improve its AUC. Note that we use the same technique to generate augmentations as proposed in Carlini et al. (2022). The Attack-P fails to achieve a good TPR at low FPRs.
Specifically, it may misclassify a typical non-member sample, one with a high prediction probability in reference models, as a member and, conversely, mistakenly classify an atypical member sample, one with a low prediction probability in reference models, as a non-member. Additionally, Attack-R may wrongly label a high-quality member sample, one with a higher prediction probability than other samples in the population, as a non-member, solely because of its higher probability in reference models. In contrast, RMIA is designed to overcome these limitations by considering both the characteristics of the target sample within reference models and its relative probability among other reference records. With no additional queries, RMIA presents a clear advantage over Attack-R (with 6.7% higher AUC and 116% more TPR at zero FPR), and the performance gap between the two attacks widens with more queries. Taking the high cost of training more models into account, even in the offline scenario, it is fascinating that the result of offline RMIA with a few models, shown in Table 2, is close in terms of AUC to its result with 127 models. As shown in Appendix A.3, a consistent pattern of results is observed for offline attacks in the presence of data distribution shift, i.e., when the target models are trained on datasets different from those used for the reference models.

**Evaluation of Attacks with Abundant Reference Models.** In the Online column of Table 3, we show the results of LiRA and RMIA when all 254 IN and OUT reference models are available to the adversary. We also present the impact of using different numbers of augmented queries on the performance of the two attacks.

Table 3: Performance of attacks when we use different numbers of augmented queries for LiRA (Carlini et al., 2022) and RMIA. We evaluate attacks in two settings, shown in the Online and Offline columns. In the online setting, we use 254 models, where half of them are IN models and half are OUT (for each sample). The offline setting uses only 127 OUT models. Neither Attack-P nor Attack-R (Ye et al., 2022) originally supports multiple augmented queries. Attack-P does not work with reference models, thus we consider it offline. Models are trained with CIFAR-10. For RMIA, we use $\gamma = 2$. Results are averaged over 10 random target models.

| # Queries | Attack | Online AUC | Online TPR @ 0% FPR | Offline AUC | Offline TPR @ 0% FPR |
|-----------|----------|--------------|---------------------|--------------|----------------------|
| 1 | Attack-R | - | - | 64.41 ± 0.41 | 1.52 ± 0.80 |
| 1 | LiRA | 68.92 ± 0.42 | 1.78 ± 0.92 | 58.19 ± 0.33 | 0.01 ± 0.00 |
| 1 | RMIA | 69.15 ± 0.35 | 2.26 ± 1.80 | 56.41 ± 0.41 | 1.46 ± 0.28 |
| 2 | LiRA | 71.28 ± 0.46 | 2.83 ± 1.73 | 55.77 ± 0.46 | 1.16 ± 0.59 |
| 2 | RMIA | 71.46 ± 0.43 | 3.69 ± 2.55 | 71.06 ± 0.39 | 3.64 ± 2.46 |
| 18 | LiRA | 72.04 ± 0.47 | 3.39 ± 2.01 | 55.18 ± 0.37 | 1.37 ± 0.72 |
| 18 | RMIA | 72.25 ± 0.46 | 4.31 ± 3.15 | 71.71 ± 0.43 | 4.18 ± 3.14 |
| 50 | LiRA | 72.26 ± 0.47 | 3.54 ± 2.19 | 55.00 ± 0.36 | 1.52 ± 0.75 |
| 50 | RMIA | 72.51 ± 0.46 | 4.47 ± 3.25 | 71.95 ± 0.44 | 4.39 ± 3.22 |

Table 4: Performance of different attacks on the CIFAR-10, CIFAR-100, CINIC-10 and Purchase-100 datasets using 254 reference models. For LiRA (Carlini et al., 2022) and RMIA, we use 18 augmented queries, and for Attack-P and Attack-R (Ye et al., 2022), we use 1 query. For RMIA, we use $\gamma = 2$. Results are averaged over 10 random target models.
| Attack | CIFAR-10 AUC | CIFAR-10 TPR @ 0% FPR | CIFAR-100 AUC | CIFAR-100 TPR @ 0% FPR |
|--------|--------------|------------------------|----------------|-------------------------|
| Attack-P | 58.19 ± 0.33 | 0.01 ± 0.00 | 75.91 ± 0.36 | 0.01 ± 0.00 |
| Attack-R | 64.41 ± 0.41 | 1.52 ± 0.80 | 83.37 ± 0.24 | 4.80 ± 2.59 |
| LiRA | 72.04 ± 0.47 | 3.19 ± 2.01 | 91.01 ± 0.14 | 11.35 ± 7.78 |
| RMIA | 72.25 ± 0.46 | 3.31 ± 3.15 | 91.01 ± 0.14 | 11.35 ± 7.78 |
| LiRA (Offline) | 55.18 ± 0.37 | 1.37 ± 0.72 | 75.78 ± 0.33 | 2.53 ± 1.13 |
| RMIA (Offline) | 71.71 ± 0.43 | 4.18 ± 3.14 | 90.57 ± 0.15 | 11.45 ± 6.16 |

Compared with LiRA, RMIA always has a slightly higher AUC and at least 48% better TPR at zero FPR. Note that, in this case, even minor AUC improvements are particularly significant, as hundreds of models bring us close to the true leakage of the training algorithm. Both LiRA and RMIA work better with more augmented queries, e.g., with around a 2x improvement in TPR at zero FPR when going from 1 query to 50 queries. Table 4 presents the performance of all attacks on models trained with four different datasets. In this experiment, we use all 254 reference models (for offline attacks, we use half of them). We observe roughly the same ordering of attacks across all datasets. RMIA works slightly better than LiRA with respect to AUC, except on CIFAR-100, but its TPR at zero FPR improves considerably (by up to 50%) across the datasets. In addition, our offline attack consistently outperforms offline LiRA by at least 20% in AUC and 300% in TPR at zero FPR across all datasets. In fact, it demonstrates performance comparable to online attacks, which is quite remarkable when we take into account the training costs associated with online models. In the appendix, we study the impact of using different network architectures (Appendix A.4), DP-SGD (Appendix A.5) and other ML algorithms (Appendix A.6) on the performance of attacks. In addition, we discuss how the number of reference records and the selection of the $\gamma$ and $\beta$ parameters affect RMIA in Appendix A.8 and A.9, respectively.

4 CONCLUSIONS

We design novel membership inference attacks, focusing on maximizing the distinguishability between members of a model's training set and random samples from the population, while minimizing computation costs. Our comprehensive evaluations across different settings show the clear advantages of RMIA over the prior state-of-the-art attacks (Ye et al., 2022; Carlini et al., 2022), regardless of the dataset, the number and quality of reference models, and the training algorithm.

REFERENCES

Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security (CCS'16), pp. 308–318, 2016.

Michael Backes, Pascal Berrang, Mathias Humbert, and Praveen Manoharan. Membership privacy in microrna-based studies. In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security (CCS'16), pp. 319–330, 2016.

Kunal Banerjee, Vishak C. Prasad, Rishi Raj Gupta, Karthik Vyas, Anushree H, and Biswajit Mishra. Exploring alternatives to softmax function. In Proceedings of the 2nd International Conference on Deep Learning Theory and Applications (DeLTA'21), pp. 81–86, 2021.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network.
In International conference on machine learning, pp. 1613–1622. PMLR, 2015.

Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security'19), pp. 267–284, 2019.

Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, and et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security'21), 2021.

Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Membership inference attacks from first principles. In IEEE Symposium on Security and Privacy (S&P'22), pp. 1897–1914, 2022.

Dingfan Chen, Ning Yu, and Mario Fritz. Relaxloss: Defending membership inference attacks without losing utility. In Proceedings of the 10th International Conference on Learning Representations (ICLR'22), 2022.

Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. When machine unlearning jeopardizes privacy. In Proceedings of the 28th ACM SIGSAC Conference on Computer and Communications Security (CCS'21), pp. 896–911, 2021.

Christopher A. Choquette-Choo, Florian Tramer, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. In Proceedings of the 38th International Conference on Machine Learning (ICML'21), pp. 1964–1974, 2021.

Alexandre de Brébisson and Pascal Vincent. An exploration of softmax alternatives belonging to the spherical loss family. In Proceedings of the 4th International Conference on Learning Representations (ICLR'16), 2016.

Cynthia Dwork. Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP'06), pp. 1–12, 2006.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3, pp. 265–284. Springer, 2006.

Cynthia Dwork, Adam Smith, Thomas Steinke, Jonathan Ullman, and Salil Vadhan. Robust traceability from trace amounts. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pp. 650–669. IEEE, 2015.

Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS'15), pp. 1322–1333, 2015.

Karan Ganju, Qi Wang, Wei Yang, Carl A Gunter, and Nikita Borisov. Property inference attacks on fully connected neural networks using permutation invariant representations. In Proceedings of the 25th ACM SIGSAC Conference on Computer and Communications Security (CCS'18), pp. 619–633, 2018.
IJBsKYXaH4
Regarding the measures COV and MAT: if we assume that a molecule has, e.g., 3 major conformers which are all very close in RMSD, wouldn't a model that always samples only one conformer achieve a COV of 1, even though it has never generated the other 2? Also, the definition of MAT seems odd; is there a sum over S_r and maybe a min() missing?
MOLECULAR CONFORMATION GENERATION VIA SHIFTING SCORES

Anonymous authors
Paper under double-blind review

ABSTRACT

Molecular conformation generation, a critical aspect of computational chemistry, involves producing the three-dimensional conformer geometry for a given molecule. Generating molecular conformations via diffusion requires learning to reverse a noising process. Diffusion on inter-atomic distances instead of conformations preserves SE(3)-equivariance and shows superior performance compared to alternative techniques, whereas the related generative modelings are predominantly based upon heuristic assumptions. In response to this, we propose a novel molecular conformation generation approach driven by the observation that the disintegration of a molecule can be viewed as casting increasing force fields onto its composing atoms, such that the distribution of the change of inter-atomic distances shifts from a Gaussian to a Maxwell-Boltzmann distribution. The corresponding generative modeling ensures a feasible inter-atomic distance geometry and exhibits time reversibility. Experimental results on molecular datasets demonstrate the advantages of the proposed shifting distribution compared to the state-of-the-art.

1 INTRODUCTION

The molecular conformation generation task constitutes a crucial and enabling aspect of numerous research pursuits, particularly in the study of molecular structures and their potential energy landscapes (Strodel, 2021). Traditional computational methods for this task rely on optimizing the free energy grounded in the Schrödinger equation, density functional theory, or their approximations (Griffiths & Schroeter, 2018; Tsuchihita & Hirono, 1997; Labute, 2010), failing to find a good balance between complexity and quality. Recently, machine learning has emerged as a powerful and efficient tool to identify more stable and diverse conformations across an expanded chemical space (Xu et al., 2021b; Ganea et al., 2021; Xu et al.; Jing et al.). However, such novel approaches give rise to new challenges. One of the most significant is incorporating the roto-translational equivariance (SE(3)-equivariance) intrinsic to the generation process. Recent works employ SE(3)-invariant molecular properties as proxies to render the model invariant. For instance, some studies focus on predicting torsional angles (Jing et al.; Ganea et al., 2021) or inter-atomic distances (Simm & Hernández-Lobato, 2020; Xu et al.; Ganea et al., 2021), with the final conformation assembled through post-processing. Besides, Uni-Mol (Zhou et al., 2023a) predicts delta coordinate positions based on atom-pair representations to update coordinates. Other works leverage inter-atomic distances to directly predict coordinates using generative models (Xu et al.; Shi et al., 2021; Xu et al., 2021b; Zhu et al.). In parallel with these efforts, researchers have developed SE(3)-equivariant graph neural networks (GNNs) to better characterize the geometry and topology of geometric graphs (Schütt et al., 2017; Satorras et al., 2021; Han et al., 2022). These GNNs serve as effective tools or backbones for molecular conformation generation (Jing et al.; Ganea et al., 2021; Xu et al.; Shi et al., 2021; Xu et al., 2021b; Hoogeboom et al., 2022). Following the previous works (Xu et al.; Shi et al., 2021; Xu et al., 2021b), our approach also seeks to encode SE(3)-equivariance from an inter-atomic distance perspective.
To the best of our knowledge, existing works do not yet provide a systematic analysis of distances, often relying on common or heuristic Gaussian assumptions on distance changes (Xu et al., 2021b). In this study, we conduct a thorough analysis of inter-atomic distances, drawing inspiration from the physics of atomic motion. Specifically, we investigate the disintegration process of molecular structures...

Figure 1: Demonstration of the diffusion process of SDDiff. As the Gaussian perturbation level on atom coordinates increases, the distribution of inter-atomic distances shifts from Gaussian to Maxwell-Boltzmann, which SDDiff learns to reverse.

and aim to learn how to reverse these processes for generating conformations. To this end, the disintegration of molecules can be viewed as being caused by the introduction of gradually increasing levels of perturbing force fields. We postulate that atoms within a molecule exhibit Brownian (Gaussian) motion under relatively small perturbing forces. When the forces are considerably large, chemical structures are disrupted, and the atoms are able to move without restriction. In this stage, the atom speeds follow a Maxwell-Boltzmann distribution. Naturally, this can be connected to the distance distribution, in accordance with the escalation of the perturbation intensity. See Fig. 1 for an overview. We thus put forth a precise estimation of the perturbed distance distribution through a closed-form shifting score function. Further, we propose a novel diffusion-based model named SDDiff (shifting distance diffusion) to reverse the force field and recover molecule conformations, leading to superior performance. Our main contributions are:

• Inspired by molecular thermodynamics, we show that under a Gaussian perturbation kernel on the molecular conformation, the distributions of relative speeds and of the change of inter-atomic distances shift from Gaussian to Maxwell-Boltzmann.

• We propose a diffusion-based generative model, SDDiff, with a novel, closed-form shifting score kernel, together with mathematical support and empirical verification of its correctness.

• Our method achieves state-of-the-art performance on two molecular conformation generation benchmarks, GEOM-Drugs (Axelrod & Gómez-Bombarelli, 2022) and GEOM-QM9 (Ramakrishnan et al., 2014).

2 RELATED WORK

Molecular conformation generation. Learning-based techniques are increasingly employed for molecular conformation generation. An early attempt is GeoMol (Ganea et al., 2021), which predicts local 3D configurations and assembles them with heuristic rules. Alternatively, conformations can be holistically sampled via modelings of either inter-atomic distances (Shi et al., 2021; Simm & Hernández-Lobato, 2020) or atom coordinates (Xu et al.; Zhu et al.). Recently, a rising interest has been observed in diffusion-based approaches (Shi et al., 2021; Xu et al., 2021b; Jing et al.), among which the works most related to ours are ConfGF (Shi et al., 2021) and GeoDiff (Xu et al., 2021b). ConfGF perturbs the distances and estimates the corresponding score, which is subsequently converted to the coordinate score via the chain rule. However, such a process may result in infeasible 3D geometry. GeoDiff instead perturbs coordinates and introduces an SE(3)-equivariant Markov kernel transiting the coordinate diffusion process to the distance process. However, this model's design is based on the assumption that the perturbed distances follow a Gaussian distribution. This heuristic assumption can lead to mismatches and inaccuracy.
Diffusion-based generative models. Denoising diffusion probabilistic models (DDPM) (Ho et al., 2020) delineate a Markov chain of diffusion steps that adds random noise to data and subsequently learn to invert the diffusion process for generating desired data samples. Analogous to DDPM, the score matching with Langevin dynamics (SMLD) models (Song & Ermon, 2019; 2020) train noise-conditional score networks (NCSN) that approximate the score function of the dataset and apply stochastic gradient Langevin dynamics to approximate the data distribution. The above two models can be unified under the framework of stochastic differential equations (SDEs) (Song et al., 2020b). The denoising diffusion implicit model (DDIM) (Song et al., 2020a) has a controllable sampling stochasticity, allowing the generation of higher-quality samples with fewer steps. The latent diffusion model (LDM) (Rombach et al., 2022) accelerates sampling by implementing the diffusion process in a latent space.

SE(3) Neural Networks. The Euclidean group, denoted as SE(3), or E(3) when including reflections, represents a group of symmetries of 3D translation and rotation. Due to the geometric symmetry inherent in molecules, incorporating this property in feature backbones is essential. One typical line of research is related to GNNs. SchNet (Schütt et al., 2017) is an E(n)-invariant network for modeling quantum interactions in molecules. The E(n)-Equivariant Graph Neural Network (EGNN) (Satorras et al., 2021) is an E(n)-equivariant GNN that does not rely on computationally expensive higher-order representations in intermediate layers. A hierarchy-based GNN named Equivariant Hierarchy-based Graph Networks (EGHNs) (Han et al., 2022) can increase the expressivity of message passing, and is also guaranteed to be E(3)-equivariant to meet the physical symmetry. Another related line of research is not restricted to the message-passing paradigm (Gilmer et al., 2017). Some existing works (Thomas et al., 2018; Fuchs et al., 2020) utilize spherical harmonics to compute a basis for the transformations, which preserves SE(3)-equivariance.

3 BACKGROUND

3.1 MOLECULAR CONFORMATION GENERATION

The generation of molecular conformations can be regarded as a generative problem conditioned on a molecular graph. For a given molecular graph, it is required to draw independent and identically distributed (i.i.d.) samples from the conditional probability distribution $p(C|G)$, in which $p$ adheres to the underlying Boltzmann distribution (Noé et al., 2019), while $C$ and $G$ signify the conformation and the formula of the molecule, respectively. Formally, each molecule is depicted as an undirected graph $G = (V, E)$, with $V$ representing the set of atoms within the molecule and $E$ denoting the set of inter-atomic chemical bonds, along with the corresponding node features $h_u \in \mathbb{R}^f$, $\forall u \in V$, and edge features $e_{uv} \in \mathbb{R}^{f'}$, $\forall (u, v) \in E$, representing atom types, formal charges, bond types, etc. To simplify the notation, the set of atoms $V$ in 3D Euclidean space is expressed as $C = [c_1, c_2, \cdots, c_n] \in \mathbb{R}^{n \times 3}$, and the 3D distance between nodes $u$ and $v$ is denoted as $d_{uv} = \|c_u - c_v\|$. A generative model $p_\theta(C|G)$ is developed to approximate the Boltzmann distribution.
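To make the notation concrete, the full set of inter-atomic distances $d_{uv}$ can be computed from a conformation $C$ in a few lines; this is a minimal sketch, and the helper name is ours:

```python
import numpy as np

def pairwise_distances(C: np.ndarray) -> np.ndarray:
    """Inter-atomic distance matrix d_uv = ||c_u - c_v|| for C in R^{n x 3}."""
    diff = C[:, None, :] - C[None, :, :]   # shape (n, n, 3)
    return np.linalg.norm(diff, axis=-1)   # shape (n, n)

C = np.random.randn(5, 3)                  # a toy 5-atom conformation
d = pairwise_distances(C)
assert np.allclose(d, d.T) and np.allclose(np.diag(d), 0.0)
```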
3.2 EQUIVARIANCE IN MOLECULAR CONFORMATION

Equivariance under translation and rotation (the SE(3) group) exhibits multidisciplinary relevance in a variety of physical systems, and hence plays a central role when modeling and analyzing 3D geometry (Thomas et al., 2018; Weiler et al., 2018; Chmiela et al., 2019; Fuchs et al., 2020; Miller et al., 2020; Simm et al., 2020; Batzner et al., 2022). Mathematically, a model $s_\theta$ is said to be equivariant with respect to the SE(3) group if $s_\theta(T_g(x)) = T_g(s_\theta(x))$ for any transformation $g \in \text{SE(3)}$. Utilizing conformational representations directly to achieve equivariance presents challenges in accurately capturing the chemical interactions between atoms; consequently, this approach may result in the generation of molecular structures with inaccuracies and poor configurations. An alternative approach is to use the inter-atomic distances, which are naturally invariant to SE(3) transformations (Shi et al., 2021; Xu et al., 2021b; Gasteiger et al., 2020), as will be further introduced in Sec. 4.2.

3.3 Learning via Score Matching

Langevin dynamics. Given a fixed step size $0 < \epsilon \ll 1$, take $x_0 \sim \pi(x)$ for some prior distribution and use the Euler–Maruyama method to simulate the Langevin dynamics

$$x_t = x_{t-1} + \frac{\epsilon}{2} \nabla_x \log p(x_{t-1}) + \sqrt{\epsilon} z_t,$$

where $z_t \sim \mathcal{N}(0, I)$. As $t \to \infty$, $x_t$ can be considered a sample drawn from $p(x)$ under some regularity conditions (Welling & Teh, 2011). This implies that if we know the score function $\nabla_x \log p(x)$, we can use Langevin dynamics to sample from $p(x)$.

Denoising score matching. Denoising score matching (Vincent, 2011) perturbs the data $x$ in accordance with a predetermined perturbing kernel, denoted by $q_\sigma(\tilde{x} | x)$. The minimizer $s_\theta$ of

$$\frac{1}{2} \mathbb{E}_{q_\sigma(\tilde{x} | x)p_{\text{data}}(x)} \left[ \| s_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} | x) \|_2^2 \right]$$

satisfies $s_\theta(x) = \nabla_x \log q_\sigma(x)$ almost surely (Vincent, 2011). This implies that, to train a denoising model $s_\theta$, we can set the loss functions to be

$$\mathcal{L}(s_\theta; \{\sigma_i\}_{i=1}^L) \triangleq \frac{1}{L} \sum_{i=1}^L \lambda(\sigma_i) \ell(s_\theta; \sigma_i)$$

$$\ell(s_\theta; \sigma) \triangleq \frac{1}{2} \mathbb{E}_{p_{\text{data}}(x)} \mathbb{E}_{\tilde{x} \sim q_\sigma(\tilde{x} | x)} \| s_\theta(\tilde{x}, \sigma) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} | x) \|_2^2,$$

where $\lambda(\sigma) \propto 1 / \mathbb{E} \left[ \| \nabla_{\tilde{x}} \log p_\sigma(\tilde{x} | x) \|_2^2 \right]$ is a reweighting coefficient chosen so that the magnitude of the loss does not depend on $\sigma$ (Song et al., 2020b). After obtaining a model $s_\theta(x) \approx \nabla_x \log q_\sigma(x)$ and following the (annealed) Langevin dynamics (Song & Ermon, 2019), one can draw samples from $p_{\text{data}}(x)$ by recursively computing $\tilde{x}_t = \tilde{x}_{t-1} + \frac{\alpha_t}{2} s_\theta(\tilde{x}_{t-1}, \sigma_t) + \sqrt{\alpha_t} z_t$, where $\alpha_t = \epsilon \cdot \sigma_t^2 / \sigma_L^2$.
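As a generic illustration of this training objective, here is a sketch assuming a plain Gaussian kernel, whose score is $-(\tilde{x} - x)/\sigma^2$, with the $\lambda(\sigma) = \sigma^2$ reweighting folded in; `score_net` is a placeholder for any score network, not the paper's architecture:

```python
import torch

def dsm_loss(score_net, x, sigma):
    """Single-sigma denoising score matching loss for the Gaussian kernel
    q_sigma(x_tilde | x) = N(x_tilde; x, sigma^2 I), whose score is
    -(x_tilde - x) / sigma^2. The lambda(sigma) = sigma^2 reweighting is
    folded in by scaling the residual by sigma."""
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    target = -(x_tilde - x) / sigma**2          # = -noise / sigma
    pred = score_net(x_tilde, sigma)
    return 0.5 * ((sigma * (pred - target))**2).sum(dim=-1).mean()

# Toy usage with a trivial "network", for illustration only:
net = lambda x, sigma: -x / (sigma**2 + 1.0)
x = torch.randn(64, 10)
print(dsm_loss(net, x, sigma=0.5))
```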
Maxwell-Boltzmann distribution. In the domain of statistical mechanics, the Maxwell-Boltzmann (MB) distribution serves as a model for the velocities of particles within idealized gaseous systems. These systems are characterized by freely moving particles within a stationary enclosure, whose interactions are negligible apart from momentary collisions. From a mathematical perspective, the MB distribution is the $\chi$-distribution with three degrees of freedom (Young et al., 2008). The probability density function of $\text{MB}(\sigma)$ is given by $f_\sigma(x) = \sqrt{\frac{2}{\pi}} \frac{x^2 e^{-x^2/(2\sigma^2)}}{\sigma^3}$ with support $\mathbb{R}_{++}$.

4 Methodology

4.1 Modeling the Distribution of Inter-Atomic Distances

In the present investigation, molecular disintegration is facilitated by the application of progressively intensified perturbation force fields. Upon perturbing a single atom, adjacent atoms experience a consequent force, arising from the chemical bonds interconnecting them with the perturbed atom. When a relatively minor perturbative force field is employed, chemical bonds remain unbroken, thereby restricting atomic motion. This observation leads us to hypothesize that individual atoms exhibit Brownian motion under such conditions. Contrarily, when a sufficiently potent force field is imposed, chemical bonds are destroyed, permitting atoms to undergo virtually uninhibited motion with only rare collisions. We further hypothesize that the relative speed between any two atoms adheres to the Maxwell-Boltzmann (MB) distribution. Focusing on the inter-atomic distances $d$ within a molecule, we establish that the marginal distribution of the perturbed inter-atomic distances $\tilde{d}$, given $d$, is equivalent to the distribution of the relative velocities among the atoms. Specifically, let $\sigma_t$ measure the perturbing force field at time $t$, where $\{\sigma_t\}_{t=0}^T$ is an increasing non-negative sequence. Then,

$$p_{\sigma_0}(\tilde{d}|d) = p_{\sigma_0}(v) = \mathcal{N}(\tilde{d}|d, 2\sigma_0^2 I), \quad p_{\sigma_T}(\tilde{d}|d) = p_{\sigma_T}(v) = \text{MB}(\sqrt{2}\sigma_T). \quad (5)$$

Figure 2: In the investigation of the perturbed distance distributions resulting from the introduction of Gaussian noise to the molecular conformation, a transition from Gaussian to MB is observed as the noise level escalates. The perturbation's intensity is denoted by $\sigma$. Within the graphical representation, the orange curve delineates the pdf of $\mathcal{N}(0, 2\sigma^2)$, the green curve corresponds to the pdf of $\text{MB}(\sqrt{2}\sigma)$, and the blue dotted curve represents the pdf of $p(\tilde{d}|d)$.

For intermediate perturbing forces, we set $p_{\sigma_t}(\tilde{d}|d) \propto \tilde{d}^{\,2 f_{\sigma_t}(\tilde{d}, d)} e^{-\frac{(\tilde{d}-d)^2}{4\sigma_t^2}}$, where several constraints are imposed on $f_{\sigma}$. For a smoothly shifting perturbing force field, we require $f_{\sigma}(\tilde{d}, d)$ to be smooth with respect to $\sigma$, $\tilde{d}$ and $d$. To make the limiting distributions Gaussian and MB, we require $\lim_{\sigma \to 0} f_{\sigma} = 0$ and $\lim_{\sigma \to \infty} f_{\sigma} = 1$.
Thus, we have (note that when $\sigma_T$ is sufficiently large, $\tilde{d} - d \approx \tilde{d}$)

$$p_{\sigma_0}(\tilde{d}|d) \propto e^{-\frac{(\tilde{d}-d)^2}{4\sigma_0^2}} \propto \mathcal{N}(\tilde{d}|d, 2\sigma_0^2 I) \quad (6a)$$

$$p_{\sigma_T}(\tilde{d}|d) \propto \tilde{d}^2 e^{-\frac{(\tilde{d}-d)^2}{4\sigma_T^2}} \propto \text{MB}(\sqrt{2}\sigma_T) \quad (6b)$$

If we take $f_{\sigma}(\tilde{d}, d) = 1 - e^{-\sigma/d}$, then

$$\nabla_{\tilde{d}} \log q_{\sigma}(\tilde{d} | d) = \left(1 - e^{-\sigma/d}\right) \frac{2}{\tilde{d}} - \frac{\tilde{d} - d}{2\sigma^2} \quad (7)$$

We can simply use a Gaussian kernel as an approximation of the perturbing force fields acting on the molecular conformation, i.e., $p_{\sigma}(\tilde{C}|C) = \mathcal{N}(\tilde{C}|C, \sigma^2 I)$ for $C \in \mathbb{R}^{n \times 3}$, so that the limiting distributions of the atoms' relative speed and of the conditional perturbed inter-atomic distance are the Gaussian and MB distributions. This is because

$$\tilde{C}_u = C_u + z_u, \quad \tilde{C}_v = C_v + z_v, \quad \text{where } z_u, z_v \sim \mathcal{N}(0, \sigma^2 I)$$

$$\tilde{d}_{uv} = \|z + C_u - C_v\| \quad (z = z_u - z_v \sim \mathcal{N}(0, 2\sigma^2 I))$$

$$= \|C_u - C_v\| + \|z + C_u - C_v\| - \|C_u - C_v\|$$

$$= d_{uv} + \frac{2z^\top(C_u - C_v) + \|z\|^2}{\|z + C_u - C_v\| + \|C_u - C_v\|}$$

When $\sigma$ is sufficiently small, $\tilde{d}_{uv} \approx d_{uv} + \frac{2z^\top(C_u - C_v)}{2\|C_u - C_v\|} = d_{uv} + \hat{z}$, where $\hat{z} \sim \mathcal{N}(0, 2\sigma^2)$. When $\sigma$ is sufficiently large, $\tilde{d}_{uv} \approx d_{uv} + \frac{\|z\|^2}{\|z + C_u - C_v\|} \approx \|z\|$, where $\|z\| \sim \text{MB}(\sqrt{2}\sigma)$. For a comprehensive elucidation of the intermediary mathematical procedures, we direct the reader to Appendix A.

We conduct experiments to verify the above mathematical derivation. In these experiments, Gaussian perturbations with varying levels of variation are introduced to molecular conformations, i.e., $p(\tilde{C}|C) = \mathcal{N}(\tilde{C}|C, \sigma^2 I)$ for $C \in \mathbb{R}^{n \times 3}$, and the marginal distributions of the difference in inter-atomic distances before and after perturbation are examined. The resulting observations can be seen in Fig. 2 and 3.

Figure 3: Distribution approximation. The actual pdf $p_\sigma(\tilde{d} - d \mid d = \text{const})$ is illustrated by the orange curve, whereas the blue dotted curve signifies the proposed approximated pdf.
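This verification is easy to reproduce numerically. The sketch below (the helper name is ours) perturbs a single atom pair with Gaussian noise and checks the two limiting regimes:

```python
import numpy as np

def perturbed_distance_shift(d=3.0, sigma=0.1, n=200_000):
    """Empirically sample tilde_d - d when both atoms of a pair at
    distance d receive isotropic Gaussian noise N(0, sigma^2 I)."""
    c_u, c_v = np.zeros(3), np.array([d, 0.0, 0.0])
    z_u = np.random.normal(0.0, sigma, size=(n, 3))
    z_v = np.random.normal(0.0, sigma, size=(n, 3))
    d_tilde = np.linalg.norm((c_u + z_u) - (c_v + z_v), axis=1)
    return d_tilde - d

# Small sigma: the histogram should match N(0, 2 sigma^2).
small = perturbed_distance_shift(sigma=0.05)
print(small.mean(), small.std(), np.sqrt(2) * 0.05)

# Large sigma: tilde_d itself should match MB(sqrt(2) sigma); an MB(a)
# variable has mean 2 a sqrt(2 / pi), here with a = sqrt(2) * sigma.
large = 3.0 + perturbed_distance_shift(sigma=50.0)
print(large.mean(), 2.0 * np.sqrt(2) * 50.0 * np.sqrt(2.0 / np.pi))
```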
4.2 Modeling Conformations

We model the inter-atomic distances instead of the conformation to obtain equivariance, as discussed in Sec. 3.2. Consider molecules formed by $n$ atoms, where $n \geq 5$. Given any $C \in \mathbb{R}^{n \times 3}/\text{SE}(3)$, let $d(\cdot) : \mathbb{R}^{n \times 3}/\text{SE}(3) \to \mathbb{D}$ be the mapping from conformations to all inter-atomic distances, where $\mathbb{D} := \text{image}(d)$. Hence, $\mathbb{R}^{n \times 3}/\text{SE}(3)$ and $\mathbb{D}$ are isomorphic, since to ascertain the relative position of a particular point it is merely necessary to determine its distances from 4 other non-coplanar distinct points. We use $d_{ij}$ to denote the entry $(i, j)$ of the adjacency matrix, and we have, by a slight abuse of notation,

$$\nabla_{\tilde{C}} \log q_\sigma(\tilde{C}|C) = \frac{\partial}{\partial \tilde{C}} \log q_\sigma(\tilde{C}, d(\tilde{C})|C, d(C)) = \sum_{i,j} \frac{\partial d_{ij}(\tilde{C})}{\partial \tilde{C}} \frac{\partial}{\partial d_{ij}(\tilde{C})} \log q_\sigma(d(\tilde{C})|d(C)) = \sum_{i,j} \frac{\partial \tilde{d}_{ij}}{\partial \tilde{C}} \nabla_{\tilde{d}_{ij}} \log q_\sigma(\tilde{d}|d) \quad \text{(almost surely)} \quad (8)$$

The above property also holds for any $\bar{d}(\cdot)$ that maps the conformation to a partial distance vector in which each atom is associated with at least 4 distances. A previous work (Shi et al., 2021) showed that for any $s_\theta(\tilde{d}) \approx \nabla_{\tilde{d}} \log q_\sigma(\tilde{d}|d)$, viewed as a function of the perturbed inter-atomic distances $\tilde{d}$, the resulting scoring network $s_\theta$ is equivariant w.r.t. SE(3). By Eq. 5, 4, 8 and 7, the denoising score matching objective for conformations is

$$\mathcal{L}\left(\theta; \{\sigma_i\}_{i=1}^L\right) \triangleq \frac{1}{L} \sum_{i=1}^L \lambda(\sigma_i) \ell(\theta; \sigma_i) \quad (9a)$$

$$\ell(\theta; \sigma) = \frac{1}{2} \mathbb{E}_{p_{\text{data}}(d)} \mathbb{E}_{p_\sigma(\tilde{d}|d)} \left\| s_\theta(\tilde{d}, \sigma) - \frac{\partial \tilde{d}}{\partial C} \left[ \left(1 - e^{-\sigma/d}\right) \frac{2}{\tilde{d}} - \frac{\tilde{d} - d}{2\sigma^2} \right] \right\|_2^2 \quad (9b)$$

Note that $\nabla_{\tilde{C}} \log q_\sigma(\tilde{C} | C) \neq -\frac{\tilde{C} - C}{\sigma^2}$, since $\tilde{C}, C \in \mathbb{R}^{n \times 3}/\text{SE}(3)$ and the probability density function differs from that in $\mathbb{R}^{n \times 3}$. Taking $\lambda(\sigma_i) = \sigma_i^2$ gives $\lambda(\sigma_i) \ell(\theta; \sigma_i) \propto 1$ for any $\sigma_i$; thus, the magnitude of the loss does not depend on the specific selection of $\sigma_i$.
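For concreteness, the closed-form score of Eq. 7 and the chain rule of Eq. 8 can be transcribed as follows. This is our own sketch: autograd is one convenient way to contract the Jacobian $\partial \tilde{d}_{ij}/\partial \tilde{C}$ with the distance scores, and each unordered atom pair is counted once.

```python
import torch

def shifting_score(d_tilde, d, sigma):
    """Closed-form score of the perturbed-distance kernel (Eq. 7):
    (1 - exp(-sigma/d)) * 2/d_tilde - (d_tilde - d) / (2 sigma^2).
    For small sigma this reduces to the Gaussian score; for large sigma
    the 2/d_tilde term of the Maxwell-Boltzmann score dominates."""
    return (1.0 - torch.exp(-sigma / d)) * 2.0 / d_tilde \
           - (d_tilde - d) / (2.0 * sigma**2)

def conformation_score(C_tilde, C, sigma):
    """Map distance scores to a conformation score via the chain rule
    (Eq. 8), treating the scores as constants and letting autograd
    compute sum_ij s_ij * (d d_ij / d C_tilde)."""
    n = C.shape[0]
    C_tilde = C_tilde.detach().requires_grad_(True)
    i, j = torch.triu_indices(n, n, offset=1)        # each unordered pair once
    d_tilde = (C_tilde[i] - C_tilde[j]).norm(dim=-1)
    d = (C[i] - C[j]).norm(dim=-1)
    with torch.no_grad():
        s = shifting_score(d_tilde, d, sigma)        # constant w.r.t. autograd
    (d_tilde * s).sum().backward()                   # contracts Jacobian with s
    return C_tilde.grad

# Toy usage:
C = torch.randn(6, 3)
C_tilde = C + 0.1 * torch.randn(6, 3)
print(conformation_score(C_tilde, C, sigma=0.1).shape)  # torch.Size([6, 3])
```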
4.3 Network for Modeling the Conformation Score

The network employed for modeling $s_\theta$ must adhere to the two criteria delineated in Sec. 4.2. For simplicity, we omit the model's dependence on the molecular graph $G$.

SE(3) equivariance. The network must abstain from utilizing the molecular conformation directly as input; rather, it should take the inter-atomic distances, so as to achieve SE(3) equivariance. Employing the perturbed distances to directly forecast the conformation score would necessitate a domain transition, thereby augmenting the complexity of the learning process. Thus, following the parametrization of the conformation score discussed in Sec. 4.2, a generative model for estimating the score of distances is formulated, followed by the application of the chain rule to convert distance scores into conformation scores.

Isomorphisms. Each individual atom must be associated with a minimum of four distances in order to establish an isomorphism between $C \in \mathbb{R}^{n \times 3}/\text{SE}(3)$ (the conformation space) and $\mathbb{D}$ (the feasible inter-atomic distance space). On the other hand, correlating an atom with an excessive number of distances exacerbates the challenge for the model of generating a feasible $d$. The underlying reason for this complication is the disparity between the cardinalities of $\mathbb{R}^{n \times 3}/\text{SE}(3)$ and $\mathbb{D}$: $\mathbb{D}$ is a subset of $\mathbb{R}_+^m$, where $m = \binom{n}{2}$ is the number of edges in the complete graph induced by the molecule. For a more detailed illustration, we refer readers to Appendix B. As a result, we connect the three-hop neighborhood in each chemical molecule, so that almost every atom in a molecule is connected with at least four other atoms.

Following GeoDiff (Xu et al., 2021b), we adapt a similar network for modeling $s_\theta$. Given an input graph $G$, a Message Passing Neural Network (MPNN) (Gilmer et al., 2017) is adopted as $s_\theta$, which computes node embeddings $h_v^{(t)} \in \mathbb{R}^f, \forall v \in V$, with $T$ layers of iterative message passing:

$$h_u^{(t+1)} = \psi \left( h_u^{(t)}, \sum_{v \in N_u} h_v^{(t)} \cdot \phi(e_{uv}, d_{uv}) \right)$$

for each $t \in [0, T - 1]$, where $N_u = \{v \in V \mid (u, v) \in E\}$, while $\psi$ and $\phi$ are neural networks, e.g., implemented using multilayer perceptrons (MLPs). Note that the node features, distances and edge features are input into $s_\theta$ as initial embeddings when $t = 0$; in the sections above, we kept only the distance $d$ as the input of $s_\theta$ to simplify the notation. Besides, as no coordinate information is explicitly engaged in this network, this modeling preserves the above two properties. For more details about this part, refer to Appendix B.

### 4.4 SAMPLING BY LANGEVIN DYNAMICS

The learned score matching network $s_\theta$ that minimizes Eq. 9a approximates the score of the molecular conformation. Following the annealed Langevin dynamics, we provide the pseudocode of the sampling process in Alg. 1, with which we can draw conformations for a given molecule.

#### Algorithm 1 Sampling via annealed Langevin dynamics

**Input:** molecular graph $G$, network $s_\theta$, scheduler $\{\sigma_i\}_{i=1}^T$.
**Output:** conformation $C$.
1: Sample $C_T \sim \mathcal{N}(0, \sigma_T^2 I)$.
2: for $i = T, T-1, \ldots, 1$ do
3:   $\alpha_i \leftarrow \epsilon \cdot \sigma_i^2 / \sigma_T^2$ {$\alpha_i$ is the step size.}
4:   Sample $z_i \sim \mathcal{N}(0, I)$
5:   $C_{i-1} \leftarrow C_i + \alpha_i s_\theta(d(C_i), \sigma_i) + \sqrt{2\alpha_i} z_i$ {Langevin dynamics.}
6: end for
7: return $C_0$
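Alg. 1 translates almost line by line into code. In the sketch below, `score_net(C, sigma)` is a hypothetical callable assumed to return the conformation score, i.e., the learned distance score already mapped through the chain rule of Eq. 8:

```python
import torch

def sample_conformation(score_net, n_atoms, sigmas, eps=1e-5):
    """Annealed Langevin dynamics following Alg. 1. `sigmas` is the
    increasing noise schedule sigma_1 < ... < sigma_T."""
    C = sigmas[-1] * torch.randn(n_atoms, 3)        # start from the widest prior
    for sigma in reversed(sigmas):                   # i = T, T-1, ..., 1
        alpha = eps * sigma**2 / sigmas[-1]**2       # step size alpha_i of Alg. 1
        z = torch.randn_like(C)
        C = C + alpha * score_net(C, sigma) + (2.0 * alpha)**0.5 * z
    return C

# Toy call with a zero score (pure noise annealing), for shape checking:
C = sample_conformation(lambda C, s: torch.zeros_like(C), n_atoms=12,
                        sigmas=[0.1 * i for i in range(1, 11)])
```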
### 4.5 ANALYSIS

**Marginal v.s. joint distributions.** In the existing literature, diffusion models are built by adding isotropic Gaussian noise $\mathcal{N}(0, \sigma^2 I)$ to the modeled objects, such as pixel values in image generation. In SDDiff, we add isotropic Gaussian noise to the molecular conformation (coordinates), and the noise is mapped to the inter-atomic distances. Thus, the entries of the noise on distances are not independent; nevertheless, the marginal distribution of the distances can be applied for score matching, because

$$\nabla_{\tilde{d}_i} \log p_\sigma(\tilde{d}|d) = \nabla_{\tilde{d}_i} \log \left[ p_\sigma(\tilde{d}_i|d) \cdot p_\sigma(\tilde{d}_1, \ldots, \tilde{d}_{i-1}, \tilde{d}_{i+1}, \ldots, \tilde{d}_m | d, \tilde{d}_i) \right] = \nabla_{\tilde{d}_i} \log p_\sigma(\tilde{d}_i|d_i) + \nabla_{\tilde{d}_i} \log p_\sigma(\tilde{d}_{N(i)}|d_{N(i)}, \tilde{d}_i, d_i) \approx \nabla_{\tilde{d}_i} \log p_\sigma(\tilde{d}_i|d_i)$$

where $N(i)$ is the set of indices of the edges incident with edge $i$. The second equality holds because $\tilde{d}_i$ gives no information on the distribution of other perturbed edges that are not incident with edge $i$; also, $d_j$ gives no information on the distribution of $\tilde{d}_i$ for $i \neq j$. We hypothesize that disregarding the term $\nabla_{\tilde{d}_i} \log p_\sigma(\tilde{d}_{N(i)} | d_{N(i)}, \tilde{d}_i, d_i)$ introduces no bias. This supposition stems from the observation that, knowing both $\tilde{d}_i$ and $d_i$, we remain uninformed about the increase or decrease of $\tilde{d}_{N(i)} - d_{N(i)}$.

**Approximation by optimal transportation (OT).** Given the knowledge of the distributions at the end time points, $p_{t=0}(x)$ and $p_{t=T}(x)$, the problem of obtaining the distributions in between can be formulated as a Schrödinger bridge problem, whose solution is also the solution of entropic OT. We compute the regularized Wasserstein barycenter of $p_{t=0}(\tilde{d}|d)$ and $p_{t=T}(\tilde{d}|d)$ by employing the approach presented in a previous work (Benamou et al., 2015). However, the regularization term impacts the limiting weighted barycenter, leading to divergences from $p_{t=0}(\tilde{d}|d)$ to $p_{t=T}(\tilde{d}|d)$. As a result, the regularized Wasserstein barycenter approach is unsuitable for approximating the intermediate distributions. See Appendix C for a more detailed analysis.

Table 1: Results of molecular conformation generation. COV is reported in % (higher is better) and MAT in Å (lower is better).

| Methods | GEOM-QM9 COV Mean | GEOM-QM9 COV Median | GEOM-QM9 MAT Mean | GEOM-QM9 MAT Median | GEOM-Drugs COV Mean | GEOM-Drugs COV Median | GEOM-Drugs MAT Mean | GEOM-Drugs MAT Median |
|---|---|---|---|---|---|---|---|---|
| CGCF | 78.05 | 82.48 | 0.4219 | 0.3900 | 53.96 | 57.06 | 1.2487 | 1.2247 |
| ConfVAE | 77.84 | 88.20 | 0.4154 | 0.3739 | 55.20 | 59.43 | 1.2380 | 1.1417 |
| GeoMol | 71.26 | 72.00 | 0.3731 | 0.3731 | 67.16 | 71.71 | 1.0875 | 1.0586 |
| ConfGF | 88.49 | 94.31 | 0.2673 | 0.2685 | 62.15 | 70.93 | 1.1629 | 1.1596 |
| GeoDiff | 90.54 | 94.61 | 0.2090 | 0.1988 | 89.13 | 97.88 | 0.8629 | 0.8529 |
| SDDiff (ours) | 91.07 | 94.69 | 0.2048 | 0.1941 | 90.68 | 98.48 | 0.8564 | 0.8503 |

5 EXPERIMENT

5.1 EXPERIMENT SETTINGS

**Datasets.** We use two widely used datasets, GEOM-QM9 (Ramakrishnan et al., 2014) and GEOM-Drugs (Axelrod & Gómez-Bombarelli, 2022), to evaluate molecular conformation generation. The GEOM-QM9 dataset comprises molecules with an average of 11 atoms, while the GEOM-Drugs dataset consists of larger molecules with an average of 44 atoms. For a fair comparison, we adopt the same dataset split as GeoDiff (Xu et al., 2021b). For both datasets, the training set contains 40k molecules, the validation set contains 5k molecules and the test set contains 200 molecules. Please refer to GeoDiff (Xu et al., 2021b) for more details regarding the datasets.

**Evaluation metrics.** We use the COV (coverage) and MAT (matching) metrics (Xu et al.) to measure both diversity and accuracy. Specifically, we align the ground-truth and generated molecules with the Kabsch algorithm (Kabsch, 1976) and then calculate their difference as the root-mean-square deviation (RMSD). COV and MAT are defined as follows:

$$\text{COV} = \frac{1}{|S_r|} \left| \left\{ C \in S_r \mid \text{RMSD}(C, C') < \delta, \; \exists C' \in S_g \right\} \right|, \quad \text{MAT} = \frac{1}{|S_r|} \sum_{C \in S_r} \min_{C' \in S_g} \text{RMSD}(C, C')$$

where $S_g$ and $S_r$ denote the sets of generated and ground-truth conformations, respectively. Following the baselines (Xu et al., 2021b; Ganea et al., 2021), we set the COV threshold to $\delta = 0.5$ Å for GEOM-QM9 and $\delta = 1.25$ Å for GEOM-Drugs, and generate twice the number of ground-truth conformations for evaluation.
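Given a precomputed matrix of pairwise RMSDs between the reference and generated sets, both metrics reduce to a few lines; the helper name and the toy input below are ours:

```python
import numpy as np

def cov_mat(rmsd, delta):
    """COV and MAT from a pairwise RMSD matrix of shape (|S_r|, |S_g|),
    rows = reference conformations, columns = generated ones.
    COV: fraction of references matched by some generated conformation
    within threshold delta. MAT: mean over references of the minimum
    RMSD to the generated set."""
    min_rmsd = rmsd.min(axis=1)          # best match for each reference
    cov = float((min_rmsd < delta).mean())
    mat = float(min_rmsd.mean())
    return cov, mat

rmsd = np.random.rand(10, 20)            # toy RMSD matrix
print(cov_mat(rmsd, delta=0.5))
```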
**Baselines.** We choose five state-of-the-art models for comparison. GeoMol (Ganea et al., 2021) is not a fully generative model: it assembles conformations by hand from predicted molecular information. CGCF (Shi et al., 2021) is a two-step method, and ConfVAE (Xu et al., 2021a) is a VAE-based model. ConfGF (Shi et al., 2021) and GeoDiff (Xu et al., 2021b) are two related works that are also diffusion-based. Other implementation details are provided in Appendix D.

5.2 RESULTS AND ANALYSIS

The results of molecular conformation generation are shown in Table 1. The baseline results are obtained from GeoDiff (Xu et al., 2021b). In order to mitigate the impact of the model's backbone and primarily evaluate the efficacy of the distance distribution modeling, we opt for a backbone that closely resembles that of GeoDiff. This enables us to assess the performance of the distance distribution modeling more accurately, while minimizing potential confounding effects of the model's underlying architecture. Visualizations of selected generated conformations can be found in Appendix G.

Figure 4: The ground truth depicted in blue is the distribution of $\sigma \nabla_{\tilde{d}} \log p(\tilde{d}|d)$, whereas the distribution of the model's outputs is represented by a dashed orange line. It can be observed that as the value of $\sigma$ increases, $\sigma \nabla_{\tilde{d}} \log p(\tilde{d}|d)$ tends to exhibit the characteristics of a long-tailed Gaussian distribution. For a detailed introduction to the figure, we refer readers to Appendix E.

**Score distribution.** In the existing literature, the ground-truth score function follows a normal distribution; specifically, the score matching target is set to $\sigma \nabla_{\tilde{x}} \log p(\tilde{x}|x) \sim \mathcal{N}(0, I)$. The proposed distance distribution diverges from the Gaussian distribution when the perturbation level is significantly large and requires the model to parametrize a non-Gaussian distribution. In order to investigate the efficacy of existing backbones in approximating such a distribution, we visually depict the distribution of the score functions (not the inter-atomic distances), along with our backbone's output, under varying levels of perturbation. The results are shown in Fig. 4. It is evident that our proposed distribution closely resembles the Gaussian distribution when $\sigma$ is reasonably small. Conversely, when $\sigma$ is substantially large, the proposed score function turns into a long-tailed Gaussian distribution. Despite this alteration, the model's output distribution still approximates the proposed score function effectively. This substantiates that the proposed distribution can be effortlessly approximated, and thus can be incorporated into a wide array of models.

**Planar structure generation.** As mentioned in Eq. 8, the score function of the distances can be transformed into the score function of the conformation almost surely, provided that the conformation is non-planar. Nonetheless, certain molecular structures, like benzene rings, exhibit a planar conformation within local regions, which may render this transformation inapplicable (see Fig. 5). A viable solution for further optimizing these local planar structures involves post-processing with variants of rule-based methods (e.g., force fields), which encode the invariant property of certain local structures, such as benzene rings, being planar; a sketch of such a planarity check follows.

Figure 5: Atoms in a benzene ring should be coplanar, as in the ground-truth structure, while the generated structure may violate this property.
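One simple rule that such post-processing could encode is a planarity test for ring atoms. The sketch below (our own helper, not part of SDDiff) measures the RMS deviation of a set of ring atoms from their best-fit plane via SVD:

```python
import numpy as np

def planarity_deviation(ring_coords: np.ndarray) -> float:
    """RMS distance of ring atoms from their best-fit plane; the direction
    of least variance (smallest singular vector) is the plane normal.
    Returns ~0 for a perfectly planar ring."""
    X = ring_coords - ring_coords.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    normal = vt[-1]                        # direction of least variance
    return float(np.sqrt(((X @ normal) ** 2).mean()))

# Toy usage on an ideal hexagonal ring with ~1.39 Angstrom C-C bonds:
angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
ring = np.stack([1.39 * np.cos(angles), 1.39 * np.sin(angles), np.zeros(6)], axis=1)
print(planarity_deviation(ring))                                  # ~0
print(planarity_deviation(ring + 0.05 * np.random.randn(6, 3)))   # > 0
```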
6 CONCLUSION

In this study, we present a novel molecular conformation generation approach, SDDiff, by incorporating a shifting score function inspired by molecular thermodynamics. Our main finding is that the distribution of the change of inter-atomic distances shifts from a Gaussian to a Maxwell-Boltzmann distribution under a Gaussian perturbation kernel on the molecular conformation, and that this shifting distribution can be accurately approximated by our approach. By proposing a diffusion-based generative model with a shifting score kernel, we have provided both the mathematical derivation and the experimental validation of its correctness. The effectiveness of our approach has been demonstrated by achieving new state-of-the-art results on two widely used molecular conformation generation benchmarks, GEOM-Drugs and GEOM-QM9. Our method effectively captures the essential aspects of molecular dynamics and inter-atomic interactions, leading to improved performance in generating accurate and feasible molecular conformations.

REFERENCES

Simon Axelrod and Rafael Gómez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. *Scientific Data*, 9(1):185, 2022. doi: 10.1038/s41597-022-01288-4. URL https://doi.org/10.1038/s41597-022-01288-4

Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. *Nature Communications*, 13(1):2453, 2022.

Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. Iterative Bregman projections for regularized transportation problems. *SIAM Journal on Scientific Computing*, 37(2):A1111–A1138, 2015.

Stefan Chmiela, Huziel E Sauceda, Igor Poltavsky, Klaus-Robert Müller, and Alexandre Tkatchenko. sgdml: Constructing accurate and data efficient molecular force fields using machine learning. *Computer Physics Communications*, 240:38–45, 2019.

Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. *Advances in Neural Information Processing Systems*, 33:1970–1981, 2020.

Octavian Ganea, Lagnajit Pattanaik, Connor Coley, Regina Barzilay, Klavs Jensen, William Green, and Tommi Jaakkola. Geomol: Torsional geometric generation of molecular 3d conformer ensembles. *Advances in Neural Information Processing Systems*, 34:13757–13769, 2021.

Johannes Gasteiger, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. *arXiv preprint arXiv:2003.03123*, 2020.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *International Conference on Machine Learning*, pp. 1263–1272. PMLR, 2017.

David J Griffiths and Darrell F Schroeter. *Introduction to Quantum Mechanics*. Cambridge University Press, 2018.

Jiaqi Han, Wenbing Huang, Tingyang Xu, and Yu Rong. Equivariant graph hierarchy-based neural networks. *Advances in Neural Information Processing Systems*, 35:9176–9187, 2022.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in Neural Information Processing Systems*, 33:6840–6851, 2020.

Emiel Hoogeboom, Victor García Satorras, Clément Vignac, and Max Welling.
Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022.

Bowen Jing, Gabriele Corso, Jeffrey Chang, Regina Barzilay, and Tommi S Jaakkola. Torsional diffusion for molecular conformer generation. In *Advances in Neural Information Processing Systems*.

Wolfgang Kabsch. A solution for the best rotation to relate two sets of vectors. *Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography*, 32(5):922–923, 1976.

Paul Labute. LowModeMD: implicit low-mode velocity filtering applied to conformational search of macrocycles and protein loops. *Journal of Chemical Information and Modeling*, 50(5):792–800, 2010.

Benjamin Kurt Miller, Mario Geiger, Tess E Smidt, and Frank Noé. Relevance of rotationally equivariant convolutions for predicting molecular properties. *arXiv preprint arXiv:2008.08461*, 2020.

Frank Noé, Simon Olsson, Jonas Köhler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. *Science*, 365(6457):eaaw1147, 2019.
QLoepRnoue
In the formulation of decoding (i.e., the equation between eq. (2) and eq. (3)), can you please clarify why the orthogonality property ensures that $E_X(x_i) ⊘ E_X(x_0)$ will produce a vector orthogonal to $E_X(x_0)$ when the distance between the two samples is large? Also, what does the noise mean? Does it mean that it is near zero, so that it is a negligible component?
Decodable and Sample Invariant Continuous Object Encoder

Dehao Yuan, Furong Huang, Cornelia Fermüller & Yiannis Aloimonos
Department of Computer Science
University of Maryland
College Park, MD 20740, USA
{dhyuan, furongh, fermulcm, jyaloimo}@umd.edu

Abstract

We propose Hyper-Dimensional Function Encoding (HDFE). Given samples of a continuous object (e.g., a function), HDFE produces an explicit vector representation of the given object, invariant to the sample distribution and density. Sample distribution and density invariance enables HDFE to consistently encode continuous objects regardless of their sampling, and therefore allows neural networks to receive continuous objects as inputs for machine learning tasks, such as classification and regression. Besides, HDFE does not require any training and is proven to map the object into an organized embedding space, which facilitates the training of downstream tasks. In addition, the encoding is decodable, which enables neural networks to regress continuous objects by regressing their encodings. Therefore, HDFE serves as an interface for processing continuous objects. We apply HDFE to function-to-function mapping, where vanilla HDFE achieves competitive performance with the state-of-the-art algorithm. We apply HDFE to point cloud surface normal estimation, where simply replacing PointNet with HDFE leads to 12% and 15% error reductions on two benchmarks. In addition, by integrating HDFE into the PointNet-based SOTA network, we improve the SOTA baseline by 2.5% and 1.7% on the same benchmarks.

1 Introduction

Continuous objects are objects that can be sampled with arbitrary distribution and density. Examples include point clouds [Guo et al., 2020], event-based vision data [Gallego et al., 2020], and sparse meteorological data [Lu et al., 2021]. A crucial characteristic of continuous objects, which poses a challenge for learning, is that their sample distribution and size vary between training and test sets. For example, point cloud data in the testing phase may be sparser or denser than that in the training phase. A framework that handles this inconsistency is essential for continuous object learning.

When designing the framework, four properties are desirable: (1) Sample distribution invariance: the framework is not affected by the distribution from which the samples are collected. (2) Sample size invariance: the framework is not affected by the number of samples. (3) Explicit representation: the framework generates outputs with fixed dimensions, such as fixed-length vectors. (4) Decodability: the continuous object can be reconstructed at arbitrary resolution from the representation. Sample invariance (properties 1 and 2) ensures that differently sampled instances of the same continuous objects are treated consistently, thereby eliminating the ambiguity caused by variations in sampling. An explicit representation (property 3) enables a neural network to receive continuous objects as inputs, by consuming the encodings of the objects. Decodability (property 4) enables a neural network to predict a continuous object, by first predicting the representation and then decoding it back to the continuous object. Fig. 1 illustrates the properties and their motivations.

However, existing methodologies, which we divide into three categories, are limited when incorporating the four properties. (1) Discrete framework. These methods discretize continuous objects and process them with neural networks. For example, Liu et al.
[2019] uses a 3D-CNN to process voxelized point clouds, and Kim et al. [2017] uses an RNN to predict particle trajectories. These methods are not sample invariant: the spatial and temporal resolution must be consistent across the training and testing phases.

Figure 1: **Left:** HDFE encodes continuous objects into fixed-length vectors without any training. The encoding is not affected by the distribution and size with which the object is sampled. The encoding can be decoded to reconstruct the continuous object. **Right:** Applications of HDFE. HDFE can be used to perform machine learning tasks (e.g., classification, regression) on continuous objects. HDFE also enables neural networks to regress continuous objects by predicting their encodings.

(2) **Mesh-grid-based framework.** These methods operate on continuous objects defined on mesh grids and achieve discretization invariance (the framework is not affected by the resolution of the grids). Examples include the Fourier transform (Salih [2012]) and the neural operator (Li et al. [2020a]). But they do not apply to sparse data like point clouds. (3) **Sparse framework.** These methods operate on sparse samples drawn from the continuous object. Kernel methods (Hofmann et al. [2008]) work for non-linear regression, classification, etc., but they do not provide an explicit representation of the function. PointNet (Qi et al. [2017a]) receives sparse point cloud input and produces an explicit representation, but the representation is not decodable (see Appendix B). In addition, all these frameworks require extra training of the encoder, which is undesirable in some scarce-data scenarios.

Currently, only the vector function architecture (VFA) (Frady et al. [2021]) can encode an explicit function into a vector through sparse samples while preserving all four properties. However, VFA is limited by its strong assumption on the functional form. VFA requires the input function to conform to \( f(x) = \sum_k \alpha_k \cdot K(x, x_k) \), where \( K : X \times X \rightarrow \mathbb{R} \) is a kernel defined on \( X \). If the input function does not conform to this form, VFA cannot be applied or induces large errors. In practice, such a requirement is rarely satisfied. For example, \( f(x) \) cannot even approximate a constant function \( g(x) = 1 \): to approximate the constant function, the kernel \( K \) must be constant, but with a constant kernel, \( f(x) \) cannot approximate other, non-constant functions. This limitation greatly hinders the application of VFA. Kindly refer to Appendix C for failure cases and detailed discussions.

We propose hyper-dimensional function encoding (HDFE), which does not assume any explicit form of the input functions but only requires Lipschitz continuity (Appendix D illustrates some suitable input types). Consequently, HDFE can encode a much larger class of functions while upholding all four properties without any training. Thanks to this relaxation, HDFE can be applied to multiple real-world applications where VFA fails, as elaborated in the experiment section. HDFE maps the samples to a high-dimensional space and computes weighted averages of the samples in that space to capture the collective information of all the samples. A challenge in the design of HDFE is maintaining sample invariance, for which we propose a novel iterative refinement process to decide the weight of each sample.
The contributions of our paper can be summarized as follows: - We present HDFE, an encoder for continuous objects without any training that exhibits sample invariance, decodability, and distance-preservation. To the best of our knowledge, HDFE is the only algorithm that can encode Lipschitz functions while upholding all the four properties. - We provide extensive theoretical foundation for HDFE. We prove that HDFE is equipped with all the desirable properties. We also verify them with empirical experiments. - We evaluate HDFE on mesh-grid data and sparse data. In the mesh-grid data domain, HDFE achieves competitive performance as the specialized state-of-the-art (SOTA) in function-to-function mapping tasks. In the sparse data domain, replacing PointNet with HDFE leads to average error decreases of 12% and 15% in two benchmarks, and incorporating HDFE into the PointNet-based SOTA architecture leads to average error decreases of 2.5% and 1.7%. ## 2 Problem Definition and Methodology Let \( F \) be the family of \( c \)-Lipschitz continuous functions defined on a compact domain \( X \) with a compact range \( Y \). In other words, \( \forall f \in F, f : X \rightarrow Y \) and \( d_Y(f(x_1), f(x_2)) \leq c \cdot d_X(x_1, x_2) \), where \((X, d_X)\) and \((Y, d_Y)\) are metric spaces, and \(c\) is the Lipschitz constant. Our goal is to find a representation algorithm that can encode a function \(f \in F\) into a vector representation \(F \in \mathbb{C}^N\). To construct it, we will feed samples of the function mapping \(\{(x_i, f(x_i))\}\) to the representation algorithm, which will generate the vector representation based on these samples. We require the function representation to satisfy the following: (1) Sample distribution invariance: the function representation is “not affected” by the distribution from which the samples are collected. (2) Sample size invariance: the function representation is “not affected” by the number of samples. (3) Fixed-length representation: all functions are represented by fixed-length vectors. (4) Decodability: as new inputs query the function representation, it can reconstruct the function values. To better formalize the heuristic expression of “not affected” in Properties 1 and 2, we introduce the definition of asymptotic sample invariance to formulate an exact mathematical expression: **Definition 1 (Asymptotic Sample Invariance).** Let \(f : X \rightarrow Y\) be the function to be encoded, \(p : X \rightarrow (0, 1)\) be a probability density function (pdf) on \(X\), \(\{x_i\}_{i=1}^n \sim p(X)\) be \(n\) independent samples of \(X\). Let \(F_n\) be the representation computed from the samples \(\{x_i, f(x_i)\}_{i=1}^n\), asymptotic sample invariance implies \(F_n\) converges to a limit \(F_\infty\) independent of the pdf \(p\). In this definition, sample size invariance is reflected because the distance between \(F_m\) and \(F_n\) can be arbitrarily small as \(m, n\) become large. Sample distribution invariance is reflected because the limit \(F_\infty\) does not depend on the pdf \(p\), as long as \(p\) is supported on the whole input space \(X\). With the problem definition above, we present our hyper-dimensional function encoding (HDFE) approach. Sec. 2.1 introduces how HDFE encodes explicit functions. Sec. 2.2 generalizes HDFE to implicit function encoding. Sec. 2.3 realizes HDFE for vector-valued function encoding. Finally, Sec. 2.4 establishes the theorems that HDFE is asymptotic sample invariant and distance-preserving. 
Throughout the section, we assume the functions are \(c\)-Lipschitz continuous. The assumption will also be explained in Section 2.4. Kindly refer to Appendix A for the table of notations.

### 2.1 Explicit Function Encoding

**Encoding** HDFE is inspired by the methodology of hyper-dimensional computing (HDC) [Kleyko et al., 2023], where one encodes an indefinite number of data points into a fixed-length vector. The common practice is to first map the data points to a high-dimensional space and then average the data point representations in that space. The resulting superposed vector can represent the distribution of the data. Following this idea, we represent an explicit function as the superposition of its samples:

\[ F = \sum_i w_i \cdot E(x_i, y_i) \quad (1) \]

where \(E\) maps function samples to a high-dimensional space \(\mathbb{C}^N\). Two questions remain: (a) how to design the mapping \(E\) so that the vector is decodable; (b) how to determine the weight \(w_i\) of each sample so that the representation is sample invariant. We answer question (a) first and leave question (b) to the iterative refinement section.

Regarding the selection of \(E(x, y)\), a counter-example is a linear mapping: under a linear map, the superposition of the sample encodings degenerates to the encoding of the average sample, which does not represent the function. To avoid such degeneration, the encodings of two samples should not interfere with each other if the samples are far apart. Specifically, if \(d_X(x_1, x_2)\) is larger than a threshold \(\epsilon_0\), their function values \(f(x_1), f(x_2)\) may differ significantly. In this case, we want \(E(x_1, y_1)\) to be orthogonal to \(E(x_2, y_2)\) to avoid interference. On the other hand, if \(d_X(x_1, x_2)\) is smaller than the threshold \(\epsilon_0\), then by the Lipschitz continuity, the distance between the function values \(d_Y(f(x_1), f(x_2))\) is bounded by \(c\epsilon_0\). In this case, we want \(E(x_1, y_1)\) to be similar to \(E(x_2, y_2)\). We call the tunable threshold \(\epsilon_0\) the receptive field of HDFE, which will be discussed in Sec. 2.4. Denoting the similarity between vectors as \(\langle \cdot, \cdot \rangle\), the requirement can be formulated as:

\[ \langle E(x, y), E(x', y') \rangle \begin{cases} \approx 1 & \text{if } d_X(x, x') < \epsilon_0 \\ \text{decays to 0 quickly} & \text{if } d_X(x, x') > \epsilon_0 \end{cases} \quad (2) \]

In addition to avoiding degeneration, we also require the encoding to be decodable. This can be achieved by factorizing \(E(x, y)\) into two components: we first map \(x_i\) and \(y_i\) to the high-dimensional space \(\mathbb{C}^N\) through two different mappings \(E_X\) and \(E_Y\). To ensure equation (2) is satisfied, we require \(\langle E_X(x), E_X(x') \rangle \approx 1\) when \(d_X(x, x') < \epsilon_0\) and that it decays to 0 otherwise. The property of \(E_Y\) will be mentioned later in the discussion of decoding. Finally, we compute the joint embedding of \(x_i\) and \(y_i\) through a binding operation \(\otimes\): \(E(x_i, y_i) = E_X(x_i) \otimes E_Y(y_i)\). We will show that the representation is decodable if the binding operation satisfies these properties:

1. commutative: \(x \otimes y = y \otimes x\)
2. distributive: \(x \otimes (y + z) = x \otimes y + x \otimes z\)
3. similarity preserving: \(\langle x \otimes y, x \otimes z \rangle = \langle y, z \rangle\).
4. invertible: there exists an associative, distributive, similarity preserving operator that undoes the binding, called unbinding \(\ominus\), satisfying \((x \otimes y) \ominus z = (x \ominus z) \otimes y\) and \((x \otimes y) \ominus x = y\).

The binding and unbinding operations are analogous to multiplication and division, with the difference that they operate on vectors and are similarity preserving.

**Decoding** With the properties of the two operations, the decoding of the function representation can be performed by a similarity search. Given the function representation \(F \in \mathbb{C}^N\) and a query input \(x_0 \in X\), the estimated function value \(\hat{y}_0\) is computed by:

\[ \hat{y}_0 = \arg\max_{y \in Y} \langle F \ominus E_X(x_0), E_Y(y) \rangle \quad (3) \]

The distributive property allows the unbinding operation to be performed sample by sample. The invertible property allows the unbinding operation to recover the encoding of the function values: \(E_X(x_i) \otimes E_Y(f(x_i)) \ominus E_X(x_0) \approx E_Y(f(x_i))\) when \(d_X(x_0, x_i)\) is small. The similarity preserving property ensures that \([E_X(x_i) \ominus E_X(x_0)] \otimes E_Y(f(x_i))\) produces a vector orthogonal to \(E_Y(f(x_0))\) when the distance between the two samples is large, resulting only in a summation of noise. The following formula illustrates the idea and Appendix E details the derivation.

\[ F \ominus E_X(x_0) = \sum_i w_i \cdot \left[ E_X(x_i) \otimes E_Y(f(x_i)) \right] \ominus E_X(x_0) \]
\[ = \underbrace{\sum_{d(x_0, x_i) < \epsilon_0} w_i \cdot E_Y(f(x_i))}_{\approx\, E_Y(f(x_0))} + \underbrace{\sum_{d(x_0, x_i) > \epsilon_0} w_i \cdot \left[ E_X(x_i) \ominus E_X(x_0) \right] \otimes E_Y(f(x_i))}_{\text{noise, since orthogonal to } E_Y(f(x_0))} \]

After computing \(F \ominus E_X(x_0)\), we search for \(y \in Y\) such that the cosine similarity between \(E_Y(y)\) and \(F \ominus E_X(x_0)\) is maximized. We desire the gradient \(\frac{\partial}{\partial y} \langle E_Y(y), E_Y(y') \rangle\) to be non-zero for all \(y\) and \(y'\) so that the optimization can be solved by gradient descent. See Appendix F for the detailed formulation.

Since the decoding only involves measuring cosine similarity, in the last step, we normalize the function representation to achieve sample size invariance without inducing any loss:

\[ F = \text{normalize}\left(\sum_i w_i \cdot \left[ E_X(x_i) \otimes E_Y(f(x_i)) \right] \right) \quad (4) \]

**Iterative refinement for sample distribution invariance** In equation (4), we are left to determine the weight of each sample so that the representation is sample invariant. To address this, we propose an iterative refinement process to make the encoding invariant to the sample distribution. We initialize \(w_i = 1\) and compute the initial function vector. Then we compute the similarity between the function vector and the encoding of each sample. We add the sample encoding with the lowest similarity to the function vector and repeat this process until the lowest similarity no longer increases. By doing so, the output will be the center of the smallest ball containing all the sample encodings. Such an output is asymptotically sample invariant because this ball converges to the smallest ball containing \(\{E_X(x) \otimes E_Y(f(x)) : x \in X\}\) as the sample size grows, and the limit ball only depends on the function. We leave the formal proof to Appendix H.1. In Appendix I.3, we introduce a practical implementation of the iterative refinement for saving computational cost.
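As a concrete reference for the procedure just described, here is a minimal numpy sketch (our illustration; the function name and stopping bookkeeping are ours) that solves for the weights given precomputed sample encodings:

```python
import numpy as np

def refine(S, max_iters=1000):
    """Iterative refinement: S is an (n, N) complex array whose rows are
    the sample encodings E_X(x_i) bound with E_Y(y_i). Returns a normalized
    function vector approximating the center of the smallest ball that
    contains all sample encodings."""
    w = np.ones(len(S))              # initialize w_i = 1 (records the weights)
    F = S.sum(axis=0)
    lowest = -np.inf
    for _ in range(max_iters):
        Fn = F / np.linalg.norm(F)
        sims = (S @ Fn.conj()).real  # similarity of F to each sample encoding
        if sims.min() <= lowest:     # lowest similarity stopped increasing
            break
        lowest = sims.min()
        k = int(np.argmin(sims))
        w[k] += 1.0                  # add the least-represented sample again
        F = F + S[k]
    return F / np.linalg.norm(F)
```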
### 2.2 Implicit Function Encoding

Generalizing HDFE to implicit functions is fairly straightforward. Without loss of generality, we assume an implicit function is represented as \(f(x) = 0\). Then it can be encoded using equation (5), where the weights \(w_x\) are determined by the iterative refinement:

\[ F_{f=0} = \text{normalize}\left( \sum_{x : f(x)=0} w_x \cdot E_X(x) \right) \quad (5) \]

The formula can be understood as encoding an explicit function \(g\), where \(g(x) = 1\) if \(f(x) = 0\) and \(g(x) = 0\) if \(f(x) \neq 0\). Then, by choosing \(E_Y(1) = 1\) and \(E_Y(0) = 0\) in equation (4), we obtain equation (5). The formula also admits a simple interpretation: a continuous object can be represented as the summation of its samples in a high-dimensional space.

### 2.3 Vector-Valued Function Encoding

In the previous sections, we established a theoretical framework for encoding \(c\)-Lipschitz continuous functions. In this section, we put this framework into practice by carefully choosing appropriate input and output mappings \(E_X\), \(E_Y\), the binding operator \(\otimes\), and the unbinding operator \(\ominus\) in equation (4). We first state our choice and then explain the motivation behind it.

**Formulation** Let \((x, y)\) be one of the function samples, where \(x \in \mathbb{R}^m\) and \(y \in \mathbb{R}\). The mappings \(E_X : X \to \mathbb{C}^N\), \(E_Y : \mathbb{R} \to \mathbb{C}^N\) and the operations \(\otimes\) and \(\ominus\) are chosen as:

\[ E_X(x) := \exp\left( i \cdot \alpha \frac{\Phi x}{m} \right) \quad E_Y(y) := \exp\left( i \beta \Psi y \right) \]
\[ E_X(x) \otimes E_Y(y) := \exp\left( i \cdot \alpha \frac{\Phi x}{m} + i \beta \Psi y \right) \quad (6) \]
\[ E_X(x) \ominus E_Y(y) := \exp\left( i \cdot \alpha \frac{\Phi x}{m} - i \beta \Psi y \right) \]

where \(i\) is the imaginary unit, and \(\Phi \in \mathbb{R}^{N \times m}\) and \(\Psi \in \mathbb{R}^N\) are fixed random matrices whose elements are drawn from the standard normal distribution. \(\alpha\) and \(\beta\) are hyper-parameters controlling the properties of the mappings.

**Motivation** The above way of mapping real vectors to high-dimensional spaces is modified from Komer & Eliasmith (2020), known as fractional power encoding (FPE). We introduce the motivation for adopting this technique heuristically; in Appendix C, we elaborate on the relation between FPE and radial basis function (RBF) kernels, which gives a rigorous reason for adopting it. First, the mappings are continuous, which avoids losses when mapping samples to the embedding space. Second, the receptive field of the input mapping \(E_X\) (the \(\epsilon_0\) in equation (2)) can be adjusted easily by manipulating \(\alpha\); Fig. 2a demonstrates how manipulating \(\alpha\) alters the behavior of \(E_X\). Typically, \(\alpha\) has a magnitude of 10 for capturing the high-frequency components of the function. Third, the decodability of the output mapping \(E_Y\) can easily be achieved by selecting an appropriate \(\beta\). We select \(\beta\) such that \(\langle E_Y(0), E_Y(1) \rangle\) is equal to 0, to utilize the space \(\mathbb{C}^N\) maximally while keeping the gradient of \(\langle E_Y(y_1), E_Y(y_2) \rangle\) non-zero for all \(y_1\) and \(y_2\). Per the illustration in Fig. 2a, the optimal choice for \(\beta\) is 2.5. Finally, the binding and unbinding operators are defined as the element-wise multiplication and division of complex vectors, which satisfy the required properties.
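Putting the pieces together, a minimal numpy sketch of equations (4) and (3) under the FPE mappings (6) might look as follows (our illustration; the hyper-parameter values are assumptions, and the uniform weights can be swapped for the iterative refinement of Sec. 2.1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 4000, 1                       # embedding and input dimensions
alpha, beta = 15.0, 2.5              # receptive field / decodability knobs
Phi = rng.standard_normal((N, m))    # fixed random projections
Psi = rng.standard_normal(N)

E_X = lambda x: np.exp(1j * alpha * (Phi @ x) / m)   # equation (6)
E_Y = lambda y: np.exp(1j * beta * Psi * y)

def encode(xs, ys):                  # equation (4) with uniform weights
    F = sum(E_X(x) * E_Y(y) for x, y in zip(xs, ys))  # binding = elementwise *
    return F / np.linalg.norm(F)

def decode(F, x0, y_grid):           # equation (3): similarity search over Y
    v = F / E_X(x0)                  # unbinding = elementwise division
    scores = [np.vdot(E_Y(y), v).real for y in y_grid]
    return y_grid[int(np.argmax(scores))]

# Toy check: encode f(x) = sin(2*pi*x) from 500 non-uniform samples.
xs = rng.beta(2.0, 2.0, size=(500, 1))
ys = np.sin(2 * np.pi * xs[:, 0])
F = encode(xs, ys)
print(decode(F, np.array([0.25]), np.linspace(-1.2, 1.2, 241)))
# expected to be close to 1.0 with these (assumed) settings
```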
### 2.4 Properties of HDFE

HDFE produces an explicit, decodable representation of functions. In this section, we state a theorem on asymptotic sample invariance, completing the claim that HDFE satisfies all four desirable properties. We study the effect of the receptive field on the behavior of HDFE, state that HDFE is distance-preserving, and discuss the potential of scaling HDFE to high-dimensional data. We leave the proofs to Appendix H and verify the claims with empirical experiments in Appendix I. Appendix I also includes several further empirical studies of HDFE: the cost of the iterative refinement and its practical implementation, the effectiveness of sample invariance in a synthetic regression problem, and an analysis of the information loss when encoding continuous objects.

**Theorem 1** (Sample Invariance). HDFE is asymptotically sample invariant (Definition 1).

HDFE being sample invariant ensures that functions realized with different sampling schemes are treated invariantly. Kindly refer to Appendix H.1 and I.1 for the proof and empirical experiments.

**Theorem 2** (Distance Preserving). Let \( f, g : X \rightarrow Y \) both be \( c \)-Lipschitz continuous; then their L2-distance is preserved in the encoding. In other words, HDFE is an isometry: for some constants \(a, b\),

\[ \|f - g\|_{L_2}^2 = \int_{x \in X} |f(x) - g(x)|^2 \, dx \approx b - a \langle F, G \rangle \]

HDFE being isometric indicates that HDFE encodes functions into an organized embedding space, which can reduce the complexity of the machine learning architecture when training downstream tasks on the functions. Kindly refer to Appendix H.2 and I.2 for the proof and empirical experiment.

**Effect of receptive field** Fig. 2 shows the reconstruction results of a 1d function \( f : \mathbb{R} \rightarrow \mathbb{R} \), which demonstrate that HDFE can reconstruct the original function given a suitable receptive field and a sufficiently large embedding space. When using a large receptive field (Fig. 2b), the high-frequency components are missed by HDFE. When using a small receptive field (Fig. 2c), the high-frequency components can be captured, but this may cause incorrect reconstruction if the dimension of the embedding space is not large enough. Fortunately, such reconstruction failures can be eliminated by increasing the dimension of the embedding space (Fig. 2d).

Figure 2: (a) How \( \alpha \) and \( \beta \) in equation (6) affect the receptive field of HDFE, shown as the similarity between \( E_X(x) \) and \( E_X(0) \). (b)–(d) Functions can be reconstructed accurately given a suitable receptive field and encoding dimension: (b) large receptive field, dimension 1000; (c) small receptive field, dimension 1000; (d) small receptive field, dimension 2000. To capture the high-frequency component of the function, a small receptive field and a high dimension are required.

**Scale to high-dimensional input** HDFE produces the function encoding from sparsely collected samples. Unlike mesh-grid-based methods, which require a mesh-grid domain and suffer from an exponential increase in memory and computational cost as the data dimension increases, HDFE uses superposition to encode all the samples defined in the support of the function. This means the required dimensionality only depends on the size of the support, not on the data dimensionality. Even if the data dimensionality is high, HDFE can mitigate this issue as long as the data reside in a low-rank subspace.
Appendix I.5 gives an empirical experiment with high-dimensional input to show the potential of HDFE to work in low-rank high-dimensional scenarios.

3 EXPERIMENT

In this section, we present two applications of HDFE. Sec. 3.1 showcases how HDFE can be leveraged for solving partial differential equations (PDE). This exemplifies how HDFE can enhance neural networks to receive function inputs and produce function outputs. In Sec. 3.2, we apply HDFE to predict the surface normal of point clouds. This demonstrates how HDFE can enhance neural networks to process implicit functions and extract relevant attributes.

3.1 PDE SOLVER

Several neural networks have been developed to solve partial differential equations (PDE), such as the Fourier neural operator (Li et al., 2020a). In this section, we compare our approach using HDFE against the current approaches and show that we achieve on-par performance. VFA does not apply to this problem since the input and output functions do not conform to the form that VFA requires.

**Architecture** To solve PDEs using neural networks, we first encode the PDE and its solution into their vector embeddings using HDFE. Then, we train a multi-layer perceptron to map the embedding of the PDE to the embedding of its solution. The optimization target is the cosine similarity between the predicted embedding and the true embedding. Since the embeddings are complex vectors, we adopt a Deep Complex Network (Trabelsi et al., 2017) as the architecture of the multi-layer perceptron. The details are presented in Appendix I.1. Once the model is trained, we use it to predict the embedding of the solution, which is then decoded to obtain the actual solution.

**Dataset** We use the 1d Burgers' Equation (Su & Gardner, 1969) and 2d Darcy Flow (Tek, 1957) to evaluate our method. The error is measured by the absolute distance between the predicted solution and the ground-truth solution. The benchmark (Li et al., 2020c) has been widely used to evaluate neural operators. For the 1d Burgers' Equation, it provides 2048 PDEs and their solutions, sampled on a 1d mesh grid at a resolution of 8192. For the 2d Darcy Flow, it provides 2048 PDEs and their solutions, sampled on a 2d mesh grid at a resolution of \(241 \times 241\).

**Baselines** We evaluate HDFE against other neural-network PDE-solving methods: PCANN (Bhattacharya et al., 2021); MGKN, the Multipole Graph Neural Operator (Li et al., 2020c); and FNO, the Fourier Neural Operator (Li et al., 2020a).

Figure 3: HDFE solves a PDE by predicting the encoding of its solution and then reconstructing it at query points, so the error consists of a function encoding prediction error and a reconstruction error. **Left**: Prediction error of different methods under different testing resolutions, evaluated on the 1d Burgers' equation. **Mid**: The reconstruction error (in HDFE) dominates the function encoding prediction error, while the reconstruction error can be reduced by increasing the dimensionality of the embedding. **Right**: Prediction error of different methods evaluated on 2d Darcy Flow.

When decoding is required, our approach achieves \(\sim 55\%\) lower prediction error than MGKN and PCANN and competitive performance to FNO. The error of HDFE consists of two components: the first is the error arising from predicting the solution embedding, and the second is the reconstruction error arising when decoding the solution from the predicted embedding.
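To ground the architecture described above, here is a hedged PyTorch sketch (our illustration: the class and function names are ours, and we substitute a plain real-valued MLP on stacked real/imaginary parts for the Deep Complex Network actually used in the paper):

```python
import torch
import torch.nn as nn

def to_real(F):
    """C^N -> R^{2N}: stack real and imaginary parts so a real MLP can
    stand in for a complex-valued network."""
    return torch.cat([F.real, F.imag], dim=-1)

class EmbeddingMapper(nn.Module):
    """Maps the HDFE embedding of a PDE to the HDFE embedding of its
    solution (the paper trains a Deep Complex Network in this role)."""
    def __init__(self, N, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * N),
        )

    def forward(self, F_in):
        return self.net(F_in)

def cosine_loss(pred, target):
    """Optimization target: cosine similarity between the predicted and
    the true solution embeddings."""
    pred = pred / pred.norm(dim=-1, keepdim=True)
    target = target / target.norm(dim=-1, keepdim=True)
    return 1.0 - (pred * target).sum(dim=-1).mean()

# Training step sketch: F_pde, F_sol are precomputed HDFE encodings of the
# equation/solution pairs (complex tensors of shape (batch, N)).
# model = EmbeddingMapper(N=4000)
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = cosine_loss(model(to_real(F_pde)), to_real(F_sol))
# opt.zero_grad(); loss.backward(); opt.step()
```

At test time, the predicted embedding is decoded point by point with the similarity search of equation (3), which is the source of the reconstruction error discussed above.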
FNO, in contrast, directly predicts the solution and hence does not suffer from reconstruction error. If we consider both errors, HDFE achieves comparable performance to FNO; Fig. 3 shows the comparison. On the other hand, when decoding is not required, our approach achieves lower error than FNO. Such scenarios happen frequently when functions are used only as input, for example, the surface normal estimation problem of Sec. 3.2. Although a reconstruction error is present, it can be reduced by increasing the embedding dimension, as shown in Figure 3 (Mid). Increasing the embedding dimension may slightly increase the function prediction error, possibly because the network is not adequately trained due to limited training data and some overfitting. We conjecture that this prediction error can be reduced with more training data.

In addition to comparable performance, HDFE overcomes two limitations of FNO. First, HDFE provides an explicit function representation, resolving the restriction of FNO, which only models the mappings between functions without extracting attributes from them. Second, HDFE works not only for grid-sampled functions but also for sparsely sampled functions.

### 3.2 Unoriented Surface Normal Estimation

Next, we apply HDFE to extract attributes of functions, a setting where neither neural operators nor VFA applies, because neural operators do not consume sparse samples and VFA does not encode implicit functions. We predict the unoriented surface normal from 3d point cloud input.

**Baselines** We compare HDFE with two baselines. In the first, we compare the vanilla HDFE with PCPNet (Guerrero et al., 2018), which is a vanilla PointNet (Qi et al., 2017a) architecture: we replace the PointNet with our HDFE appended with a deep complex network (Trabelsi et al., 2017). In the second, we incorporate HDFE into HSurf-Net (Li et al., 2022), the state-of-the-art PointNet-based normal estimator. In both settings, we also compare the effect of data augmentation in the HDFE module, where we add noise to the weight of each sample when generating the patch encoding by HDFE. Kindly refer to Appendix I.2/I.3 for details.

**Dataset and metrics** We use the root mean squared angle error (RMSE) as the metric, evaluated on the PCPNet (Guerrero et al., 2018) and FamousShape (Li et al., 2022) datasets. We compare the robustness under two types of data corruption: (1) point density: sampling subsets of points in two regimes, where *gradient* simulates the effect of distance from the sensor and *stripes* simulates local occlusions; (2) point perturbations: adding Gaussian noise to the point coordinates. Table 1 reports the normal angle RMSE comparison with the baselines on PCPNet and FamousShape. Appendix K reports the ablation studies examining the effect of the receptive field size and the dimensionality.

Table 1: Unoriented normal RMSE results on the PCPNet and FamousShape datasets. Replacing PointNet with HDFE improves performance; integrating HDFE with the SOTA estimator improves its performance; applying data augmentation to HDFE improves its performance. Low/Med/High denote point-perturbation levels; Stripe/Gradient denote point-density variations.

| Method | None | Low | Med | High | Stripe | Gradient | Average |
|---|---|---|---|---|---|---|---|
| PCPNet (Guerrero et al., 2018) | 9.48 | 11.05 | 17.16 | 25.53 | 11.61 | 19 | 13.67 |
| Difference | 0.16 | 0.46 | 1.11 | 0.31 | 0.12 | 3.27 | 0.91 |
| PCPNet (PointNet → HDFE) + Aug. | 7.97 | 10.72 | 17.99 | 22.76 | 9.47 | 8.67 | 12.88 |
| Difference | 1.87 | 0.79 | 0.35 | 0.08 | 2.30 | 4.79 | 1.70 |
| HSurf-Net (Li et al., 2022) | 4.13 | 8.64 | 16.14 | 21.64 | 5.02 | 4.87 | 10.07 |
| Difference | 0.17 | 0.14 | 0.01 | 0.00 | 0.16 | 0.16 | 0.11 |
| HSurf-Net + HDFE | 3.89 | 8.78 | 16.14 | 21.65 | 4.60 | 4.51 | 9.93 |
| Difference | 0.41 | 0.00 | 0.01 | 0.01 | 0.58 | 0.52 | 0.25 |

**HDFE significantly outperforms the PointNet baseline.** When processing the local patches, we replace PointNet with HDFE followed by a neural network. This replacement leads to average error reductions of 1.70 and 3.79 on the two datasets. This is possibly because HDFE encodes the distribution of the local patch, which is guaranteed by the decodability property of HDFE; PointNet has no such guarantee. Specifically, PointNet aggregates point cloud features through a max-pooling operation, which may omit points within the point cloud and fail to adequately capture the patch's distribution. Consequently, in tasks where modeling the point cloud distribution is crucial, such as normal estimation, PointNet exhibits higher error than HDFE.

**HDFE, as a plug-in module, improves the SOTA baseline significantly.** HSurf-Net (Li et al., 2022), the SOTA method in surface normal estimation, introduces many task-specific features, such as local aggregation layers and global shift layers. Notably, HDFE does not require such features. We incorporate HDFE into HSurf-Net (see Appendix I.3 for details), where it leads to average error reductions of 0.25/0.30 on the two datasets. Such an incorporation can be performed on any PointNet-based architecture across various tasks; incorporating HDFE into other PointNet-based architectures for performance and robustness gains is a promising future research direction.

**HDFE promotes stronger robustness to point density variance.** In both comparisons and both benchmarks, HDFE exhibits stronger robustness to point density variation than its PointNet counterpart, especially in the Density-Gradient setting (error reductions of 4.79/7.37/0.52/0.46). This shows the effectiveness of HDFE's sample invariance property and of the embedding augmentation. Sample invariance ensures a stable encoding of local patches when the point density changes; the embedding augmentation is a second assurance that makes the system more robust to density variation.
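To ground the pipeline of this section, a minimal numpy sketch (our illustration; the name and the uniform weights are assumptions) of equation (5) applied to a local patch, viewed as an implicit surface \(f(x) = 0\):

```python
import numpy as np

def encode_patch(points, Phi, alpha=15.0):
    """Equation (5): encode a local point-cloud patch (an implicit surface
    f(x) = 0) as the normalized superposition of its point encodings.
    points: (n, 3) centered patch coordinates; Phi: (N, 3) fixed random
    projection. Uniform weights are used here; the iterative refinement of
    Sec. 2.1 (with noise added to the weights for the augmentation above)
    can replace them."""
    S = np.exp(1j * alpha * (points @ Phi.T) / 3.0)  # rows are E_X(x), (n, N)
    F = S.sum(axis=0)
    return F / np.linalg.norm(F)
```

The complex patch vector is then consumed, e.g., as stacked real/imaginary parts, by the downstream network that regresses the unoriented normal.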
4 RELATED WORK

4.1 MESH-GRID-BASED FRAMEWORK

These methods operate on continuous objects (functions) defined on mesh grids with arbitrary resolution. They enjoy discretization invariance, but they do not receive sparse samples as input.

**Fourier transform (FT)** can map functions from their time domains to their frequency domains. By keeping a finite number of frequencies, FT can provide a vector representation of functions. 1D-FT has been a standard technique for signal processing (Salih, 2012) and 2D-FT has been used for image processing (Jain, 1989). FT is also incorporated into deep learning architectures (Fan et al., 2019; Sitzmann et al., 2020). However, FT is not scalable, since the \(n\)-dimensional Fourier transform returns an \(n\)-dimensional matrix, which is hard to process when \(n\) gets large.

**Neural operator** is a set of toolkits to model mappings between function spaces. The technique was pioneered in DeepONet (Lu et al., 2019) and a series of tools were developed (Li et al., 2021, 2020b,c; Guibas et al., 2021; Kovachki et al., 2021) for studying the problem. The most well-known work is the Fourier Neural Operator (FNO) (Li et al., 2020a), which showed promising accuracy and computational efficiency. Though proposed in 2020, FNO is still the first choice when mapping between function spaces (Wen et al., 2023; Renn et al., 2023; Gopakumar et al., 2023). Despite their success, neural operators lack explicit function representations, and their application is limited to mappings between function spaces.

4.2 SPARSE FRAMEWORK

These methods, including HDFE, work with sparse samples from continuous objects.

**PointNet** (Qi et al., 2017a) is a neural network architecture for processing point cloud data. It uses multi-layer perceptrons to capture local features of every point and then aggregates them into a global feature vector that is invariant to the order of the input points. PointNet and its variations (Zaheer et al., 2017; Qi et al., 2017b; Joseph-Rivlin et al., 2019; Yang et al., 2019; Zhao et al., 2019; Duan et al., 2019; Yan et al., 2020) have been widely applied to sparse data processing, for example, object classification (Yan et al., 2020; Lin et al., 2019), semantic segmentation (Ma et al., 2020; Li et al., 2019), and object detection (Qi et al., 2020; Yang et al., 2020) with point cloud input. However, PointNet does not produce a decodable representation: after encoding a point cloud with PointNet, it is difficult to decide whether a point is drawn from the point cloud distribution. Besides, PointNet is also sensitive to perturbations of the input point cloud.

**Kernel methods** (Hofmann et al., 2008) are a type of machine learning algorithm that transforms data into a higher-dimensional feature space via a kernel function, such as the radial basis function (RBF) (Cortes & Vapnik, 1995), which can capture nonlinear relationships. Though kernel methods can predict function values at any query input (i.e., they are decodable) and the prediction is invariant to the size and distribution of the training data, kernel methods do not produce an explicit representation of functions, so they are only used for fitting functions, not for processing or predicting them.

**Vector function architecture** VFA (Frady et al., 2021) encodes a function of the form \(f(x) = \sum_k \alpha_k \cdot K(x, x_k)\) into a vector, where \(K : X \times X \rightarrow \mathbb{R}\) is a kernel defined on the input space. VFA and HDFE share a similar high-level idea: both map the samples to a high-dimensional space and compute a weighted average in that space. However, VFA determines the weights by relying on its assumption about the functional form, whereas HDFE uses iterative refinement to solve for the weights. The iterative refinement, coupled with the binding operation, relaxes the assumption required by VFA and enables encoding a much larger class of functions. In addition, VFA has only been demonstrated on empirical experiments such as non-linear regression and density estimation, without practical applications; in comparison, HDFE is shown to be applicable to real-world problems.

5 CONCLUSION

We introduced Hyper-Dimensional Function Encoding (HDFE), which constructs vector representations for continuous objects. The representation, without any training, is sample invariant, decodable, and isometric. These properties position HDFE as an interface for the processing of continuous objects by neural networks.
Our study demonstrates that the HDFE-based architecture attains significantly reduced errors compared to its PointNet-based counterparts, especially in the presence of density perturbations. This reveals that HDFE is a promising complement to PointNet and its variations for processing point cloud data. Adapting HDFE (e.g., imposing rotational invariance on HDFE) to tasks like point cloud classification and segmentation offers promising avenues for exploration. Still, HDFE does possess limitations in encoding capacity: for functions defined over large domains or highly non-linear functions, HDFE can experience underfitting. The exploration of techniques to enhance HDFE's capacity remains a promising research direction. Regardless, HDFE already shows strong applicability to low-dimensional (1D, 2D, 3D) inputs.

6 ACKNOWLEDGEMENT

The support of NSF under awards OISE 2020624 and BCS 2318255, and of ARL under the Army Cooperative Agreement W911NF2120076, is gratefully acknowledged. We are grateful for the in-depth discussions with Denis Kleyko, Paxon Frady, Bruno Olshausen, Christopher Kymn, Friedrich Sommer, and Pentti Kanerva, which significantly improved the manuscript.

REFERENCES

Mahmoud Abbasi, Amin Shahraki, and Amir Taherkordi. Deep learning for network traffic monitoring and analysis (NTMA): A survey. *Computer Communications*, 170:19–41, 2021.

Kaushik Bhattacharya, Bamdad Hosseini, Nikola B Kovachki, and Andrew M Stuart. Model reduction and neural networks for parametric PDEs. *The SMAI Journal of Computational Mathematics*, 7:121–157, 2021.

Yiang Chen, Dehao Yuan, Wanying Chen, Mingyun Hu, Jimmy CH Fung, Haochen Sun, and Xingcheng Lu. Estimation and variation analysis of secondary inorganic aerosols across the greater bay area in 2005 and 2015. *Chemosphere*, 292:133393, 2022.

Zhuangbin Chen, Jinyang Liu, Wenwei Gu, Yuxin Su, and Michael R Lyu. Experience report: Deep learning-based system log analysis for anomaly detection. *arXiv preprint arXiv:2107.05908*, 2021.

Corinna Cortes and Vladimir Vapnik. Support-vector networks. *Machine Learning*, 20:273–297, 1995.

Yueqi Duan, Yu Zheng, Jiwen Lu, Jie Zhou, and Qi Tian. Structural relational reasoning of point clouds. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 949–958, 2019.

Yuwei Fan, Cindy Orozco Bohorquez, and Lexing Ying. BCR-Net: A neural network based on the nonstandard wavelet form. *Journal of Computational Physics*, 384:1–15, 2019.

E Paxon Frady, Denis Kleyko, Christopher J Kymn, Bruno A Olshausen, and Friedrich T Sommer. Computing on functions using randomized vector representations. *arXiv preprint arXiv:2109.03429*, 2021.

Guillermo Gallego, Tobi Delbrück, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew J Davison, Jörg Conradt, Kostas Daniilidis, et al. Event-based vision: A survey. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(1):154–180, 2020.

Vignesh Gopakumar, Stanislas Pamela, Lorenzo Zanisi, Zongyi Li, Anima Anandkumar, and MAST Team. Fourier neural operator for plasma modelling. *arXiv preprint arXiv:2302.06542*, 2023.

Paul Guerrero, Yanir Kleiman, Maks Ovsjanikov, and Niloy J Mitra. PCPNet: Learning local shape properties from raw point clouds. In *Computer Graphics Forum*, volume 37, pp. 75–85. Wiley Online Library, 2018.

John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: Efficient token mixers for transformers. *arXiv preprint arXiv:2111.13587*, 2021.
Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun. Deep learning for 3D point clouds: A survey. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43(12):4338–4364, 2020.

Xian-Feng Han, Zhi-Ao Feng, Shi-Jie Sun, and Guo-Qiang Xiao. 3D point cloud descriptors: state-of-the-art. *Artificial Intelligence Review*, pp. 1–51, 2023.

Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. Kernel methods in machine learning. *The Annals of Statistics*, 36(3):1171–1220, 2008.
RNgZTA4CTP
**No intuition for the 2-step update when the theoretical assumptions are broken:** The paper leans heavily on asymptotic intuitions, but a lot of the wins in 4.3 and 4.4 seem to come from sample efficiency. Is there any intuition for this?
Best Possible Q-Learning

Anonymous authors
Paper under double-blind review

Abstract

Fully decentralized learning, where global information, i.e., the actions of other agents, is inaccessible, is a fundamental challenge in cooperative multi-agent reinforcement learning. However, the convergence and optimality of most decentralized algorithms are not theoretically guaranteed, since the transition probabilities are non-stationary as all agents update their policies simultaneously. To tackle this challenge, we propose best possible operator, a novel decentralized operator, and prove that the policies of the agents will converge to the optimal joint policy if each agent independently updates its individual state-action value by the operator. Further, to make the update more efficient and practical, we simplify the operator and prove that the convergence and optimality still hold with the simplified one. By instantiating the simplified operator, the derived fully decentralized algorithm, best possible Q-learning (BQL), does not suffer from non-stationarity. Empirically, we show that BQL achieves remarkable improvement over baselines in a variety of cooperative multi-agent tasks.

1 Introduction

Cooperative multi-agent reinforcement learning (MARL) trains a group of agents to maximize the cumulative shared reward, which has great significance for real-world applications, including logistics (Li et al., 2019), traffic signal control (Xu et al., 2021), power dispatch (Wang et al., 2021), and games (Vinyals et al., 2019). Although most existing methods follow the paradigm of centralized training and decentralized execution (CTDE), in many scenarios where the information of all agents is unavailable during training, each agent has to learn independently without centralized information. Thus, fully decentralized learning, where the agents can only use local experiences without the actions of other agents, is highly desirable (Jiang & Lu, 2022).

However, in fully decentralized learning, as other agents are treated as a part of the environment and are updating their policies simultaneously, the transition probabilities from the perspective of individual agents will be non-stationary. Thus, the convergence of most decentralized algorithms, e.g., independent Q-learning (IQL) (Tan, 1993), is not theoretically guaranteed. Multi-agent alternate Q-learning (MA2QL) (Su et al., 2022) guarantees the convergence to a Nash equilibrium, but the converged equilibrium may not be the optimal one when there are multiple equilibria (Zhang et al., 2021a). Distributed IQL (Lauer & Riedmiller, 2000) and I2Q (Jiang & Lu, 2022) can learn the optimal joint policy, yet are limited to deterministic environments. How to guarantee the convergence to the optimal joint policy in stochastic environments remains open.

To tackle this challenge, we propose best possible operator, a novel decentralized operator to update the individual state-action value of each agent, and prove that the policies of the agents converge to the optimal joint policy under this operator. However, it is inefficient and thus impractical to perform best possible operator, because at each update it needs to compute the expected values of all possible transition probabilities and update the state-action value to be the maximal one. Therefore, we further propose simplified best possible operator.
At each update, the simplified operator only computes the expected value of one of the possible transition probabilities and monotonically updates the state-action value. We prove that the policies of agents also converge to the optimal joint policy under the simplified operator. We instantiate the simplified operator with a Q-table for tabular cases and with neural networks for complex environments. In the Q-table instantiation, non-stationarity is intrinsically avoided, and in the neural network instantiation, non-stationarity in the replay buffer is no longer a drawback, but a necessary condition for convergence. The proposed algorithm, **best possible Q-learning (BQL)**, is fully decentralized, without using the information of other agents. We evaluate BQL on a variety of multi-agent cooperative tasks, i.e., stochastic games, MPE-based differential games (Lowe et al., 2017), Multi-Agent MuJoCo (de Witt et al., 2020b), SMAC (Samvelyan et al., 2019), and GRF (Kurach et al., 2020), covering fully and partially observable, deterministic and stochastic, discrete and continuous environments. Empirically, BQL substantially outperforms the baselines. To the best of our knowledge, BQL is the first decentralized algorithm that guarantees the convergence to the global optimum in stochastic environments. More simplifications and instantiations of **best possible operator** can be further explored. We believe BQL can be a new paradigm for fully decentralized learning.

## 2 METHOD

### 2.1 PRELIMINARIES

Consider an \(N\)-agent MDP \(M_{\text{env}} = \langle S, O, A, R, P_{\text{env}}, \gamma \rangle\) with the state space \(S\) and the joint action space \(A\). Each agent \(i\) chooses an individual action \(a_i\), and the environment transitions to the next state \(s'\) under the joint action \(a\) with the transition probabilities \(P_{\text{env}}(s'|s, a)\). For simplicity of theoretical analysis, we assume all agents obtain the state \(s\), though in practice each agent \(i\) can make decisions using its local observation \(o_i \in O\) or trajectory. All agents obtain a shared reward \(r = R(s, s') \in [r_{\min}, r_{\max}]\) and learn to maximize the expected discounted return \(\mathbb{E} \sum_{t=0}^{\infty} \gamma^t r_t\). In the fully decentralized setting, \(M_{\text{env}}\) is partially observable, since each agent \(i\) only observes its own action \(a_i\) instead of the joint action \(a\). From the perspective of each agent \(i\), there is an MDP \(M_i = \langle S, A_i, R, P_i, \gamma \rangle\) with the individual action space \(A_i\) and the transition probabilities

\[ P_i(s'|s, a_i) = \sum_{a_{-i}} P_{\text{env}}(s'|s, a_i, a_{-i}) \, \pi_{-i}(a_{-i}|s) \quad (1) \]

where \(\pi_{-i}\) denotes the joint policy of all agents except agent \(i\), and similarly for \(a_{-i}\). According to (1), the transition probabilities \(P_i\) depend on the policies of the other agents \(\pi_{-i}\). As the other agents update their policies continuously, \(P_i\) becomes non-stationary. Under non-stationary transition probabilities, the convergence of independent Q-learning,

\[ Q_i(s, a_i) = \mathbb{E}_{P_i(s'|s, a_i)} \left[ r + \gamma \max_{a'_i} Q_i(s', a'_i) \right], \quad (2) \]

is not guaranteed, and how to learn the optimal joint policy in fully decentralized settings is quite a challenge. In the next section, we propose best possible operator, a novel fully decentralized operator, which guarantees the convergence to the optimal joint policy in stochastic environments.
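Equation (1) is the crux of the difficulty, so a minimal two-agent numpy sketch (our illustration; the names are ours) may help. It folds agent 2's policy into the environment dynamics and shows that agent 1's effective MDP shifts whenever that policy changes:

```python
import numpy as np

def P_i(P_env, pi_other):
    """Equation (1) for two agents: agent 1's effective transition
    probabilities under agent 2's current policy.
    P_env:    (S, A1, A2, S) joint transition probabilities.
    pi_other: (S, A2) policy pi_2(a_2 | s).
    Returns:  (S, A1, S) array P_1(s' | s, a_1)."""
    return np.einsum('sabt,sb->sat', P_env, pi_other)

rng = np.random.default_rng(0)
S, A1, A2 = 3, 2, 2
P_env = rng.dirichlet(np.ones(S), size=(S, A1, A2))  # rows sum to 1 over s'
pi2_a = np.eye(A2)[rng.integers(A2, size=S)]         # one deterministic pi_2
pi2_b = np.eye(A2)[rng.integers(A2, size=S)]         # another one
# Agent 1's MDP generally differs under the two policies: non-stationarity.
print(np.abs(P_i(P_env, pi2_a) - P_i(P_env, pi2_b)).max())
```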
### 2.2 BEST POSSIBLE OPERATOR

First, let us consider the optimal joint Q-value¹

\[ Q(s, a) = \mathbb{E}_{P_{\text{env}}(s'|s, a)} \left[ r + \gamma \max_{a'} Q(s', a') \right], \quad (3) \]

which is the expected return of the optimal joint policy \(\pi^*(s) = \arg\max_a Q(s, a)\).

---

¹For simplicity, we refer to the optimal value \(Q^*\) as \(Q\) in this paper, unless stated otherwise.

Based on the optimal joint Q-value, for each agent \(i\), we define \(\max_{a_{-i}} Q(s, a_i, a_{-i})\), which follows the fixed point equation:

\[ \max_{a_{-i}} Q(s, a_i, a_{-i}) = \max_{a_{-i}} \mathbb{E}_{P_{\text{env}}(s'|s, a)} \left[ r + \gamma \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) \right] \quad (4) \]
\[ = \mathbb{E}_{P_{\text{env}}(s'|s, a_i, \pi^*_{-i}(s, a_i))} \left[ r + \gamma \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) \right] \quad (5) \]

where \(\pi^*_{-i}(s, a_i)\) is the optimal conditional joint policy of the other agents given \(a_i\). (4) follows from taking \(\max_{a_{-i}}\) on both sides of (3), and (5) follows from folding \(\pi^*_{-i}(s, a_i)\) into \(P_{\text{env}}\). Then we have the following lemma.

**Lemma 1.** If each agent \(i\) learns the independent value function \(Q_i(s, a_i) = \max_{a_{-i}} Q(s, a_i, a_{-i})\) and takes actions as \(\arg\max_{a_i} Q_i(s, a_i)\), the agents will obtain the optimal joint policy when there is only one optimal joint policy.²

**Proof.** As \(\max_{a_i} \max_{a_{-i}} Q(s, a_i, a_{-i}) = \max_a Q(s, a)\) and there is only one optimal joint policy, \(\arg\max_{a_i} Q_i(s, a_i)\) is the action of agent \(i\) in the optimal joint action.

According to Lemma 1, obtaining the optimal joint policy amounts to letting each agent \(i\) learn the value function \(Q_i(s, a_i) = \max_{a_{-i}} Q(s, a_i, a_{-i})\). To this end, we propose a new operator to update \(Q_i\) in a fully decentralized way:

\[ Q_i(s, a_i) = \max_{P_i(\cdot|s, a_i)} \mathbb{E}_{P_i(s'|s, a_i)} \left[ r + \gamma \max_{a'_i} Q_i(s', a'_i) \right]. \quad (6) \]

Given \(s\) and \(a_i\), there will be numerous \(P_i(s'|s, a_i)\) induced by different other-agent policies \(\pi_{-i}\). To reduce the complexity, we only consider deterministic policies, because when there is only one optimal joint policy, the optimal joint policy must be deterministic (Puterman, 1994). So the operator (6) takes the maximum only over the transition probabilities \(P_i(s'|s, a_i)\) under deterministic \(\pi_{-i}\). Intuitively, the operator continuously pursues the 'best possible expected return' until \(Q_i\) reaches the optimal expected return \(\max_{a_{-i}} Q(s, a_i, a_{-i})\), so we name the operator (6) **best possible operator**. In the following, we theoretically prove that \(Q_i(s, a_i)\) converges to \(\max_{a_{-i}} Q(s, a_i, a_{-i})\) under best possible operator, and thus the agents learn the optimal joint policy. Let \(Q_i^k(s, a_i)\) denote the value function in update \(k\), and let \(Q_i^\infty(s, a_i)\) denote its limit.
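Before the analysis, a minimal tabular numpy sketch of one sweep of (6) may be useful (our illustration, not the authors' code; enumerating one candidate \(P_i\) per deterministic \(\pi_{-i}\) is made explicit here):

```python
import numpy as np

def best_possible_sweep(Q, P_candidates, R, gamma=0.99):
    """One application of best possible operator (6) for agent i.
    Q:            (S, Ai) independent value table.
    P_candidates: iterable of (S, Ai, S) arrays, one effective P_i for each
                  deterministic joint policy pi_{-i} of the other agents.
    R:            (S, S) shared reward R(s, s')."""
    V = Q.max(axis=1)                        # max_{a'_i} Q_i(s', a'_i)
    target = R + gamma * V[None, :]          # (S, S): r + gamma * V(s')
    expected = [np.einsum('sat,st->sa', P, target) for P in P_candidates]
    return np.stack(expected).max(axis=0)    # best possible expected return
```

The number of candidate transition distributions grows with the number of deterministic \(\pi_{-i}\), which is exactly the cost that the simplified operator of Sec. 2.3 removes.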
We then have the following lemma.

**Lemma 2.** If \(Q_i^0\) is initialized to be the minimal return \(\frac{r_{\min}}{1-\gamma}\), then \(\max_{a_{-i}} Q(s, a_i, a_{-i}) \geq Q_i^k(s, a_i), \forall s, a_i, \forall k\), under best possible operator.

**Proof.** We prove the lemma by induction. First, as \(Q_i^0\) is initialized to be the minimal return, \(\max_{a_{-i}} Q(s, a_i, a_{-i}) \geq Q_i^0(s, a_i)\). Then, suppose \(\max_{a_{-i}} Q(s, a_i, a_{-i}) \geq Q_i^{k-1}(s, a_i), \forall s, a_i\). By denoting \(\arg\max_{P_i(s'|s, a_i)} \mathbb{E}_{P_i(s'|s, a_i)} \left[ r + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right]\) as \(P_i^*(s'|s, a_i)\), we have

\[ \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^k(s, a_i) \]
\[ = \max_{a_{-i}} \sum_{s'} P_{\text{env}}(s'|s, a_i, a_{-i}) \left[ r + \gamma \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) \right] - \sum_{s'} P_i^*(s'|s, a_i) \left[ r + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right] \]
\[ \geq \sum_{s'} P_i^*(s'|s, a_i) \left[ r + \gamma \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) \right] - \sum_{s'} P_i^*(s'|s, a_i) \left[ r + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right] \]
\[ = \gamma \sum_{s'} P_i^*(s'|s, a_i) \left( \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) - \max_{a'_i} Q_i^{k-1}(s', a'_i) \right) \]
\[ \geq \gamma \sum_{s'} P_i^*(s'|s, a_i) \left( \max_{a'_{-i}} Q(s', a'^*_i, a'_{-i}) - Q_i^{k-1}(s', a'^*_i) \right) \geq 0, \]

where \(a'^*_i = \arg\max_{a'_i} Q_i^{k-1}(s', a'_i)\). Thus the claim holds in update \(k\), and by the principle of induction, the lemma holds for all updates.

Intuitively, \(\max_{a_{-i}} Q(s, a_i, a_{-i})\) is the optimal expected return after taking action \(a_i\), so it is an upper bound of \(Q_i(s, a_i)\). Further, based on Lemma 2, we have the following lemma.

**Lemma 3.** \(Q_i(s, a_i)\) converges to \(\max_{a_{-i}} Q(s, a_i, a_{-i})\) under best possible operator.

**Proof.** For clear presentation, we use \(P_{\text{env}}(s'|s, a_i, \pi^*_{-i})\) to denote \(P_{\text{env}}(s'|s, a_i, \pi^*_{-i}(s, a_i))\). From (5) and (6), we have

\[ \max_{s, a_i} \left( \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^k(s, a_i) \right) \]
\[ = \max_{s, a_i} \left( \sum_{s'} P_{\text{env}}(s'|s, a_i, \pi^*_{-i}) \left[ r + \gamma \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) \right] - \sum_{s'} P_i^*(s'|s, a_i) \left[ r + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right] \right) \]
\[ \leq \max_{s, a_i} \left( \sum_{s'} P_{\text{env}}(s'|s, a_i, \pi^*_{-i}) \left[ r + \gamma \max_{a'_i} \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) \right] - \sum_{s'} P_{\text{env}}(s'|s, a_i, \pi^*_{-i}) \left[ r + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right] \right) \]
\[ \leq \gamma \max_{s', a'_i} \left( \max_{a'_{-i}} Q(s', a'_i, a'_{-i}) - Q_i^{k-1}(s', a'_i) \right) = \gamma \left\| \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^{k-1}(s, a_i) \right\|_\infty, \]

where the first inequality holds because \(P_i^*\) maximizes the expected target and \(P_{\text{env}}(\cdot|s, a_i, \pi^*_{-i})\) is one possible \(P_i\), and the final equality uses Lemma 2 (the difference is non-negative). We then have \(\left\| \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^k(s, a_i) \right\|_\infty \leq \gamma^k \left\| \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^0(s, a_i) \right\|_\infty\). Let \(k \to \infty\); then \(Q_i(s, a_i) \to \max_{a_{-i}} Q(s, a_i, a_{-i})\), thus the lemma holds.

According to Lemma 1 and 3, we immediately have:

**Theorem 1.** The agents learn the optimal joint policy under best possible operator.

---

²We can use the simple solution proposed in I2Q to deal with the limitation of only one optimal joint policy, which is included in Appendix E.

### 2.3 SIMPLIFIED BEST POSSIBLE OPERATOR

Best possible operator guarantees the convergence to the optimal joint policy. However, to perform (6), at every update each agent \(i\) has to compute the expected values of all possible transition probabilities and update \(Q_i\) to be the maximal one, which is too costly. Therefore, we introduce an auxiliary value function \(Q_i^e(s, a_i)\) and simplify (6) into two operators.
First, at each update, we randomly select one of the possible transition probabilities \(\tilde{P}_i\) for each \((s, a_i)\) and update \(Q_i^e(s, a_i)\) by

\[ Q_i^e(s, a_i) = \mathbb{E}_{\tilde{P}_i(s'|s, a_i)} \left[ r + \gamma \max_{a'_i} Q_i(s', a'_i) \right]. \quad (7) \]

\(Q_i^e(s, a_i)\) represents the expected value under the selected transition probabilities. Then we monotonically update \(Q_i(s, a_i)\) by

\[ Q_i(s, a_i) = \max \left( Q_i(s, a_i), Q_i^e(s, a_i) \right). \quad (8) \]

We define (7) and (8) together as **simplified best possible operator**. By performing simplified best possible operator, \(Q_i(s, a_i)\) is efficiently updated towards the maximal expected value, and we have the following lemma.

**Lemma 4.** \(Q_i(s, a_i)\) converges to \(\max_{a_{-i}} Q(s, a_i, a_{-i})\) under simplified best possible operator.

**Proof.** According to (8), as \(Q_i(s, a_i)\) is monotonically increased, \(Q_i^k(s, a_i) \geq Q_i^{k-1}(s, a_i)\) in update \(k\). Similar to the proof of Lemma 2, we can easily prove that \(\max_{a_{-i}} Q(s, a_i, a_{-i}) \geq Q_i^k(s, a_i)\) under (7) and (8). Thus, \(\{Q_i^k(s, a_i)\}\) is an increasing sequence that is bounded above. According to the monotone convergence theorem, \(\{Q_i^k(s, a_i)\}\) converges as \(k \to \infty\), and we let \(Q_i(s, a_i) := Q_i^\infty(s, a_i)\).

Then we prove that the converged value \(Q_i(s, a_i)\) is equal to \(\max_{a_{-i}} Q(s, a_i, a_{-i})\). By monotonicity and convergence, \(\forall \epsilon > 0, \exists K\) such that for \(k > K\), \(Q_i^k(s, a_i) - Q_i^{k-1}(s, a_i) \leq \epsilon\), no matter which \(\tilde{P}_i\) is selected in update \(k\). Since every \(\tilde{P}_i\) is possible to be selected, when selecting

\[ \tilde{P}_i(s'|s, a_i) = \arg\max_{P_i(s'|s, a_i)} \mathbb{E}_{P_i(s'|s, a_i)} \left[ r + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right] = P_i^*(s'|s, a_i), \]

by performing (7) and (8) we have

\[ Q_i^{k-1}(s, a_i) + \epsilon \geq Q_i^k(s, a_i) \geq Q_i^e(s, a_i) = \sum_{s'} P_i^*(s'|s, a_i) \left[ r(s, s') + \gamma \max_{a'_i} Q_i^{k-1}(s', a'_i) \right]. \]

According to the proof of Lemma 3, we have

\[ \max_{s, a_i} \left( \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^e(s, a_i) \right) \leq \gamma \max_{s, a_i} \left( \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^{k-1}(s, a_i) \right). \]

Use \(s^*, a_i^*\) to denote

\[ \arg\max_{s, a_i} \left( \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^{k-1}(s, a_i) \right). \]

Since \(Q_i^{k-1}(s, a_i) + \epsilon \geq Q_i^e(s, a_i)\),

\[ \max_{a_{-i}} Q(s^*, a_i^*, a_{-i}) - Q_i^{k-1}(s^*, a_i^*) - \epsilon \leq \gamma \max_{a_{-i}} Q(s^*, a_i^*, a_{-i}) - \gamma Q_i^{k-1}(s^*, a_i^*). \]

Then, we have

\[ \left\| \max_{a_{-i}} Q(s, a_i, a_{-i}) - Q_i^{k-1}(s, a_i) \right\|_\infty \leq \frac{\epsilon}{1 - \gamma}. \]

Thus, \(Q_i(s, a_i)\) converges to \(\max_{a_{-i}} Q(s, a_i, a_{-i})\).

According to Lemma 1 and 4, we also have:

**Theorem 2.** The agents learn the optimal joint policy under simplified best possible operator.
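In tabular form, one update of the simplified operator is only a few lines (our sketch; the names are ours):

```python
import numpy as np

def simplified_sweep(Q, P_tilde, R, gamma=0.99):
    """One application of simplified best possible operator for agent i.
    P_tilde: (S, Ai, S) effective transition probabilities estimated from a
    single randomly selected replay buffer, i.e., one deterministic pi_{-i}."""
    V = Q.max(axis=1)
    target = R + gamma * V[None, :]
    Qe = np.einsum('sat,st->sa', P_tilde, target)   # update (7)
    return np.maximum(Q, Qe)                        # update (8): monotone max
```

Repeatedly applying this sweep with randomly chosen buffers realizes (7) and (8).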
### 2.4 Best Possible Q-Learning

**Best possible Q-learning** (BQL) is instantiated on simplified best possible operator. We first consider learning a Q-table for tabular cases. The key challenge is how to obtain all possible transition probabilities under deterministic \(\pi_{-i}\) during learning. To solve this issue, the whole training process is divided into \(M\) epochs. In epoch \(m\), each agent \(i\) randomly and independently initializes a deterministic policy \(\tilde{\pi}_i^m\) and selects a subset of states \(S_i^m\). Then each agent \(i\) interacts with the environment using the deterministic policy

\[ \begin{cases} \arg\max_{a_i} Q_i(s, a_i) & \text{if } s \notin S_i^m, \\ \tilde{\pi}_i^m(s) & \text{else}. \end{cases} \]

Each agent \(i\) stores its independent experiences \((s, a_i, s', r)\) in the replay buffer \(D_i^m\). As \(P_i\) depends on \(\pi_{-i}\) and the agents act deterministic policies, \(D_i^m\) contains one \(P_i\) under a deterministic \(\pi_{-i}\). Since \(P_i\) will change whenever the other agents modify their policies \(\pi_{-i}\), acting according to the randomly initialized policy \(\tilde{\pi}_i^m\) on \(S_i^m\) in epoch \(m\) not only helps each agent \(i\) explore state-action pairs, but also helps the other agents explore possible transition probabilities. When \(M\) is sufficiently large, for any \((s, a_i)\) pair, any \(P_i(s, a_i)\) can be found in some replay buffer. After the interaction of epoch \(m\), each agent \(i\) holds a buffer series \(\{D_i^1, \cdots, D_i^m\}\), each element of which has different transition probabilities. During the training period of epoch \(m\), each agent \(i\) randomly selects one replay buffer \(D_i^j\) from \(\{D_i^1, \cdots, D_i^m\}\), samples mini-batches \(\{s, a_i, s', r\}\) from \(D_i^j\) to update the Q-table \(Q_i^e(s, a_i)\) by (7), and then samples mini-batches from \(D_i^j\) to update \(Q_i(s, a_i)\) by (8). The Q-table implementation is summarized in Algorithm 1 (Appendix A).

The sample efficiency of collecting the buffer series seems to be a limitation of BQL, and we further analyze it. Simplified best possible operator requires that any possible \(P_i(s, a_i)\) of an \((s, a_i)\) pair can be found in one buffer, but it does not care about the relationship between the transition probabilities of different state-action pairs in the same buffer. So BQL ideally needs only \(|A_i| \times |A_{-i}| = |A|\) small buffers to cover all possible \(P_i\) for any \((s, a_i)\) pair, which is very efficient for experience collection. We give an intuitive illustration of this and analyze that BQL has similar sample complexity to the joint Q-learning (3) in Appendix C.

In complex environments with large or continuous state-action spaces, it is inefficient and costly to follow the experience collection of the tabular case, where the agents cannot update their policies during the interaction of each epoch and each epoch requires adequate samples to accurately estimate the expectation (7). Thus, in complex environments, same as IQL, each agent \(i\) only maintains one replay buffer \(D_i\), which contains all historical experiences, and uses the same \(\epsilon\)-greedy policy as IQL (without the randomly initialized deterministic policy \(\tilde{\pi}_i\)). Then we instantiate simplified best possible operator with neural networks \(Q_i\) and \(Q_i^e\). \(Q_i^e\) is updated by minimizing

\[ \mathbb{E}_{s, a_i, s', r \sim D_i} \left[ \left( Q_i^e(s, a_i) - r - \gamma Q_i(s', a_i^*) \right)^2 \right], \quad a_i^* = \arg\max_{a_i} Q_i(s', a_i), \quad (9) \]

and \(Q_i\) is updated by minimizing

\[ \mathbb{E}_{s, a_i \sim D_i} \left[ w(s, a_i) \left( Q_i(s, a_i) - \bar{Q}_i^e(s, a_i) \right)^2 \right], \quad w(s, a_i) = \begin{cases} 1 & \text{if } Q_i^e(s, a_i) > Q_i(s, a_i) \\ \lambda & \text{else}, \end{cases} \quad (10) \]

where \(\bar{Q}_i^e\) is the softly updated target network of \(Q_i^e\). When \(\lambda = 0\), (10) is equivalent to (8).
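A hedged PyTorch sketch of losses (9) and (10) (our illustration: the names are ours, and for simplicity we gate the weight on the same target copy \(\bar{Q}_i^e\) that appears in the squared term, whereas (10) gates on \(Q_i^e\)):

```python
import torch
import torch.nn.functional as F

def bql_losses(Qe, Qi, Qe_target, batch, gamma=0.99, lam=0.5):
    """Decentralized BQL losses for agent i. Qe, Qi map a state batch to
    per-action values of shape (B, |A_i|); Qe_target is the softly updated
    copy of Qe. `batch` holds only the agent's own experiences."""
    s, a, r, s_next = batch                  # a: LongTensor of shape (B,)
    # Loss (9): regress Q_i^e onto the one-step target induced by the
    # current (non-stationary) contents of the replay buffer.
    with torch.no_grad():
        y = r + gamma * Qi(s_next).max(dim=1).values
    qe = Qe(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss_e = F.mse_loss(qe, y)
    # Loss (10): pull Q_i up toward Q_i^e with weight 1, but push it down
    # only with the small weight lambda, approximating the monotone max
    # in (8) while offsetting accumulated positive noise.
    with torch.no_grad():
        qe_tgt = Qe_target(s).gather(1, a.unsqueeze(1)).squeeze(1)
    qi = Qi(s).gather(1, a.unsqueeze(1)).squeeze(1)
    w = torch.where(qe_tgt > qi, torch.ones_like(qi), torch.full_like(qi, lam))
    loss_i = (w * (qi - qe_tgt) ** 2).mean()
    return loss_e, loss_i
```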
However, when \(\lambda = 0\), the positive random noise of \(Q_i\) in the update can be continuously accumulated, which may cause value overestimation. We therefore adopt the weighted max in (10) by setting \(0 < \lambda < 1\) to offset the positive random noise. In continuous action spaces, following DDPG (Lillicrap et al., 2016), we train a policy network \(\pi_i(s)\) by maximizing \(Q_i(s, \pi_i(s))\) as a substitute for \(\arg\max_{a_i} Q_i(s, a_i)\). The neural network implementation is summarized in Algorithm 2 (Appendix A).

Simplified best possible operator is meaningful for the neural network implementation. As there is only one buffer \(D_i\), we cannot perform (6), but we can still perform (7) and (8) on \(D_i\). As the other agents are updating their policies, the transition probabilities in \(D_i\) will continuously change. If \(D_i\) sufficiently goes through all possible transition probabilities, \(Q_i(s, a_i)\) converges to \(\max_{a_{-i}} Q(s, a_i, a_{-i})\) and the agents learn the optimal joint policy. That is to say, non-stationarity in the replay buffer is no longer a drawback, but a necessary condition for BQL.

3 RELATED WORK

Most existing MARL methods (Lowe et al., 2017; Iqbal & Sha, 2019; Wang et al., 2020; Zhang et al., 2021b; Su & Lu, 2022; Peng et al., 2021; Li et al., 2022; Sunehag et al., 2018; Rashid et al., 2018; Son et al., 2019) follow centralized training and decentralized execution (CTDE), where the information of all agents can be accessed in a centralized way during training. Unlike these methods, we focus on fully decentralized learning, where global information is not available. The most straightforward decentralized methods, i.e., independent Q-learning (Tan, 1993) and independent PPO (IPPO) (de Witt et al., 2020a), cannot guarantee the convergence of the learned policy, because the transition probabilities are non-stationary from the perspective of each agent as all agents learn policies simultaneously. Multi-agent alternate Q-learning (MA2QL) (Su et al., 2022) guarantees the convergence to a Nash equilibrium, but the converged equilibrium may not be the optimal one when there are multiple Nash equilibria. Moreover, to obtain the theoretical guarantee, it has to be trained in an on-policy manner and cannot use replay buffers, which leads to poor sample efficiency. Following the principle of optimistic estimation, Hysteretic IQL (Matignon et al., 2007) applies a slower learning rate to value decreases. Distributed IQL (Lauer & Riedmiller, 2000), a special case of Hysteretic IQL with this slower learning rate being zero, guarantees the convergence to the optimum, but only in deterministic environments. I2Q (Jiang & Lu, 2022) lets each agent perform independent Q-learning on ideal transition probabilities and can learn the optimal policy only in deterministic environments. Our BQL is the first fully decentralized algorithm that converges to the optimal joint policy in stochastic environments. In the next section, we compare BQL against these Q-learning variants (Distributed IQL is included in Hysteretic IQL). Comparing with on-policy algorithms, e.g., IPPO, which are not sample-efficient especially in fully decentralized settings, is out of our focus and is thus deferred to the Appendix. Decentralized methods with communication (Zhang et al., 2018; Konan et al., 2021; Li & He, 2020) allow information sharing with neighboring agents according to a communication channel.
However, they do not follow the fully decentralized setting and are thus beyond the scope of this paper.

4 EXPERIMENTS

We first test BQL with a Q-table on randomly generated cooperative stochastic games to verify its convergence and optimality. Then, to illustrate its performance on complex tasks, we compare BQL with neural networks against the Q-learning variants on MPE-version differential games (Jiang & Lu, 2022), Multi-Agent MuJoCo (Peng et al., 2021), SMAC (Samvelyan et al., 2019), and GRF (Kurach et al., 2020). The experiments cover both fully and partially observable, deterministic and stochastic, discrete and continuous environments. Since we consider the fully decentralized setting, BQL and the baselines do not use parameter sharing. The results are presented using mean and standard deviation. More details about hyperparameters are available in Appendix F.

4.1 STOCHASTIC GAMES

To support the theoretical analysis of BQL, we test the Q-table instantiation on stochastic games with 4 agents, 30 states, and infinite horizon. The action space of each agent is 4, so the joint action space is \(|A| = 256\). The distribution of initial states is uniform. Each state will transition to any state given a joint action according to the transition probabilities. The transition probabilities and the reward function are randomly generated and fixed in each game. We randomly generate 20 games and train the agents with four different seeds in each game. The mean normalized return and standard deviation over the 20 games are shown in Figure 1a. IQL cannot learn the optimal policies due to non-stationarity. Although it uses the optimistic update to remedy non-stationarity, Hysteretic IQL (H-IQL) still cannot solve this problem in stochastic environments and shows performance similar to IQL. In Appendix B, we thoroughly analyze the difference and relationship between H-IQL and BQL. I2Q performs Q-learning on the ideal transition function where the next state is deterministically the one with the highest value, which however is impossible in stochastic tasks; hence I2Q cannot guarantee the optimal joint policy in stochastic environments. MA2QL guarantees the convergence to a Nash equilibrium, but the converged one may not be the optimal one, so there is a performance gap between MA2QL and the optimal policies. BQL converges to the optimum, and the tiny remaining gap is caused by the fitting error of the Q-table update. This verifies our theoretical analysis.

Note that, in the Q-table instantiations, MA2QL and BQL use a different experience collection from IQL, i.e., a different exploration strategy and replay buffer. MA2QL only uses on-policy experiences, and BQL collects a series of small buffers. However, for sample efficiency, the two methods have to use the same experience collection as IQL in complex tasks with neural networks. MA2QL- and BQL- respectively denote the two methods with the same experience collection as IQL. Trained on off-policy experiences, MA2QL- suffers from non-stationarity and achieves performance similar to IQL. Even when using only one buffer, as we analyzed in Section 2.4, if the non-stationary buffer sufficiently goes through all possible transition probabilities, BQL agents can still converge to the optimum. Although going through all possible transition probabilities with one buffer is inefficient, BQL- significantly outperforms IQL, which implies the potential of BQL with one buffer in complex tasks. Figure 1b shows the effect of the size of the buffer \(D_i^m\) in epoch \(m\).
Figure 1b shows the effect of the size of the buffer $D_i^m$ at epoch $m$. If $|D_i^m|$ is too small, e.g., 200, the experiences in $D_i^m$ are insufficient to accurately estimate the expected value (7). If $|D_i^m|$ is too large, e.g., 10000, the experiences in $D_i^m$ are redundant, and it is difficult for the buffer series to cover all possible transition probabilities within a fixed total number of training timesteps. Figure 1c shows the effect of the number of states on which the agents perform the randomly initialized deterministic policy $\hat{\pi}_i^m$ for exploration. A larger $|S_i^m|$ means stronger exploration of both state-action pairs and possible transition probabilities, which leads to better performance.

We then consider a one-stage game that is widely adopted in MARL (Son et al., 2019). There are 2 agents, and the action space of each agent is 3. The reward matrix is

$$\begin{array}{c|ccc} a_1 \backslash a_2 & A^{(1)} & A^{(2)} & A^{(3)} \\ \hline A^{(1)} & 8 & -12 & -12 \\ A^{(2)} & -12 & 0 & 0 \\ A^{(3)} & -12 & 0 & 0 \\ \end{array}$$

where the reward 8 is the global optimum and the reward 0 is a sub-optimal Nash equilibrium. As shown in Figure 1d, MA2QL converges to the sub-optimal Nash equilibrium when the initial policy of the second agent selects $A^{(2)}$ or $A^{(3)}$, but BQL easily converges to the global optimum.

### 4.2 MPE

To evaluate the effectiveness of BQL with a neural network implementation, we adopt the 3-agent MPE-based differential game used in I2Q (Jiang & Lu, 2022), where 3 agents can move in the range $[-1, 1]$. Different from the original deterministic version, we add stochasticity to it. In each timestep, agent $i$ takes an action $a_i \in [-1, 1]$, and the position of agent $i$ is updated as $x_i = \text{clip}(x_i + 0.1 \times a_i, -1, 1)$ (i.e., the updated position is clipped to $[-1, 1]$) with probability $1 - \beta$, or is updated as $-x_i$ with probability $\beta$. $\beta$ controls the stochasticity. The state is the vector of positions $\{x_1, x_2, x_3\}$. The reward function at each timestep is

\[ r = \begin{cases} 0.5 \cos(4l\pi) + 0.5 & \text{if } l \leq 0.25 \\ 0 & \text{if } 0.25 < l \leq 0.6 \\ 0.15 \cos(5\pi(l - 0.8)) + 0.15 & \text{if } 0.6 < l \leq 1.0 \\ 0 & \text{if } l > 1.0 \end{cases}, \quad l = \sqrt{\frac{2}{3}(x_1^2 + x_2^2 + x_3^2)} \]

We visualize the relation between \(r\) and \(l\) in Figure 12. There is only one global optimum \((l = 0\) and \(r = 1)\) but infinitely many sub-optima \((l = 0.8\) and \(r = 0.3)\), and the narrow region with \(r > 0.3\) is surrounded by the region with \(r = 0\). So it is quite challenging to learn the optimal policies in a fully decentralized way. Each episode contains 100 timesteps, and the initial positions follow the uniform distribution. We perform experiments with different stochasticities \(\beta\) and train the agents with eight seeds for each \(\beta\). In continuous environments, BQL and the baselines are built on DDPG.
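The environment's dynamics and reward can be sketched directly from the equations above (the function names are ours; everything else follows the stated definitions):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, a, beta):
    """Stochastic position update: with prob. 1 - beta each agent moves and
    is clipped to [-1, 1]; with prob. beta its position flips sign."""
    moved = np.clip(x + 0.1 * a, -1.0, 1.0)
    flip = rng.random(x.shape) < beta
    return np.where(flip, -x, moved)

def reward(x):
    l = np.sqrt(2.0 / 3.0 * np.sum(x ** 2))
    if l <= 0.25:
        return 0.5 * np.cos(4 * l * np.pi) + 0.5          # global optimum: r = 1 at l = 0
    if l <= 0.6:
        return 0.0
    if l <= 1.0:
        return 0.15 * np.cos(5 * np.pi * (l - 0.8)) + 0.15  # sub-optima: r = 0.3 at l = 0.8
    return 0.0

x = rng.uniform(-1, 1, size=3)   # initial positions of the 3 agents
a = rng.uniform(-1, 1, size=3)   # joint action
x = step(x, a, beta=0.4)
print(reward(x))
```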
As shown in Figure 2, IQL always falls into the local optimum (total reward \(\approx 30\)) because of the non-stationary transition probabilities. H-IQL escapes the local optimum in only one seed, in the setting with \(\beta = 0.3\). According to the theoretical analysis in the I2Q paper, the value estimation error of I2Q becomes larger as the stochasticity grows, which is why I2Q shows poor performance with \(\beta = 0.4\) and 0.5. In the neural network implementations, MA2QL and BQL use the same experience collection as IQL, so there is no MA2QL- or BQL-. MA2QL converges to the local optimum because it cannot guarantee that the converged equilibrium is the global optimum, especially when trained on off-policy data. BQL (\(\lambda = 0.01\)) escapes the local optimum in more than 4 seeds in all settings, which demonstrates the effectiveness of our optimization objectives (9) and (10). The difference between the global optimum (total reward \(\approx 100\)) and the local optimum is large, which results in the large variance of BQL. In the objective (10), \(\lambda\) controls the balance between performing the best possible operator and offsetting the overestimation caused by the operator. As shown in Figure 2, a large \(\lambda\), e.g., 0.1, weakens the strength of BQL, while a too-small \(\lambda\), e.g., 0, causes severe overestimation and destroys the performance.

### 4.3 Multi-Agent MuJoCo

To evaluate BQL in partially observable environments, we adopt Multi-Agent MuJoCo (Peng et al., 2021), where each agent independently controls one or several joints of the robot. In each task, we test four random seeds and plot the learning curves in Figure 3. Here, we set \(\lambda = 0.5\). In the first three tasks, each agent can only observe the state of its own joints and bodies (with the parameter agent_obsk = 0). BQL achieves higher reward or learns faster than the baselines, which verifies that BQL can be applied to partially observable environments. In partially observable environments, BQL is performed on the transition probabilities of observations \(P_i(o'_i|o_i, a_i)\), which also depend on \(\pi_{-i}\). The convergence and optimality of BQL can only be guaranteed when an observation \(o_i\) uniquely corresponds to a state \(s\). It has been proven that optimality is undecidable in partially observable Markov decision processes (Madani et al., 1999), so this is not a limitation specific to BQL. In the first three tasks, we only consider two-agent cases in the partially observable setting, because such a limited observation range cannot support strong policies when there are more agents. We also test BQL on 17-agent Humanoid with full observation in Figure 3d. BQL obtains a significant performance gain in this many-agent task, which is evidence of the good scalability of BQL.

4.4 SMAC and Google Research Football

We also perform experiments on partially observable and stochastic SMAC tasks (Samvelyan et al., 2019) with version SC2.4.10, including both easy and hard maps (Yu et al., 2021). Agent numbers vary between 2 and 9. We build BQL on the implementation of PyMARL (Samvelyan et al., 2019) and train the agents with four random seeds. The learning curves are shown in Figure 4. In general, BQL outperforms the baselines, which verifies that BQL can also obtain performance gains in high-dimensional complex tasks. In 2c_vs_64zg, by considering the non-stationary transition probabilities, BQL and I2Q achieve significant improvements over the other methods. We conjecture that the interplay between agents is strong in this task. GRF (Kurach et al., 2020) is a physics-based 3D simulator where agents aim to master playing football. We select two academy tasks with sparse rewards: 3_vs_1 with keeper (3 agents) and counterattack easy (4 agents). We build BQL on the implementation of PyMARL2 (Hu et al., 2021) and train the agents with four random seeds.
Although I2Q shows results similar to BQL's in some SMAC tasks, BQL outperforms I2Q in GRF, as shown in Figures 5a and 5b, because GRF is more stochastic than SMAC and the value gap of I2Q enlarges as the stochasticity increases.

4.5 Hyperparameter $\lambda$

We further investigate the effect of $\lambda$ in Multi-Agent MuJoCo and SMAC. In the objective (10), $\lambda$ controls the balance between performing the best possible operator and offsetting the overestimation caused by the operator. As shown in Figures 5c and 5d, a too-large $\lambda$ weakens the strength of BQL. When $\lambda = 1.0$, BQL degenerates into IQL. A too-small $\lambda$, e.g., 0, causes overestimation. If the environment is more complex, e.g., SMAC, overestimation is more likely to occur, so we should set a larger $\lambda$. In $2 \times 3$ Swimmer, BQL obtains a performance gain whenever $\lambda$ falls within the interval $[0.2, 0.8]$, showing its robustness to $\lambda$.

5 Conclusion

We propose the best possible operator and theoretically prove that the policies of the agents converge to the optimal joint policy if each agent independently updates its individual state-action value with the operator. We then simplify the operator and derive BQL, the first fully decentralized MARL algorithm that guarantees convergence to the global optimum in stochastic environments. Empirically, BQL outperforms the baselines in a variety of multi-agent tasks. We also discuss the limitations concerning the uniqueness of the optimal joint policy and sample efficiency.

REFERENCES

Joshua Achiam. Spinning Up in Deep Reinforcement Learning. 2018.

Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip HS Torr, Mingfei Sun, and Shimon Whiteson. Is Independent Learning All You Need in The StarCraft Multi-Agent Challenge? arXiv preprint arXiv:2011.09533, 2020a.

Christian Schroeder de Witt, Bei Peng, Pierre-Alexandre Kamienny, Philip Torr, Wendelin Böhmer, and Shimon Whiteson. Deep Multi-Agent Reinforcement Learning for Decentralized Continuous Cooperative Control. arXiv preprint arXiv:2003.06709, 2020b.

Jian Hu, Siyang Jiang, Seth Austin Harding, Haibin Wu, and Shih-wei Liao. Rethinking the implementation tricks and monotonicity constraint in cooperative multi-agent reinforcement learning. arXiv preprint arXiv:2102.03479, 2021.

Shariq Iqbal and Fei Sha. Actor-Attention-Critic for Multi-Agent Reinforcement Learning. In International Conference on Machine Learning (ICML), 2019.

Jiechuan Jiang and Zongqing Lu. I2Q: A fully decentralized Q-learning algorithm. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Sachin G Konan, Esmaeil Seraj, and Matthew Gombolay. Iterated reasoning with mutual information in cooperative and byzantine decentralized teaming. In International Conference on Learning Representations (ICLR), 2021.

Karol Kurach, Anton Raichuk, Piotr Stanczyk, Michal Zajac, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, et al. Google research football: A novel reinforcement learning environment. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.

Martin Lauer and Martin Riedmiller. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In International Conference on Machine Learning (ICML), 2000.

Hepeng Li and Haibo He. Multi-agent trust region policy optimization. arXiv preprint arXiv:2010.07916, 2020.

Xihan Li, Jia Zhang, Jiang Bian, Yunhai Tong, and Tie-Yan Liu.
A cooperative multi-agent reinforcement learning framework for resource balancing in complex logistics network. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019. Yueheng Li, Guangming Xie, and Zongqing Lu. Difference advantage estimation for multi-agent policy gradients. In International Conference on Machine Learning (ICML), 2022. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016. Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. Neural Information Processing Systems (NeurIPS), 2017. Omid Madani, Steve Hanks, and Anne Condon. On the undecidability of probabilistic planning and infinite-horizon partially observable markov decision problems. In AAAI/IAAI, 1999. Laëtitia Matignon, Guillaume J Laurent, and Nadine Le Fort-Piat. Hysteretic q-learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007. Bei Peng, Tabish Rashid, Christian Schroeder de Witt, Pierre-Alexandre Kamienny, Philip Torr, Wendelin Böhmer, and Shimon Whiteson. Facmac: Factored multi-agent centralised policy gradients. Advances in Neural Information Processing Systems (NeurIPS), 2021. Martin L Puterman. Markov decision processes: Discrete stochastic dynamic programming, 1994.
Zw8YxUWL4R
Overall, I feel the step-wise operation is worth investigating further. To be more specific, my concern is that the stated empirical finding, namely that "the coarse inner layers of the U-Net affect the shape of the generated image, and the outer layers affect the style and appearance", may be inaccurate; this behavior is more likely an effect of the diffusion steps rather than of the U-Net architecture.
P+: Extended Textual Conditioning in Text-to-Image Generation

Anonymous authors
Paper under double-blind review

Abstract

We introduce an Extended Textual Conditioning space in text-to-image diffusion models, referred to as $P+$. This space consists of multiple textual conditions, derived from per-layer prompts, each corresponding to a cross-attention layer of the denoising U-net of the diffusion model. We show that the extended space provides greater control over the synthesis process. We further introduce Extended Textual Inversion (XTI), which inverts concepts into $P+$, such that they are represented with per-layer tokens. We show that XTI is more expressive and precise, and converges faster, than the original Textual Inversion (TI). Compared to baselines, XTI achieves much better reconstruction and editability without the need to balance these two goals. We conduct a series of extensive experiments to analyze and understand the properties of the new space, and to showcase the effectiveness of our method for personalizing text-to-image models. Furthermore, we utilize the unique properties of this space to achieve previously unattainable results in object-style mixing using text-to-image models.

1 Introduction

Neural generative models have advanced the field of image synthesis, allowing us to create incredibly expressive and diverse images. Yet, recent breakthroughs in text-to-image models based on large language-image models have taken this field to new heights and stunned us with their ability to generate images from textual descriptions, providing a powerful tool for creative expression, visualization, and design. Recent text-to-image models typically use a reverse diffusion process to generate images from noisy tensors, performed with a U-Net denoiser model. The network uses multiple cross-attention layers, at different resolutions, to inject information from a conditioning text prompt. Figure 1 (left) shows the common text-conditioning flow: the textual prompt embedding $p$, an element of the space that we denote as $P$, is passed to multiple cross-attention layers of the denoising U-net model. In this paper, we introduce the Extended Textual Conditioning space. This space, referred to as the $P+$ space, consists of $n$ textual conditions $\{p_1, p_2, \ldots, p_n\}$, where each $p_i$ is injected into the corresponding $i$-th cross-attention layer in the U-net (see Figure 1 (right)). We show that the $P+$ space is more expressive and disentangled, and provides better control over the synthesized image. As will be analyzed in this paper, different layers tend to control different aspects of the synthesized image. In particular, the coarse layers primarily affect the structure of the image, while the fine layers predominantly influence its appearance.

Figure 1: $P$ vs. $P+$. Standard textual conditioning, where a single text embedding is injected into the network (left), vs. our proposed extended conditioning, where different embeddings are injected into different layers of the U-net (right).

Figure 2: **Shape-Style Mixing in XTI.** The extended textual space allows subject mixing conducted by two separate extended textual inversions (XTIs). The inversion of the kitten (right) is injected into the coarse inner layers of the U-net, affecting the shape of the generated image, and the inversion of the cup (left) is injected into the outer layers, affecting the style and appearance.
Our Extended Textual Conditioning space paves the way to a particularly exciting advancement in the domain of personalization of text-to-image models [Gal et al., 2022; Ruiz et al., 2022], where the model learns to reproduce, in different contexts, a specific subject depicted in a few input images. This inversion process results in a new conditioning token that represents the subject, which can then be employed in a text prompt to produce diverse and novel images where the subject appears in a new context. To this end, we introduce Extended Textual Inversion (XTI), where a subject portrayed in a few images is represented as a set of token embeddings, one per layer. Our findings indicate that the optimized embeddings in XTI not only converge faster than those in the Textual Inversion baseline, but also enhance reconstruction quality without sacrificing editability. Furthermore, we leverage the distinctive characteristics of $P+$ to advance the state of the art in object-appearance mixing through text-to-image generation. Specifically, we insert inverted tokens of diverse subjects into different layers to capitalize on the inherent shape-style disentanglement exhibited by these layers. This approach enables us to achieve previously unattainable results, as shown in Figure 2. In summary, the contributions of our paper are:

1. We introduce $P+$, the Extended Textual Conditioning space, which is represented by a per-layer token embedding. $P+$ is more expressive and disentangled, allowing for better control over different aspects of the synthesized image's structure and appearance.

2. We propose the Extended Textual Inversion (XTI) method to represent a subject in $P+$ using a set of token embeddings, improving convergence speed and reconstruction quality (compared to Textual Inversion) without sacrificing editability.

3. We demonstrate previously unattainable results in object-appearance mixing through text-to-image generation.

2 RELATED WORKS

2.1 EXTENDED SPACES IN GENERATIVE MODELS

Neural sub-spaces in generative models have been explored extensively, most notably in StyleGAN [Karras et al., 2020, 2019]. The extended textual conditioning space $P+$ is reminiscent of StyleGAN's extended latent space [Abdal et al., 2019; 2020], also commonly referred to as $W^+$. Similar to $W^+$, $P+$ is significantly more expressive: instead of a single code shared by all layers, there is one per layer. However, while $W^+$ is an extended latent space, here the extended space relates to the textual conditions used by the network. It should be noted, though, that while \( W^+ \) is expressive, the extended code is less editable [Tov et al., 2021]. In contrast, \( P+ \) remains practically as editable as \( P \). In addition, other sub-spaces that lie within deeper, more disentangled layers (Wu et al., 2021) have been explored and exploited in various editing and synthesis applications [Bermano et al., 2022]. In the case of text-to-image diffusion models, the denoising U-net, which is the core model of most text-to-image diffusion models, is usually conditioned on text prompts via a set of cross-attention layers [Ramesh et al., 2022; Rombach et al., 2021; Saharia et al., 2022]. In many neural architectures, different layers are responsible for different abstraction levels [Bau et al., 2020; Karras et al., 2019; Voynov & Babenko, 2020; Zeiler & Fergus, 2014; Ghiasi et al., 2022].
It is natural to anticipate that the diffusion denoising U-Net backbone operates in a similar manner, with different textual descriptions and attributes proving beneficial at different layers.

### 2.2 Text-Driven Editing

Recently, there have been significant advances in generating images from textual inputs [Chang et al., 2023; Ramesh et al., 2022; Rombach et al., 2021; Saharia et al., 2022], most of which exploit the powerful architecture of diffusion models [Ho et al., 2020; Rombach et al., 2021; Sohl-Dickstein et al., 2015; Song et al., 2020; Song & Ermon, 2019]. In particular, recent works have attempted to adapt text-guided diffusion models to the fundamental problem of single-image editing, aiming to exploit the rich and diverse semantic knowledge of this generative prior. In a pioneering attempt, Meng et al. (2021) add noise to the input image and then perform a denoising process from a predefined step. Yet, they struggle to accurately preserve the input image details, which other works preserve with a user-provided mask [Avrahami et al., 2022b,a; Nichol et al., 2021]. DiffEdit [Couairon et al., 2022] employs DDIM inversion for image editing, but to prevent any resulting distortion, it automatically generates a mask that allows background preservation. Text-only editing approaches split into those that support global editing [Crowson et al., 2022; Kim & Ye, 2021; Kwon & Ye, 2021; Patashnik et al., 2021; Liew et al., 2022] and those that support local editing [Bar-Tal et al., 2022; Wang et al., 2022]. Prompt-to-prompt [Hertz et al., 2022] introduces an intuitive editing technique that enables the manipulation of local or global details by injecting internal cross-attention maps. To allow prompt-to-prompt to be applied to real images, Null-Text Inversion [Mokady et al., 2022] is proposed as a means to invert real images into the latent space of the diffusion model. Imagic [Kawar et al., 2022a] and UniTune [Valevski et al., 2022] have demonstrated impressive text-driven editing capabilities, but require costly fine-tuning of the model. InstructPix2Pix [Brooks et al., 2023], Plug-and-Play [Tumanyan et al., 2022], and pix2pix-zero [Parmar et al., 2023] allow users to input an instruction or target prompt and manipulate real images accordingly.

### 2.3 Personalization

Synthesizing particular concepts or subjects that are not widespread in the training data is a challenging task. It requires an inversion process that, given input images, enables regenerating the depicted object with a generative model. Inversion has been studied extensively for GANs [Bermano et al., 2022; Creswell & Bharath, 2018; Lipton & Tripathi, 2017; Xia et al., 2021; Yeh et al., 2017; Zhu et al., 2016], ranging from latent-based optimization [Abdal et al., 2019, 2020] and encoders [Richardson et al., 2020; Tov et al., 2021] to feature-space encoders [Wang et al., 2021] and fine-tuning of the model [Alaluf et al., 2021; Roich et al., 2022; Nitzan et al., 2022]. The notion of personalization of text-to-image models has been shown to be a powerful technique. Personalization of models [Kumari et al., 2023; Ruiz et al., 2022] in general, or of text tokens only [Gal et al., 2022], has quickly been adapted for various applications [Kawar et al., 2022b; Lin et al., 2022]. In addition to their high computational cost, current methods face a clear trade-off between learning tokens that accurately capture concepts and avoiding overfitting.
This can result in learned tokens that are overly tuned to the input images, limiting their ability to generalize to new contexts or generate novel variations of the concept. Similar to Textual Inversion (TI), our approach does not require any fine-tuning or modification of the weights, and thus reduces the risk of overfitting and of degrading editability. In contrast, our inversion process into \( P+ \) is both faster and more precise, thanks to the greater number of tokens, which improves reconstruction without sacrificing editability.

Figure 3: **Per-layer Prompting**. We provide different text prompts (a precursor to \( P+ \)) to different cross-attention layers in the denoising U-net. We see that color ("red", "green") is determined by the fine outer layers and content ("cube", "lizard") is determined by the coarse inner layers.

### 3 EXTENDED CONDITIONING SPACE

To engage the reader, we begin with a simple experiment on the publicly available Stable Diffusion model [Rombach et al., 2022]. We partitioned the cross-attention layers of the denoising U-net into two subsets: coarse layers with low spatial resolution and fine layers with high spatial resolution. We then used two conditioning prompts, "red cube" and "green lizard", and injected one prompt into one subset of cross-attention layers while injecting the second prompt into the other subset. The resulting generated images are provided in Figure 3. Notably, in the first run, the model generates a red lizard, taking the subject from the coarse layers' text conditioning and the appearance from the fine layers' conditioning. Similarly, in the second run, it generates a green cube, once again taking the appearance from the fine layers and the subject from the coarse layers. This experiment suggests that the conditioning mechanism at different resolutions processes prompts differently, with different attributes exerting greater influence at different levels. With this in mind, our work aims to further explore this phenomenon and its potential applications. In the following parts, we present the Extended Textual Conditioning space (\( P+ \)), outlining its principal attributes. We then introduce Extended Textual Inversion (XTI), demonstrating how \( P+ \) can be leveraged to improve the trade-off between reconstruction and editability in the original Textual Inversion approach.

#### 3.1 \( P+ \) DEFINITION

Let \( P \) denote the textual-conditioning space. \( P \) refers to the space of token embeddings that are passed into the text encoder in a text-to-image diffusion model. To clarify the definition of this space, we provide a brief overview of the process that a given text prompt undergoes in the model before being injected into the denoising network. Initially, the text tokenizer splits an input sentence into tokens, with a special token marking the end of the sentence (EOS). Each token corresponds to a pre-trained embedding that is retrieved from the embedding look-up table. Subsequently, these embeddings are concatenated, passed through a pre-trained text encoder, and then injected into the cross-attention layers of the U-net model. In our work, we define \( P \) as the set of individual token embeddings that are passed to the text encoder. The process of injecting a text prompt into the network for a particular cross-attention layer is illustrated in Figure 4.
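Before formalizing the extended space, the coarse/fine prompt-splitting experiment above can be sketched with a toy stand-in for the U-net's cross-attention layers. The layer layout follows the Stable Diffusion 1.x description in Section 4; everything else (the class names, the random stand-ins for the encoded prompts, the threshold splitting coarse from fine at resolution 16) is our choice and purely illustrative.

```python
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class CrossAttnLayer:
    resolution: int                       # spatial resolution the layer acts at
    cond: Optional[torch.Tensor] = None   # per-layer text conditioning

def toy_unet_layers():
    # 16 cross-attention layers as in SD 1.x (Section 4): down path
    # 64,64,32,32,16,16 -> 8x8 bottleneck -> up path 16,16,16,32,32,32,64,64,64.
    res = [64, 64, 32, 32, 16, 16, 8, 16, 16, 16, 32, 32, 32, 64, 64, 64]
    return [CrossAttnLayer(r) for r in res]

# Stand-ins for the two encoded prompts (in practice, CLIP text embeddings).
coarse_cond = torch.randn(77, 768)   # "green lizard" -> coarse layers (subject)
fine_cond = torch.randn(77, 768)     # "red cube"     -> fine layers (appearance)

for layer in toy_unet_layers():
    # Coarse (low-resolution) layers receive one prompt, fine layers the
    # other; per Figure 3, this combination yields a red lizard.
    layer.cond = coarse_cond if layer.resolution <= 16 else fine_cond
```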
We next present the *Extended Textual Conditioning* space, denoted by \( P+ \), which is defined as:

\[ P+ := \{ p_1, p_2, ..., p_n \}, \]

where \( p_i \in P \) represents an individual token embedding corresponding to the \( i \)-th cross-attention layer in the denoising U-net. Figure 4 illustrates the conceptual difference between the two spaces, \( P \) (left) and \( P+ \) (right). With the definition of the new space, our diffusion model, previously conditioned on a single prompt, can now synthesize images conditioned on a series of prompts, each associated with the corresponding cross-attention layer.

Figure 4: Text-conditioning mechanism of a denoising diffusion model. The prompt "a cat" is tokenized, processed by a pretrained textual encoder, and fed into a cross-attention layer. Each of the three bars on the left represents a token embedding in $P$.

As $P$ is a subspace of $P+$, we naturally inquire about the advantages of synthesizing in the extended space. In Section 4, we present an analysis of the properties of the new space, which showcases a higher degree of control over various attributes. Specifically, different layers are found to dominate different attributes, such as style, color, and structure. We continue this analysis in Section A.1 of the supplementary. A notable benefit of this space is its potential for enhancing textual inversion. We next demonstrate how the extended space can be utilized to represent subjects with greater fidelity while maintaining the capability for editing.

3.2 Extended Textual Inversion (XTI)

Given a set of images $\mathcal{I} = \{I_1, \ldots, I_k\}$ of a specific concept, the goal of the Textual Inversion (TI) operation (Gal et al., 2022) is to find a representation of the concept in the conditioning space $P$. They add a new textual token, associated with the concept, to the tokenizer model (see Figure 4, left part). This new token corresponds to an optimizable token embedding $e \in P$ processed by the textual encoder. This embedding is optimized with respect to the standard diffusion denoising loss $L_{TI}$ for images sampled from $\mathcal{I}$ and is then used to reproduce the concept. To motivate the need for an extended space, we start with the following experiment. Given the newly added embedding $e$, we calculate the contribution of each of the cross-attention layers to the gradients of the learned embedding. The gradients contributed by the $i$-th layer are calculated via $g_i = \frac{\partial L_{TI}}{\partial e}$, where the backpropagation of the gradients from the loss is done only through the $i$-th cross-attention layer. Figure 5 depicts the expected dot product between normalized gradients $\mathbb{E}\langle \frac{g_i}{\|g_i\|}, \frac{g_j}{\|g_j\|} \rangle$ of every two cross-attention layers $i, j$, averaged over different noises and images. It can be seen that gradients propagated from different cross-attention layers have lower correlation than gradients propagated from the same layers. Moreover, different layers may produce gradients that oppose each other. This observation motivated us to optimize a distinct embedding for each layer, utilizing the varied contributions of the different layers to the synthesis process. We next explain how Extended Textual Inversion (XTI) is performed. First, we add $n$ new textual tokens $t_1, \ldots, t_n$ to the tokenizer model, associated with $n$ new token-embedding look-up table elements $e_1, \ldots, e_n$. Then, similarly to Gal et al. (2022), we optimize the token embeddings with the objective of predicting the noise of noisy images from $\mathcal{I}$, while the token embeddings are injected into the network, one token per layer. In practice, we employ a collection of placeholder sentences denoted by $\Pi = \{P_1, \ldots, P_m\}$, each containing a special placeholder symbol "{ }" to represent the location where the tokens $t_1, \ldots, t_n$ are inserted (e.g., "A photograph of { }"). We denote by $P_i(t_1, \ldots, t_n)$ the set of $n$ sentences where the special symbol "{ }" is substituted with the tokens $t_1, \ldots, t_n$. Assuming that the denoising U-net is parameterized by a set of parameters denoted by $\theta$ and operates within the extended conditioning space as previously described, we define the reconstruction objective for the embeddings $e_1, \ldots, e_n$ that correspond to the tokens $t_1, \ldots, t_n$ as follows:

$$L_{XTI} = \mathbb{E}_{P \sim \Pi, I \sim \mathcal{I}, \varepsilon \sim \mathcal{N}(0, 1)} \|\varepsilon - \varepsilon_\theta(I_t | t, P(t_1, \ldots, t_n))\|_2^2$$

where $I_t$ is the image $I$ noised with the additive noise $\varepsilon$ according to the noise level $t$, and $\varepsilon_\theta$ is the noise predicted by the model. Since we operate with a latent diffusion model, we always assume that $I$ is a latent image representation. The new look-up table embeddings $e_1, \ldots, e_n$ that correspond to $t_1, \ldots, t_n$ are optimized w.r.t. $L_{XTI}$.
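A minimal, self-contained sketch of this optimization follows. A toy module stands in for the frozen denoising U-net, the placeholder-sentence substitution is elided, and the batch size and step count follow Section 4.1; everything else is illustrative rather than the actual implementation.

```python
import torch
import torch.nn as nn

n_layers, dim = 16, 32            # 16 cross-attention layers; toy dimension

class ToyDenoiser(nn.Module):
    """Frozen stand-in for the denoising U-net: each 'layer' consumes its
    own per-layer conditioning (the P+ mechanism)."""
    def __init__(self):
        super().__init__()
        self.mix = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))

    def forward(self, x, conds):
        h = x
        for layer, c in zip(self.mix, conds):
            h = h + layer(c)              # inject per-layer conditioning
        return h

denoiser = ToyDenoiser().requires_grad_(False)             # model stays frozen
tokens = nn.Parameter(0.01 * torch.randn(n_layers, dim))   # e_1..e_n (XTI)
opt = torch.optim.AdamW([tokens], lr=5e-3)

for step in range(500):                   # 500 steps, as in Section 4.1
    x0 = torch.randn(8, dim)              # batch of toy 'latent images'
    noise = torch.randn_like(x0)
    noisy = x0 + noise                    # schematic forward noising
    conds = [tokens[i].expand(8, dim) for i in range(n_layers)]
    loss = ((noise - denoiser(noisy, conds)) ** 2).mean()   # L_XTI
    opt.zero_grad(); loss.backward(); opt.step()
```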
4 EXPERIMENTS AND EVALUATION

In this section, we present a comprehensive evaluation of our proposed XTI approach on the personalization task, encompassing quantitative, qualitative, and user-study analyses. For more details about the user-study setting, please refer to the supplementary material. In supplementary Section A.1, we conduct an in-depth analysis of the various properties exhibited by the U-net cross-attention layers and investigate how these characteristics are distributed across the layers. This analysis provides motivation for the effectiveness of our $P+$ space. In all of our experiments, we use the Stable Diffusion 1.4 model [Rombach et al., 2022], a latent diffusion model whose denoising U-net operates on an autoencoded image latent space. It is built on top of CLIP [Radford et al., 2021], whose token embedding is represented by a vector with 768 entries, such that $P \subseteq \mathbb{R}^{768}$. The U-net has four spatial resolution levels: 8x8, 16x16, 32x32, and 64x64. The 16, 32, and 64 resolution levels each have two cross-attention layers on the downward (contracting) path and three cross-attention layers on the upward (expansive) path. Resolution 8 has only one cross-attention layer. Thus, there are a total of 16 cross-attention layers and 16 conditional token embeddings that comprise our $P+ \subseteq \mathbb{R}^{768 \times 16}$ space.

4.1 XTI EVALUATION

We evaluate our proposed XTI and compare our results to the original Textual Inversion (TI) [Gal et al., 2022]. We use a combined dataset of the TI dataset of 9 concepts and the dataset from [Kumari et al., 2023] with 6 concepts. For both datasets, each concept has 4-6 original images. We focus on TI as a baseline because it is a model-preserving inversion approach that does not fine-tune the model weights. In contrast, fine-tuning approaches like DreamBooth [Ruiz et al., 2022] and Custom Diffusion [Kumari et al., 2023] explicitly embed the concept within the model's output domain and thus have excellent reconstruction. However, they have several disadvantages.
Firstly, they risk destroying the model's existing prior (catastrophic forgetting). Secondly, they have several orders of magnitude more parameters. Recent work with Low-Rank Adaptation (LoRA) [Hu et al., 2021; Ryu, 2022] reduces the number of fine-tuned parameters to a fraction, but this is still about $\sim 100\times$ more than XTI. Lastly, they are difficult to scale to multiple concepts, since the fine-tuned parameters for each concept have to be merged. Nevertheless, we show DreamBooth as an alternative baseline for quantitative metrics. We used a batch size of 8 and performed 5000 optimization steps for Textual Inversion, consistent with the original paper. However, since we use Stable Diffusion rather than the Latent Diffusion Model from [Rombach et al., 2022] used in the original paper, we opted for a reduced learning rate of 0.005 without scaling, which worked better in our experiments. For our proposed XTI, we used the same hyperparameters as for Textual Inversion, except for the number of optimization steps, which we reduced to 500, resulting in a significantly faster optimization time; all other hyperparameters were shared between TI and XTI. On 2x Nvidia A100 GPUs, the whole optimization takes $\sim 15$ minutes for XTI compared to $\sim 80$ minutes for TI.

4.1.1 Quantitative Evaluation

Following Gal et al. (2022), to evaluate the editability of the inversions, we use the average cosine similarity between CLIP embeddings of the generated images and the prompts used to generate them (Text Similarity). To measure the distortion of the generated images from the original concept (Subject Similarity), we use the average pairwise cosine similarity between ViT-S/16 DINO (Caron et al., 2021) embeddings of the generated images and the original dataset images. Ruiz et al. (2022) argued that DINO embeddings better capture differences between images of the same class than CLIP embeddings, owing to DINO's self-supervised training. All the methods reported in Figure 6 are evaluated over 15 subjects from Gal et al. (2022) and Kumari et al. (2023), each generated with 14 different prompt templates that place the concept in a novel context (e.g., "A photograph of {} in the jungles"; see Section A.7.3 in the supplementary for details). For each test concept and prompt, we generated 32 images, for a total of $15 \times 14 \times 32 = 6720$ images. We fix the generation seed across different methods.
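The two metrics above can be computed directly from their definitions; in this sketch, random tensors stand in for real CLIP and DINO embeddings, and the embedding dimensions are illustrative:

```python
import torch
import torch.nn.functional as F

# Placeholder embeddings; in practice these come from the CLIP text/image
# encoders and the ViT-S/16 DINO encoder, respectively.
gen_clip = F.normalize(torch.randn(32, 512), dim=-1)   # generated images (CLIP)
prompt_clip = F.normalize(torch.randn(512), dim=-1)    # the text prompt (CLIP)
gen_dino = F.normalize(torch.randn(32, 384), dim=-1)   # generated images (DINO)
real_dino = F.normalize(torch.randn(5, 384), dim=-1)   # original concept images

# Text Similarity: mean cosine similarity between generations and the prompt.
text_sim = (gen_clip @ prompt_clip).mean()

# Subject Similarity: mean pairwise cosine similarity between generated and
# original concept images in DINO space.
subject_sim = (gen_dino @ real_dino.T).mean()
print(float(text_sim), float(subject_sim))
```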
In Figure 6, we report the evaluation of the proposed Extended Textual Inversion (XTI). Alongside Textual Inversion (Gal et al., 2022), we also include DreamBooth (Ruiz et al., 2022) for comparison, which is not a model-preserving method. Notably, XTI outperforms TI on both subject and text similarity despite using 10x fewer training steps. We also report TI using 500 optimization steps, the number of steps we use for XTI. This improves Text Similarity, because fewer optimization steps prevent the optimized token embedding from drifting out of distribution; however, it degrades reconstruction as measured by Subject Similarity. We also report subject inversion in a data-hungry setup, where the subject is represented with only a single image. Notably, even in this extreme setting, the proposed XTI performs better than multi-image TI in terms of subject similarity. For single-image training, we reduce the learning rate to 0.001 in all runs to better prevent overfitting. Figure 19 in the supplementary provides a visual comparison of TI and XTI inversions in the single-image setting. We omit single-image DreamBooth results from Figures 6 and 19 due to its comparatively poor performance, namely a Text Similarity of 0.25 and a Subject Similarity of 0.40. In particular, we found DreamBooth in this single-image setting to be prone to overfitting and difficult to optimize.

4.1.2 Human Evaluation

Figure 7 shows a visual comparison of our XTI approach with the original TI. Our method exhibits less distortion of the original concept and better fidelity to the target prompt. To assess the efficacy of our proposed method from a human perspective, we conducted a user study. The study, summarized in Table 1, asked participants to evaluate both Textual Inversion (TI) and Extended Textual Inversion (XTI) based on their fidelity to the original subject and to the given prompt. The results show a clear preference for XTI on both subject and text fidelity. Further qualitative results are presented in the supplementary, Section A.2.

Table 1: User study preferences for subject and text fidelity for TI and XTI. See the supplementary material for more details.

| Method | Subject Fidelity | Text Fidelity |
|-----------------|------------------|---------------|
| Textual Inversion | 24% | 27% |
| XTI (Ours) | **76%** | **73%** |

Figure 7: **Textual Inversion (TI) vs. Extended Textual Inversion (XTI).** Column 1: Original concepts. Column 2: TI results. Column 3: XTI results. It can be seen that XTI exhibits superior subject and prompt fidelity, as corroborated by the results of our user study.

### 4.2 Embedding Density

As the textual embeddings inverted with XTI have better editability properties than those of the original TI, this suggests that these tokens are better aligned with the original tokenizer look-up table embeddings, which represent the manifold of natural-language embeddings. To quantify this intuition, we evaluate the density of the newly optimized tokens with respect to the original "natural" token look-up table embeddings. We perform kernel density estimation (KDE) in the space of look-up table token embeddings. Let us define $\mathcal{E}$ to be the set of all original token look-up table embeddings, before adding the extra optimized token(s). Assuming that $\mathcal{E}$ is sampled from some continuous distribution, one can approximate its log-density at a point $x$ as:

$$\log p_{\mathcal{E}}(x) \approx \log \left( \frac{1}{|\mathcal{E}|} \sum_{e \in \mathcal{E}} K(x - e) \right), \quad (2)$$

where $K$ is the Gaussian kernel density function [Parzen, 1962; Rosenblatt, 1956]. For the embeddings optimized with the original TI, this quantity always appears to be significantly smaller than the densities at the original embeddings $\mathcal{E}$. Figure 8 illustrates the density distribution of the original tokens and the densities of the textual-inversion tokens. This demonstrates that XTI provides embeddings that are closer to the original distribution, enabling more natural reconstruction and better editability.
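A sketch of this density estimate (Eq. 2); the kernel bandwidth is illustrative, the constant normalizer of the Gaussian kernel is dropped, and in practice $\mathcal{E}$ would be the text encoder's actual token-embedding table rather than a random stand-in:

```python
import torch

def log_density(x, E, bandwidth=0.1):
    """Gaussian-KDE log-density of embedding x under the look-up table E
    (Eq. 2), up to the kernel's constant normalizer.
    x: (d,) tensor, E: (num_tokens, d) tensor."""
    sq = ((E - x) ** 2).sum(dim=-1) / (2 * bandwidth ** 2)
    # log of the mean of Gaussian kernels, computed stably via logsumexp.
    return torch.logsumexp(-sq, dim=0) - torch.log(torch.tensor(float(len(E))))

E = torch.randn(49408, 768)   # stand-in for the CLIP token table (illustrative)
inverted = torch.randn(768)   # an optimized TI/XTI embedding
print(float(log_density(inverted, E)))
```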
### 5 Style Mixing Application

As we showed earlier, different layers of the denoising U-net are responsible for different aspects of a synthesized image. This allows us to combine the *shape* of one inverted concept with the *appearance* of another inverted concept. We call this Style Mixing. Let us consider two independent XTI inversions of two different concepts. We can combine the inversions by passing tokens from different subjects to different layers, as illustrated in Figure 2. This mixed conditioning produces an image with a coarse geometry from the first concept and an appearance from the second concept. Formally, we are given two extended prompts, \( \{p_1, \ldots, p_n\} \) and \( \{q_1, \ldots, q_n\} \), and form a new extended prompt \( \{p_1, \ldots, p_k, q_{k+1}, \ldots, q_K, p_{K+1}, \ldots, p_n\} \) with separators \( 1 \leq k < K \leq n \), as sketched below.
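A minimal sketch of this recombination; strings stand in for the per-layer token embeddings, and the 0-indexed slicing mirrors the 1-indexed set above:

```python
def mix_extended_prompts(p, q, k, K):
    """Form {p_1,...,p_k, q_{k+1},...,q_K, p_{K+1},...,p_n}: layers k+1..K
    take their conditioning from concept q, all others from concept p."""
    assert 1 <= k < K <= len(p) == len(q)
    return p[:k] + q[k:K] + p[K:]

# Toy usage with n = 16 per-layer tokens. Routing q to the middle band puts
# it in the coarse inner layers, so q roughly contributes shape while p,
# conditioning the outer fine layers, contributes appearance.
p = [f"skull_mug_{i}" for i in range(1, 17)]   # appearance source
q = [f"cat_statue_{i}" for i in range(1, 17)]  # shape source
print(mix_extended_prompts(p, q, k=5, K=11))
```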
Our observations indicate that optimizing XTI with an additional density regularization loss term, as indicated in (2), enhances its ability to mix objects and styles without compromising the quality of the inversion output. More details are provided in the supplementary material. Figure 9 (right) demonstrates the combination of the "skull mug" and "cat statue" concepts from Gal et al. (2022). Different rows of the plot correspond to different blending ranges \( k, K \). From top to bottom, we gradually expand the range from the middle coarse layer to all the cross-attention layers. This range \((k, K)\) gives control over the amount of detail we want to bring from one inversion to the other. Figure 9 (left) shows a variety of examples generated with this method. Both shape and appearance are inherited remarkably well. For more examples and qualitative and quantitative comparisons to baselines, we refer to the supplementary.

Figure 9: Style Mixing in \( P+ \). *Left:* rows generated by varying the degree of mixing by adjusting the proportion of layers conditioned on either of the two \( P+ \) inversions. *Right:* more style-mixing examples. The top row shows shape source concepts; the first column shows appearance source concepts.

### 6 CONCLUSIONS, LIMITATIONS, AND FUTURE WORK

We have presented \( P+ \), an extended conditioning space, which provides increased expressivity and control. We have analyzed this space and showed that the denoising U-net demonstrates per-layer specialization, where different layers exhibit different sensitivity to shape or appearance attributes. The competence of \( P+ \) is demonstrated on the Textual Inversion problem. Our Extended Textual Inversion (XTI) is shown to be more accurate, more expressive, more controllable, and significantly faster. Yet, surprisingly, we have not observed any reduction in editability. The performance of XTI, although impressive, is not flawless. Firstly, it does not perfectly reconstruct the concept in the image, and in that respect it is still inferior to the reconstruction that can be achieved by fine-tuning the model. Secondly, although XTI is significantly faster than TI, it is still a rather slow process. Lastly, the disentanglement among the layers of the U-net is not perfect, limiting the degree of control that can be achieved through prompt mixing. An interesting research avenue is to develop encoders that invert one or a few images into \( P+ \), possibly in the spirit of Gal et al. (2023), or to study the impact of applying fine-tuning in conjunction with operating in \( P+ \).

REFERENCES

Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4432–4441, 2019.

Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan++: How to edit the embedded images? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8296–8305, 2020.

Yuval Alaluf, Omer Tov, Ron Mokady, Rinon Gal, and Amit H. Bermano. Hyperstyle: Stylegan inversion with hypernetworks for real image editing, 2021.

Omri Avrahami, Ohad Fried, and Dani Lischinski. Blended latent diffusion. arXiv preprint arXiv:2206.02779, 2022a.

Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18208–18218, 2022b.

Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, and Tali Dekel. Text2live: Text-driven layered image and video editing. arXiv preprint arXiv:2204.02491, 2022.

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences, 2020. ISSN 0027-8424. doi: 10.1073/pnas.1907375117. URL https://www.pnas.org/content/early/2020/08/31/1907375117.

Amit H Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Oren Patashnik, and Daniel Cohen-Or. State-of-the-art in the architecture, methods and applications of stylegan. In Computer Graphics Forum, volume 41, pp. 591–611. Wiley Online Library, 2022.

Tim Brooks, Aleksander Holynski, and Alexei A. Efros. Instructpix2pix: Learning to follow image editing instructions. In CVPR, 2023.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. CoRR, abs/2104.14294, 2021. URL https://arxiv.org/abs/2104.14294.

Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023.

Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. Diffedit: Diffusion-based semantic image editing with mask guidance. arXiv preprint arXiv:2210.11427, 2022.

Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. IEEE transactions on neural networks and learning systems, 30(7):1967–1974, 2018.

Katherine Crowson, Stella Biderman, Daniel Kornis, Dashiel Stander, Eric Hallahan, Louis Castricato, and Edward Raff. Vqgan-clip: Open domain image generation and editing with natural language guidance. arXiv preprint arXiv:2204.08583, 2022.

Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022.

Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Designing an encoder for fast personalization of text-to-image models. arXiv preprint arXiv:2302.12228, 2023.

Amin Ghiasi, Hamid Kazemi, Eitan Borgnia, Steven Reich, Manli Shu, Micah Goldblum, Andrew Gordon Wilson, and Tom Goldstein. What do vision transformers learn? a visual exploration. arXiv preprint arXiv:2212.06727, 2022.
m7aPLHwsLr
The low success rates of the attacks (especially GAMMA) might be due to a wrong initialisation. In the appendix, it is written that 200 is used as both the population size and the number of queries, but the number of queries for the GAMMA attack is computed as population_size * iterations. Also, the number of sections used is missing (which is a crucial point for the attack).
DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness

Shoumik Saha, Wenxiao Wang, Yigitcan Kaya, Soheil Feizi & Tudor Dumitras
{smksaha, wwx, cankaya, sfeizi, tudor}@umd.edu
Department of Computer Science
University of Maryland - College Park

Abstract

Machine Learning (ML) models have been utilized for malware detection for over two decades. Consequently, this has ignited an ongoing arms race between malware authors and antivirus systems, compelling researchers to propose defenses for malware-detection models against evasion attacks. However, most, if not all, existing defenses against evasion attacks suffer from sizable performance degradation and/or can defend against only specific attacks, which makes them less practical in real-world settings. In this work, we develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection. Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving the local structures of the executables. After showing how DRSM is theoretically robust against attacks with contiguous adversarial bytes, we verify its performance and certified robustness experimentally, where we observe only marginal accuracy drops as the cost of robustness. To our knowledge, we are the first to offer certified robustness in the realm of static detection of malware executables. More surprisingly, through evaluating DRSM against 9 empirical attacks of different types, we observe that the proposed defense is empirically robust to some extent against a diverse set of attacks, some of which even fall outside the scope of its original threat model. In addition, we collected 15.5K recent benign raw executables from diverse sources, which will be made public as a dataset called PACE (Publicly Accessible Collection(s) of Executables), to alleviate the scarcity of publicly available benign datasets for studying malware detection and to provide future research with data more representative of the present. Our code and dataset are available at https://github.com/ShoumikSaha/DRSM.

1 Introduction

Machine learning (ML) has started to see more and more adoption in static malware detection, as it has in many other mission-critical applications. Traditionally, ML models that use static features [Anderson & Roth, 2018] require a feature engineering step due to the large size and complex nature of programs. More recently, however, researchers have proposed models like MalConv [Raff et al., 2018] that can consume a whole program simply as a raw binary executable, eliminating this step. As expected, there has been a rise in studies showing the adversarial vulnerability of these models in the last few years [Kreuk et al., 2018; Lucas et al., 2021], resulting in an ongoing arms race. Currently, existing defenses, such as non-negative or monotonic classifiers [Fleshman et al., 2018; Incer Romeo et al., 2018] and adversarial training [Lucas et al., 2023], not only introduce sizable drops in standard accuracy but also provide robustness only against specific attacks while remaining vulnerable to the rest. While certified robustness has been studied by many [Cohen et al., 2019; Lecuyer et al., 2019; Salman et al., 2019; Levine & Feizi, 2020a,b], it remains under-explored in the context of malware detection.
To fill this gap, we redesign the de-randomized smoothing scheme, a certified defense originally developed for images [Levine & Feizi, 2020a], to detect malware from raw bytes. With MalConv [Raff et al., 2018] as the base classifier, we use DRSM (De-Randomized Smoothed MalConv) to denote the resulting defense.

Figure 1: Overview of a prototypical adversarial attack on the MalConv and DRSM models. MalConv misclassifies the adversarial malware file as 'benign'. Our DRSM creates ablated sequences of the file and makes predictions on each, among which the majority (winning) class is still 'malware'.

To our knowledge, DRSM is the first defense offering certified robustness for malware executable detection. It is challenging to utilize the de-randomized smoothing scheme in the malware domain due to the inherent differences between images and raw-byte file structures. As a solution, we propose a window ablation scheme that generates a set of ablated sequences by dividing the input sequence into non-overlapping windows. For each of these ablated sequences, we train a base classifier, keeping the ground truth from the original input. At inference, DRSM takes the majority of the predictions from these base classifiers as its final prediction. Figure 1 shows a simplified toy example: an adversarial attack may successfully evade the MalConv model with the presented small changes to the raw executable, but it would still be detected by DRSM if the perturbation cannot manipulate sufficient votes. We find that our DRSM (98.18%) achieves standard accuracy comparable to MalConv (98.61%) and outperforms the prior defense MalConv (NonNeg) (88.36%) by a large margin. Besides our theoretical formulation of DRSM's certified robustness, we show that it can provide up to 53.97% certified accuracy, depending on the attacker's capability. We discuss the performance-robustness trade-offs and the model's adaptability upon demand. Moreover, we evaluate the empirical robustness of our DRSM model against 9 different attacks in both white-box and black-box settings, including attacks outside the intended threat model of de-randomized smoothing. Depending on the attack, even the least robust DRSM model provides 26.5%~87.9% better robustness than MalConv.

A practical difficulty in malware research is collecting benign raw executables, due to copyright and legal restrictions (Anderson & Roth, 2018). Throughout this work, we collected 15.5K fairly recent and diverse benign executables from different sources, which can better represent the real world. These will be made public as a new dataset, namely PACE (Publicly Accessible Collection(s) of Executables), to alleviate the accessibility issue of benign executables and facilitate future research.

Our major contributions include: (1) A new defense, DRSM (De-Randomized Smoothed MalConv), that pioneers certified robustness in the executable malware domain (Section 5). (2) A thorough evaluation of DRSM regarding its performance and certified robustness, which suggests DRSM offers certified robustness with only mild performance degradation (Section 6). (3) A thorough evaluation of DRSM regarding its empirical robustness against 9 empirical attacks covering different settings and types, which suggests DRSM is empirically robust to some extent against diverse attacks (Section 7). (4) A collection of 15.5K benign binary executables from different sources, which will be made public as part of our new dataset PACE (Section 4).

2 RELATED WORK

ML in Static Malware Detection.
There have been several studies of how malware executables can be classified using ML. As early as 2001, Schultz et al. (2001) proposed a data mining technique for malware detection using three different types of static features. Pioneered by Nataraj et al. (2011), CNN-based techniques for malware detection became popular among security researchers, e.g., Kalash et al. (2018); Yan et al. (2018). Eventually, Raff et al. (2018) proposed a static classifier, named MalConv, that takes raw byte sequences and detects malware using a convolutional neural network. We will use it as the base classifier in this work. It is still considered a state-of-the-art model for detection from raw-byte inputs, and its popularity led to a follow-up model, MalConv2 (Raff et al., 2021).

**Adversarial Attacks and Defenses in Malware Detection.** Along with detection research, there has been plenty of research on adversarial attacks against these models. These attacks fall into different categories. For example, the attacks proposed by Kolosnjaji et al. (2018); Kreuk et al. (2018); Suciu et al. (2019) append and/or inject adversarial bytes into the malware, computed via gradients. Demetrio et al. (2019, 2021b); Nisi et al. (2021) proposed attacks that modify or extend DOS and header fields; Demetrio et al. (2021a) extracted payloads from benign files to be appended to and injected into malware files. Recent work by Lucas et al. (2021) used two types of code transformations to generate adversarial samples. For defenses, Fleshman et al. (2018) proposed a defense, MalConv (NonNeg), by constraining the weights in the last layer of MalConv to be non-negative. However, this model achieves a low accuracy of 88.36% and has been shown to be as vulnerable as MalConv in some cases (Wang et al., 2023, 2022; Ceschin et al., 2019). Another defense strategy, adversarial training, cannot guarantee defense against attacks other than the one used during training, which limits its usage: Lucas et al. (2023) showed that training on Kreuk-0.01 degraded the true positive rates to 84.4% $\sim$ 90.1%. Notably, while variants of randomized smoothing schemes have been proposed for vision domains (Cohen et al., 2019; Lecuyer et al., 2019; Salman et al., 2019; Levine & Feizi, 2020a,b) (more details in Appendix A.2), they remain under-explored in the context of malware detection. Although there is concurrent work (Huang et al., 2023) that proposes certified robustness in the malware domain, it differs from ours in terms of the employed smoothing scheme and threat model.

**Limited Accessibility to Benign Executables.** Though there has been a large amount of work on malware detection, most of it was done using private or enterprise datasets with restricted access. Prior works (Anderson & Roth, 2018; Yang et al., 2021a; Downing et al., 2021) explain the copyright issue and published only the feature vectors of benign files (see Table S5). This imposes many constraints on the advancement of malware detection techniques, especially for complete models that require raw executables as inputs.

### 3 BACKGROUND AND NOTATIONS

We denote the set of all byte values as $X = \{0, 1, 2, ..., N - 1\}$, where $N = 256$. A binary file is a sequence of $k$ bytes $x = (x_1, x_2, x_3, ..., x_k)$, where $x_i \in X$ for all $1 \leq i \leq k$. Note that the length $k$ varies across files, so $k$ is not fixed. However, the input vector fed into the network has to be of a fixed dimension. So, the common approach is to pad zeros at the end of $x$ if $k < D$, or to keep only the first $D$ bytes of $x$, fixing the input length to $D$.
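A sketch of this preprocessing (the function name is ours; MalConv-style pipelines typically also shift byte values by +1 so that 0 can serve as a dedicated padding index, a detail elided here):

```python
import numpy as np

def to_fixed_length(path, D=2_000_000):
    """Read a raw executable and fix its length to D bytes: truncate if
    longer, zero-pad at the end if shorter (Section 3)."""
    raw = np.fromfile(path, dtype=np.uint8)
    x = np.zeros(D, dtype=np.uint8)
    n = min(len(raw), D)
    x[:n] = raw[:n]
    return x
```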
The common approach is therefore to pad zeros at the end of $x$ if $k < D$, or to keep only the first $D$ bytes of $x$, fixing the input length to $D$.

3.1 BASE CLASSIFIER

In this work, we use the state-of-the-art static malware detection model to date, MalConv (Raff et al., 2018), as our base classifier. While there are other models for malware detection, such as the Ember GBDT (Anderson & Roth, 2018), note that these models work on a specified feature format requiring an extra feature-extraction step, whereas our model directly takes raw binary executables.

Let us represent the MalConv model (see Figure 7) as $F_\theta : X^D \rightarrow [0, 1]$ with a set of parameters $\theta$ learned through training. If the output $F_\theta(x)$ is greater than 0.5, the prediction is considered 1, i.e., malicious, and vice versa. We set the input length to 2MB, following the original paper. MalConv takes each byte $x_i$ from the file $x$ and passes it to an embedding layer with an embedding matrix $Z \in \mathbb{R}^{N \times 8}$, which generates an embedding vector $z_i = \phi(x_i)$ of 8 elements. This vector is then passed through two convolution layers, using ReLU and sigmoid activation functions. These activation outputs are combined through a gating mechanism that performs an element-wise multiplication to mitigate the vanishing gradient problem. The output is then fed into a temporal max-pooling layer, followed by a fully connected layer. Finally, a softmax layer calculates the probability.
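To make the architecture above concrete, below is a minimal PyTorch sketch of a MalConv-style classifier, not the authors' exact implementation. The 8-dimensional embedding, 128 channels, and kernel size/stride of 500 follow the original MalConv paper; the dedicated padding token and the single sigmoid output (in place of a two-class softmax) are simplifying assumptions.

```python
import torch
import torch.nn as nn

class MalConvSketch(nn.Module):
    """Simplified MalConv-style classifier over raw bytes (illustrative sketch)."""

    def __init__(self, num_bytes=256, embed_dim=8, channels=128,
                 kernel_size=500, stride=500, max_len=2_000_000):
        super().__init__()
        self.max_len = max_len
        # Index 256 is reserved as a padding token (an implementation choice).
        self.embed = nn.Embedding(num_bytes + 1, embed_dim, padding_idx=num_bytes)
        # Two parallel convolutions whose outputs are combined by gating.
        self.conv = nn.Conv1d(embed_dim, channels, kernel_size, stride=stride)
        self.gate = nn.Conv1d(embed_dim, channels, kernel_size, stride=stride)
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):
        # x: (batch, max_len) integer byte values in [0, 256], padded/truncated.
        z = self.embed(x).transpose(1, 2)                            # (batch, 8, L)
        g = torch.relu(self.conv(z)) * torch.sigmoid(self.gate(z))   # gated activations
        h = torch.max(g, dim=2).values                               # temporal max pooling
        return torch.sigmoid(self.fc(h)).squeeze(-1)                 # P(malicious)

def bytes_to_tensor(raw: bytes, max_len=2_000_000, pad_value=256):
    """Pad with the padding token, or truncate, to the fixed length D = max_len."""
    data = list(raw[:max_len])
    data += [pad_value] * (max_len - len(data))
    return torch.tensor(data, dtype=torch.long).unsqueeze(0)
```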
3.2 THREAT MODEL

We assume that the attacker has full knowledge of the base classifier, including its architecture and model parameters. This is typically referred to as the white-box setting. The white-box setting considers potentially strong attackers, which is desirable when assessing defenses. In the primary threat model that we consider when developing our defense, the attacker can modify any existing bytes or add (append or insert) extra bytes bounded in a contiguous portion of the input sample at test time to evade the model. So, the goal of the attacker is to generate such a perturbation $\delta$ that can be applied to a malware file $x$ to produce an adversarial malware $x'$ for which $F_\theta(x') < 0.5$, i.e., the classifier predicts it as a benign file. Here, the attacker knows the classifier model $F$ and its parameters $\theta$, and can modify the original malware file $x$.

However, finding the perturbation $\delta$ in a binary file is more challenging than in vision due to the file's inherent structure. With an arbitrary change to a malware file, the file can lose its semantics, i.e., its malicious functionality; in the worst case, the file can get corrupted. Even with such challenges in binary modification, prior attacks have been successful by adding contiguous adversarial bytes at the end (Kreuk et al., 2018) or at other locations (Suciu et al., 2019; Demetrio et al., 2021a,b), or by modifying bytes at specific locations (Demetrio et al., 2019; Nisi et al., 2021), to evade a model. Though the attacks bounded in one contiguous portion fall within our primary threat model, for the empirical robustness evaluation we also include attacks that can affect multiple different parts of the file. In addition, we consider recent, more sophisticated attacks (Lucas et al., 2021, 2023) where the attacker has the power to disassemble malware and apply different code transformations at any place in the file. For coherence, we defer the details about these attacks to Section 7, where we evaluate the empirical robustness of our defenses against them.

4 A NEW PUBLICLY AVAILABLE DATASET—PACE

Like other domains, malware detection suffers from concept drift. Previously, Yang et al. (2021b); Jordaney et al. (2017); Barbero et al. (2022) demonstrated how concept drift can have a disastrous impact on ML-based malware detection. Therefore, we use 3 datasets from different times in this work (Table 1). However, in the malware domain, having a large dataset to train a machine learning (ML) model may not be enough, as maintaining diversity and recency is also crucial (Cao et al., 2020; Downing et al., 2021). We found that models trained without diverse benign samples can have a very high false positive rate (see details in Appendix A.1.3). Despite the importance of diverse benign samples, unfortunately, most prior works (Anderson & Roth, 2018; Downing et al., 2021) could not publish the raw executables of benign files due to copyright and legal restrictions.

For this work, we crawled popular free websites, e.g., SourceForge, CNET, Net, Softonic, etc., to collect a diverse benign dataset of 15.5K files (Table 2), which we name PACE (Publicly Accessible Collection(s) of Executables). We collected the malware from VirusShare at the same time (August 2022) as the benign files. Following common practice and guidelines, we are publishing the URLs along with the MD5 hash of each raw benign file in our dataset (see Appendix A.1 for more details). We hope this will help researchers recreate the dataset easily and experiment with a better representative of real-world settings in the future. (PACE malware samples will also be provided upon request.)

Table 1: Datasets used in this work.

| Dataset Name | Collection Time | Number of Binaries | Public Availability |
|--------------|-----------------|--------------------|---------------------|
| Ember | 2017 | 400K | ✗ |
| VTFeed | 2020 | 139K | ✗ |
| PACE (Ours) | 2022 | 15.5K | ✓ |

Table 2: Sources of benign executables in PACE.

| Source | Number of Binaries |
|--------------|--------------------|
| SourceForge | 7,865 |
| CNET | 3,661 |
| Net | 2,534 |
| Softonic | 1,152 |
| DikeDataset | 1,082 |
| Netwindows | 185 |
| Manually obtained from Windows OS | 89 |
| Total | 15,568 |

We used a MalConv model pre-trained on the Ember (Anderson & Roth, 2018) dataset, provided by Endgame Inc. We then used this model to re-train the MalConv, MalConv (NonNeg), and our DRSM models on both the VTFeed and PACE (our) datasets. (The authors of Lucas et al. (2021) assisted in training models on VTFeed, which we could not have done ourselves since VTFeed is not publicly accessible.) We split our dataset into 70:15:15 ratios for the train, validation, and test sets, respectively. During evaluation, we made sure that test samples came from the latest dataset (PACE) only. For model implementation details, see Appendix A.3.

5 DRSM: DE-RANDOMIZED SMOOTHING ON MALWARE CLASSIFIER

Since the malware detection problem cannot be directly mapped to typical vision problems, we had to redesign the de-randomized smoothing scheme to make it compatible. Unlike images, our input samples are one-dimensional sequences of bytes, which makes the common vision-oriented ablation techniques, e.g., adding noise, masking pixels, block ablations, etc., infeasible. Additionally, even a random byte change in a file may cause a behavior change or prevent the sample from executing.

Figure 2: DRSM (De-Randomized Smoothed MalConv) model framework. Here, the small red block in 'Window Ablation' represents the perturbation by the attacker; hence, the base classifier gives a wrong prediction for that sequence (shown with a red cross).
So, we introduce the 'window ablation' strategy, which segments the input sample into multiple contiguous sequences of equal size. If the input length of the base classifier is $L$ and the size of the ablated window is $w$, then there are $\lceil \frac{L}{w} \rceil$ ablated sequences of length $w$, forming the ablated sequence set $S(x)$. So, even if an attacker generates a byte perturbation of size $p$, it can modify at most $\Delta = \lceil \frac{p}{w} \rceil + 1$ ablated sequences (the +1 accounts for a perturbation straddling a window boundary). Since a perturbation can only influence a limited number of ablated sequences, it cannot single-handedly change the decision of the smoothed classifier, which was our primary motivation for integrating this technique. A visual representation of our strategy is provided in Figure 2.

The goal of the defender is, using $F_\theta$ as the base classifier, to find a de-randomized smoothed model $G_\theta$ that can detect any adversarial malware $x'$ generated using a perturbation $\delta$. $G_\theta$ takes each sequence $s$ from the ablated sequence set $S(x)$ and returns the most frequently predicted class. Specifically, for an input file $x$, ablated sequence set $S(x)$, and base classifier $F_\theta$, the de-randomized smoothed model $G_\theta$ is defined as:

$$G_\theta(x) = \arg\max_c n_c(x), \quad \text{where} \quad n_c(x) = \sum_{s \in S(x)} \mathbb{I}\{F_\theta(s) = c\}$$

denotes the number of ablated sequences predicted as class $c$. The percentage of files correctly classified by the de-randomized smoothed model $G_\theta$ is the 'standard accuracy'. We say the classifier $G_\theta$ is certifiably robust on an ablated sequence set if the number of predictions for the correct class exceeds that for the incorrect one by a 'large margin' (dictated by the byte size of the perturbation). This 'large margin' puts a lower bound on the attacker's ability to alter the predictions of the classifier $G_\theta$, since a perturbation $\delta$ of size $p$ can impact at most $\Delta = \lceil \frac{p}{w} \rceil + 1$ ablated sequences. Mathematically, the de-randomized smoothed model $G_\theta$ is 'certifiably robust' on input $x$ for predicting class $c$ if:

$$n_c(x) > \max_{c' \neq c} n_{c'}(x) + 2\Delta$$

Since ours is a binary classification problem, this can be rewritten as:

$$n_m(x) > n_b(x) + 2\Delta \quad \text{if true-label}(x) = \text{malware}$$
$$n_b(x) > n_m(x) + 2\Delta \quad \text{if true-label}(x) = \text{benign} \tag{1}$$

where $n_m(x)$ and $n_b(x)$ are the numbers of ablated sequences predicted as malware and benign by the de-randomized smoothed model $G_\theta$, respectively. The percentage of files for which inequality (1) holds for $G_\theta$ is the 'certified accuracy'. For simplicity, we use DRSM-n to denote DRSM with the number of ablated sequences $|S(x)| = n$; e.g., DRSM-4 means 4 ablated sequences will be generated from the input $x$.
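As an illustration, a minimal sketch of DRSM inference with window ablation and majority voting could look as follows. Here `base_classifier` is assumed to be a trained MalConv-style model returning a malware probability for an ablated sequence, and ties are counted as 'malware', following the convention used in Section 6.1.

```python
import math

def window_ablate(x: bytes, n: int):
    """Split a (padded/truncated) byte sequence into n contiguous windows."""
    w = math.ceil(len(x) / n)          # window size
    return [x[i * w:(i + 1) * w] for i in range(n)]

def drsm_predict(x: bytes, base_classifier, n: int = 8):
    """Majority vote over per-window predictions; returns (label, votes)."""
    votes = {"malware": 0, "benign": 0}
    for seq in window_ablate(x, n):
        prob = base_classifier(seq)    # P(malicious) for this ablated sequence
        votes["malware" if prob > 0.5 else "benign"] += 1
    label = "malware" if votes["malware"] >= votes["benign"] else "benign"
    return label, votes
```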
6 CERTIFIED ROBUSTNESS EVALUATION

6.1 STANDARD ACCURACY

For evaluation, we compare our DRSM models with MalConv (Raff et al., 2018), which is still one of the state-of-the-art models for static malware detection. Moreover, we consider the non-negative weight-constrained variant of MalConv, which was proposed as a defense against adversarial attacks in prior work (Fleshman et al., 2018). We train and evaluate these models on the same train and test sets (Section 4).

Table 3: Standard and certified accuracy of models. MalConv and MalConv(NonNeg) cannot provide certified accuracy.

| Model | Std. Accuracy Train (%) ↑ | Std. Accuracy Validation (%) ↑ | Std. Accuracy Test (%) ↑ | Certified Accuracy [Δ = 2] (%) ↑ |
|----------------|------------|------------|------------|------------|
| MalConv | 99.73 | 98.87 | 98.61 | n/a |
| MalConv(NonNeg)| 88.56 | 87.56 | 88.36 | n/a |
| DRSM-4 | 99.49 | 98.12 | 98.18 | 12.20 |
| DRSM-8 | 99.67 | 97.88 | 97.79 | 40.85 |
| DRSM-12 | 96.07 | 95.58 | 95.88 | – |
| DRSM-16 | 94.29 | 93.00 | 93.30 | – |
| DRSM-20 | 91.17 | 91.05 | 91.15 | – |
| DRSM-24 | 90.22 | 89.80 | 90.24 | 53.97 |

For DRSM-n, we choose $n \in \{4, 8, 12, 16, 20, 24\}$ in our experiments and show the standard accuracy on the left side of Table 3. Recall that for DRSM-n, a file is correctly classified if the winning class from majority voting matches the true label of that file (Section 5). For ties, we consider 'malware' as the winning class. From Table 3, we can see that DRSM-4 (98.18%) and DRSM-8 (97.79%) achieve accuracy comparable to the MalConv model (98.61%). However, increasing $n$ has a negative impact on standard accuracy. For example, DRSM-20 and DRSM-24 achieve 91.15% and 90.24% standard accuracy, respectively. We investigated and found that with more ablations (smaller windows), the probability of one window containing enough malicious features to make a stable prediction becomes lower. On the other hand, the MalConv (NonNeg) model has a lower accuracy, which is consistent with the results of Fleshman et al. (2018).

6.2 CERTIFIED ACCURACY

Besides standard accuracy, we also evaluate the certified accuracy of the DRSM-n models. Recall that 'certified accuracy' is the percentage of files for which inequality (1) holds for a DRSM-n model. In short, it denotes the lower bound of model performance even when the attacker can perturb bytes in $\Delta$ ablated windows and alter the predictions for all of them. So, we run experiments on the DRSM-n models by varying the $\Delta$ in inequality (1), i.e., the perturbation budget of the attacker. To maintain consistency between standard and certified accuracy, we take 'malware' as the winning class for ties by relaxing the first inequality in (1) to $n_m(x) \geq n_b(x) + 2\Delta$. Notably, $\Delta \in \{2, 3, ..., \frac{n}{2}\}$. The range starts at 2 because any perturbation smaller than the window size can overlap with at most 2 ablated sequences, and goes up to $\frac{n}{2}$ because inequality (1) cannot hold beyond this point.
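A sketch of this certification check is given below, continuing the assumptions of the earlier snippets: given the per-window votes, we verify inequality (1) for a contiguous perturbation of $p$ bytes, with the $\geq$ in the malware case implementing the tie-breaking described above.

```python
import math

def certify(votes: dict, true_label: str, p: int, w: int) -> bool:
    """Check inequality (1): is the majority vote provably stable under a
    contiguous perturbation of p bytes, given window size w = ceil(L / n)?"""
    delta = math.ceil(p / w) + 1       # max number of windows the attacker can touch
    n_m, n_b = votes["malware"], votes["benign"]
    if true_label == "malware":
        return n_m >= n_b + 2 * delta  # >= because ties count as 'malware'
    return n_b > n_m + 2 * delta

# Certified accuracy = fraction of correctly classified files for which
# certify(...) returns True at the given perturbation budget p.
```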
The right side of Table 3 shows the certified accuracy of the DRSM-n models for $\Delta = 2$. In Figure 3, we show the certified accuracy on the test set for each perturbation budget of the attacker (x-axis) (Figure 10 shows the same in terms of $\Delta$). See Tables 7 and 6 in A.4 for more details. We emphasize that even with a small $\Delta = 2$ $(= \lceil \frac{255\text{K}}{w} \rceil + 1)$, an attacker can perturb up to 255K bytes for DRSM-8, and yet the model maintains 40.85% certified accuracy. By analyzing Table 3, we can see that $n$ correlates positively with certified accuracy and negatively with standard accuracy. While DRSM-24 provides the highest certified accuracy (53.97%), it has the lowest standard accuracy (90.24%) among all DRSM-n models. In contrast, DRSM-4 provides the highest standard accuracy (98.18%) with 12.2% certified accuracy. This observation suggests a performance-robustness trade-off. It is worth highlighting that models like DRSM-8 and DRSM-16 strike a balance, delivering robust certified performance alongside commendable standard accuracy, while the prior defense MalConv (NonNeg) achieves a lower standard accuracy of 88.36%. We also want to emphasize that perturbing 200KB in a 2MB file (= 10%) is considered a sizeable modification to a malware file, and yet our DRSM-n models can provide 37%~64% certified accuracy for such perturbations (Figure 3). Remember that this accuracy reports the theoretical lower bound; in practice, our DRSM-n models provide even higher robustness (shown in Section 7).

7 EMPIRICAL ROBUSTNESS EVALUATION

Beyond theoretical robustness, we also evaluate the empirical robustness of our DRSM-n models. Recall from Section 3.2 that in our threat model (as in any de-randomized smoothing scheme), attackers can add or modify bytes in a contiguous portion of a malware file to get it misclassified as benign. However, in real-life settings, attackers can be more capable and can deploy complex attacks that perturb multiple contiguous blocks. In this work, we consider 9 different attacks in both white- and black-box settings and categorize them into 3 types based on their alignment with our threat model. **Fully Aligned:** the attack perturbs bytes in one contiguous block; **Partially Aligned:** the attack perturbs bytes in multiple contiguous blocks; **Not Aligned:** the attack applies code transformations and changes bytes all over the file (not limited to any contiguous block). Table 4 lists the attacks considered in this work along with their type and a short description; attack settings (white-/black-box) and implementation details for individual attacks are given in Appendix A.5.

To evaluate the attacks against the MalConv, MalConv (NonNeg), and DRSM-n models, we randomly sampled 200 malware files from the test set of our dataset that are correctly classified by the model before the attack. Let us call this subset of malware the 'attack set'. We call an attack 'successful' if it can generate a functional adversarial malware file that changes the model's prediction from 'malware' to 'benign'. Even though the majority voting in DRSM-n is not differentiable, it can still be attacked by targeting its base classifiers. Correspondingly, whenever necessary, we generate adversarial malware from the 'attack set' by differentiating through the base classifier. Attack settings (white/black-box) are determined by the attacker's knowledge of the base classifier.

Figure 4 shows the attack success rate (ASR) of different attacks in the white-box setting. We find that most attacks have a much lower ASR on the DRSM-n models than on MalConv. For example, the FGSM append attack has an 82.50% ASR on MalConv but only 10.0% and 7.0% on DRSM-4 and DRSM-8, respectively. Moreover, for $n \geq 16$ in the DRSM-n models, the ASR of all white-box attacks is 1%~5%. We observed the highest ASRs on the MalConv model for the DOS Extension (98.00%) and Disp (89.50%) attacks, while the ASRs on the DRSM-n models were in the ranges 1%~72% and 1%~42%, respectively.

Table 4: Attacks evaluated. ○ = Fully Aligned, ◑ = Partially Aligned, and ● = Not Aligned describe the alignment of the attacks with our primary threat model (see Section 3.2).
| Attack | Threat Model | Short Description |
|-----------------------------------------------------|--------------|-----------------------------------------------------------------------------------|
| FGSM Append (Kreuk et al., 2018) | ○ | Appends bytes generated by FGSM at the end of the file |
| Slack Append (Suciu et al., 2019) | ◑ | Injects bytes generated by FGSM into non-functional slack regions |
| DOS Extension (Demetrio et al., 2021b) | ◑ | Extends the DOS header and injects adversarial noise |
| DOS Modification (Partial) (Demetrio et al., 2019) | ○ | Places adversarial noise between MZ and offset 0x3c in the DOS header |
| DOS Modification (Full) (Demetrio et al., 2021b) | ○ | Modifies every byte in the DOS header without corrupting the file |
| Header Field Modification (Nisi et al., 2021) | ◑ | Modifies fields in the PE header |
| Disp (Lucas et al., 2021) | ● | Displaces code instructions using jmp and semantic nops |
| IPR (Lucas et al., 2021) | ● | Replaces instructions in multiple ways (equivalent replacement, register reassignment, reordering, etc.) without altering functionality |
| GAMMA (Demetrio et al., 2021a) | ◑ | Extracts payloads from benign programs and injects them into malware |

Figure 4: Attack Success Rate (ASR, %) of white-box attacks on all models.

Though the Disp and IPR attacks fall outside our threat model, DRSM-n surprisingly still provides good robustness against them (Figure 4). Here is a potential explanation: the bytes transformed by Disp and IPR at different places get divided among multiple ablated sequences and thus become less impactful, since they must alter multiple predictions rather than one. An interesting observation is that the attacks that modify the header fields have a marginally higher ASR on DRSM-8 than on DRSM-4; potentially, this is because for DRSM-8 the perturbed positions in the header fields happen to cover more windows than in the other cases. The higher ASR of the DOS Extension attack is discussed in Appendix A.4.1.

We also evaluated the models against black-box attacks using genetic optimizers. For example, the GAMMA attack extracts payloads from benign programs and injects them into malware by querying the model. From Figure 5, GAMMA has a 24% ASR on MalConv but only 1%~4% on the DRSM-n models. While these black-box attacks have a lower ASR on MalConv than the white-box ones, the DRSM-n models still outperform it. Interestingly, we found that MalConv(NonNeg) suffers in query-based black-box attacks, which is consistent with several recent works, e.g., the Dropper attack (Wang et al., 2022), the MPass and GAMMA attacks (Wang et al., 2023), and the goodware-string append attack (Ceschin et al., 2019).

8 LIMITATIONS

Though the DRSM framework strikes a good balance between standard accuracy and robustness, it has some limitations. Because of the majority voting, its final classification is naive by nature. In a malware file, some fields or sections may have higher importance than others; for example, the header fields are generally more important for classification than the data section (Demetrio et al., 2019). But due to the majority voting, both may receive the same importance in DRSM. Another limitation is that the 'window ablation' scheme depends solely on the size of the file; no section information is considered. To address this, one would have to disassemble the file first, which would add computational cost.
Moreover, since the padding at the end of the file does not contain any useful information, and the model classifies such padding essentially at random in most cases, the DRSM framework does not take it into consideration. In this work, we did not consider an 'adaptive' attacker, who may try to perturb every window in the file to evade DRSM. However, such an attacker needs to know the size of the ablations and has to find perturbable bytes in each window, which may be challenging but is not infeasible. Since de-randomized smoothing is not directly differentiable, and no state-of-the-art gradient-based attack has been defined for it so far, we had to attack it through its base classifier in this work.

9 CONCLUSION

In this work, we sought a solution for the 'accuracy vs. robustness' double-edged sword in the malware field. We showed that certified defense is also possible in the executable malware domain, hoping that this will open up a new paradigm of research. Besides theory, we equally emphasized the empirical robustness of our proposed DRSM. We would like to conclude by highlighting some areas and future directions our work identifies. Firstly, there is room for improving the standard accuracy of DRSM by introducing an additional classification layer, albeit at the expense of challenging the fundamentally non-differentiable nature of the smoothing scheme. Secondly, recent defenses from vision, besides de-randomized smoothing, hold promise for future exploration. Malware detection is inherently an arms race, and we hope our defense and dataset can facilitate future research in developing more practical defenses.

ACKNOWLEDGMENTS

We are immensely grateful to Keane Lucas for providing the VTFeed dataset and the private implementation of the Disp and IPR (guided) attacks evaluated in this paper. This project was supported in part by a grant from an NSF CAREER AWARD 1942230, ONR YIP award N00014-22-1-2271, ARO's Early Career Program Award 310902-00001, HR00112090132 (DARPA/RED), HR001119S0026 (DARPA/GARD), Army Grant No. W911NF2120076, the NSF award CCF2212458, NSF Award No. 2229885 (NSF Institute for Trustworthy AI in Law and Society, TRAILS), an Amazon Research Award, and an award from Capital One. This research was also partially supported by an Amazon Research Award and by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112190093. Approved for public release; distribution is unlimited.

REFERENCES

Hyrum S Anderson and Phil Roth. Ember: An open dataset for training static PE malware machine learning models. *arXiv preprint arXiv:1804.04637*, 2018.

Federico Barbero, Feargus Pendlebury, Fabio Pierazzi, and Lorenzo Cavallaro. Transcending Transcend: Revisiting malware classification in the presence of concept drift. In *2022 IEEE Symposium on Security and Privacy (SP)*, pp. 805–823. IEEE, 2022.

Michael Cao, Sahar Badihi, Khaled Ahmed, Peiyu Xiong, and Julia Rubin. On benign features in malware detection. In *Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering*, pp. 1234–1238, 2020.

Fabrício Ceschin, Marcus Botacin, Heitor Murilo Gomes, Luiz S Oliveira, and André Grégio. Shallow security: On the creation of adversarial variants to evade machine learning-based malware detectors. In *Proceedings of the 3rd Reversing and Offensive-oriented Trends Symposium*, pp. 1–9, 2019.

Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing.
In *International Conference on Machine Learning*, pp. 1310–1320. PMLR, 2019.

Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, and Alessandro Armando. Explaining vulnerabilities of deep learning to adversarial malware binaries. *arXiv preprint arXiv:1901.03583*, 2019.

Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, and Alessandro Armando. Functionality-preserving black-box optimization of adversarial windows malware. *IEEE Transactions on Information Forensics and Security*, 16:3469–3478, 2021a.

Luca Demetrio, Scott E Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, and Fabio Roli. Adversarial EXEmples: A survey and experimental evaluation of practical attacks on machine learning for windows malware detection. *ACM Transactions on Privacy and Security (TOPS)*, 24(4):1–31, 2021b.

Evan Downing, Yisroel Mirsky, Kyuhong Park, and Wenke Lee. DeepReflect: Discovering malicious functionality through binary reconstruction. In *30th USENIX Security Symposium (USENIX Security 21)*, pp. 3469–3486, 2021.

William Fleshman, Edward Raff, Jared Sylvester, Steven Forsyth, and Mark McLean. Non-negative networks against adversarial attacks. *arXiv preprint arXiv:1806.06108*, 2018.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015. URL http://arxiv.org/abs/1412.6572.

Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, and Benjamin I. P. Rubinstein. RS-Del: Edit distance robustness certificates for sequence classifiers via randomized deletion, 2023.
dl0u4ODCuW
The reviewer might have misunderstood something, but the problem described in Section 4.1 could easily be avoided if the authors used a hash table when implementing the search algorithms. (Using a hash table is a standard technique for proof-number search.) If a hash table is used, a cycle can easily be detected, so it is possible to avoid problems caused by the same molecule or reaction appearing multiple times in a path.
RETRO-FALLBACK: RETROSYNTHETIC PLANNING IN AN UNCERTAIN WORLD

Austin Tripp¹*, Krzysztof Maziarz², Sarah Lewis², Marwin Segler², José Miguel Hernández-Lobato¹
¹University of Cambridge ²Microsoft Research AI4Science
{ajt212,jmh233}@cam.ac.uk
{krmaziar,sarahlewis,marwinsegler}@microsoft.com

ABSTRACT

Retrosynthesis is the task of planning a series of chemical reactions to create a desired molecule from simpler, buyable molecules. While previous works have proposed algorithms to find optimal solutions for a range of metrics (e.g. shortest, lowest-cost), these works generally overlook the fact that we have imperfect knowledge of the space of possible reactions, meaning plans created by algorithms may not work in a laboratory. In this paper we propose a novel formulation of retrosynthesis in terms of stochastic processes to account for this uncertainty. We then propose a novel greedy algorithm called retro-fallback which maximizes the probability that at least one synthesis plan can be executed in the lab. Using *in-silico* benchmarks we demonstrate that retro-fallback generally produces better sets of synthesis plans than the popular MCTS and retro* algorithms.

1 INTRODUCTION

Retrosynthesis (planning the synthesis of organic molecules via a series of chemical reactions) is a common task in chemistry with a long history of automation (Vleduts, 1963; Corey & Wipke, 1969). Although the combinatorially large search space of chemical reactions makes naive brute-force methods ineffective, significant progress has recently been made by developing modern machine-learning based search algorithms for retrosynthesis (Strieth-Kalthoff et al., 2020; Tu et al., 2023; Stanley & Segler, 2023). However, there remain obstacles to translating the output of retrosynthesis algorithms into real-world syntheses. One significant issue is that these algorithms have imperfect knowledge of the space of chemical reactions. Because the underlying physics of chemical reactions cannot be efficiently simulated, retrosynthesis algorithms typically rely on data-driven reaction prediction models which can "hallucinate" unrealistic or otherwise infeasible reactions (Zhong et al., 2023). This results in synthesis plans which cannot actually be executed. Although future advances in modelling may reduce the prevalence of infeasible reactions, we think it is unlikely that they will ever be eliminated entirely, as even the plans of expert chemists do not always work on the first try.

One possible workaround to failing plans is to produce multiple synthesis plans instead of just a single one: the other plans can act as backup plans in case the primary plan fails. Although existing algorithms may find multiple synthesis plans, they are generally not designed to do so, and there is no reason to expect the plans found will be suitable as backup plans (e.g. they may share steps with the primary plan and thereby also share the same failure points).

In this paper, we present several advancements towards retrosynthesis with backup plans. First, in section 3 we explain how uncertainty about whether a synthesis plan will work in the lab can be quantified with stochastic processes. We then propose an evaluation metric called successful synthesis probability (SSP) which quantifies the probability that at least one synthesis plan found by an algorithm will work.
This naturally captures the idea of producing backup plans. Next, in section 4 we present a novel search algorithm called retro-fallback which greedily optimizes SSP. Finally, in section 6 we demonstrate quantitatively that retro-fallback outperforms existing algorithms on several *in-silico* benchmarks. Together, we believe these contributions form a notable advancement towards translating results from retrosynthesis algorithms into the lab.

* Work done partly during an internship at Microsoft Research AI4Science.

2 BACKGROUND: STANDARD FORMULATION OF RETROSYNTHESIS

Let $\mathcal{M}$ denote the space of molecules, and $\mathcal{R}$ denote the space of single-product reactions which transform a set of reactant molecules in $2^\mathcal{M}$ into a product molecule in $\mathcal{M}$. The set of reactions which produce a given molecule is given by a backward reaction model $B : \mathcal{M} \mapsto 2^\mathcal{R}$. $B$ can be used to define an (implicit) reaction graph $\mathcal{G}$ with nodes for each molecule and each reaction, and edges linking molecules to reactions which involve them. Figure 1a illustrates a small example graph. Note that by convention the arrows are drawn backwards (from products towards reactants). This kind of graph is sometimes called an AND/OR graph (see Appendix B for details).

A synthesis plan for a molecule $m$ is a sequence of chemical reactions which produces $m$ as the final product. Synthesis plans usually form trees $T \subseteq \mathcal{G}$ (more generally directed acyclic subgraphs), wherein each molecule is produced by at most one reaction. The set of all synthesis plans in $\mathcal{G}$ which produce a molecule $m$ is denoted $\mathcal{P}_m(\mathcal{G})$. Figure 1b provides an example (see Appendix B.2 for a detailed definition). Not all synthesis plans are equally useful however. Most importantly, for a synthesis plan to actually be executed by a chemist the starting molecules must all be bought. Typically this is formalized as requiring all starting molecules to be contained in an inventory $\mathcal{I} \subseteq \mathcal{M}$ (although we will propose an alternative formulation in section 3). It is also desirable for synthesis plans to have low cost, fewer steps, and reactions which are easier to perform.

In retrosynthesis, one usually seeks to create synthesis plans for a specific target molecule $m_*$. Typically this is formulated as a search problem over $\mathcal{G}$. Various search algorithms have been proposed which, at a high level, all behave similarly. First, they initialize an explicit subgraph $\mathcal{G}' \subseteq \mathcal{G}$ with $\mathcal{G}' \leftarrow \{m_*\}$. Nodes whose children have not been added to $\mathcal{G}'$ form the frontier $\mathcal{F}(\mathcal{G}')$. Then, at each iteration $i$ they select a frontier molecule $m_{(i)} \in \mathcal{F}(\mathcal{G}')$ (necessarily $m_*$ on the first iteration), query $B$ to find reactions which produce $m_{(i)}$, then add these reactions and their corresponding reactant molecules to the explicit graph $\mathcal{G}'$. This process is called expansion, and is illustrated for $m_c$ in Figure 1a. Search continues until a suitable synthesis plan is found or until the computational budget is exhausted. Afterwards, synthesis plans can be enumerated from $\mathcal{G}'$.

Figure 1: a) Graph $\mathcal{G}'$ with (backward) reactions $m_* \Rightarrow m_a + m_b$ ($r_1$), $m_* \Rightarrow m_b + m_c + m_d$ ($r_2$), and $m_a \Rightarrow m_e$ ($r_3$). The dashed box illustrates the expansion of $m_c$. b) All synthesis plans in $\mathcal{P}_{m_*}(\mathcal{G}')$.
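The generic search loop described above can be summarized in a short sketch. This is an illustrative skeleton only; the names `backward_model`, `select_frontier_node`, and the `reactants` attribute are placeholder assumptions, and real implementations differ mainly in how the frontier node is chosen.

```python
def retrosynthesis_search(target, backward_model, select_frontier_node, budget):
    """Generic heuristic-guided retrosynthesis search over an implicit AND/OR graph."""
    graph = {target: None}        # explicit subgraph G'; target initially unexpanded
    frontier = {target}           # molecules whose children are not yet in G'
    for _ in range(budget):
        if not frontier:
            break
        mol = select_frontier_node(frontier, graph)   # heuristic-guided choice
        frontier.remove(mol)
        reactions = backward_model(mol)               # query B for reactions making mol
        graph[mol] = reactions
        for rxn in reactions:
            for reactant in rxn.reactants:
                if reactant not in graph:             # new molecule joins the frontier
                    graph[reactant] = None
                    frontier.add(reactant)
    return graph   # synthesis plans can afterwards be enumerated from G'
```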
The most popular retrosynthesis search algorithms compute some sort of metric of synthesis plan quality, and use a search heuristic to guide the search towards high-quality synthesis plans. For example, Monte Carlo Tree Search (MCTS) searches for synthesis plans which maximize an arbitrary scalar reward function (Segler et al., 2018). Retro* is a best-first search algorithm to find minimum-cost synthesis plans, where the cost of a synthesis plan is defined as the sum of costs for each reaction and each starting molecule (Chen et al., 2020). In both algorithms, frontier nodes are chosen using the heuristic to estimate the reward (or cost) which could be achieved upon expansion. We introduce these algorithms more extensively in Appendix E.

3 REFORMULATING RETROSYNTHESIS WITH UNCERTAINTY

The "standard" formulation of retrosynthesis presented in section 2 requires knowledge of which reactions are possible (encoded by the backward reaction model $B$) and which molecules are purchasable (encoded by the inventory $\mathcal{I}$). In reality, neither of these things is perfectly known. As mentioned in the introduction, predicting the outcome of chemical reactions is difficult even for experts, and machine learning models for $B$ can "hallucinate" unrealistic reactions. Perhaps surprisingly, it is also not totally clear which molecules can be bought. Things like shipping delays mean you might not always receive molecules which you order. However, many companies now advertise large "virtual libraries" with billions of molecules which they believe they can synthesize upon request, but not with 100% reliability.¹ This section presents our first main contribution to account for this: a novel formulation of retrosynthesis which explicitly represents uncertainty.

3.1 STOCHASTIC PROCESSES FOR "FEASIBILITY" AND "BUYABILITY"

There are many reasons why chemists may consider a reaction unsuccessful, ranging from having a low yield to producing the wrong product altogether. Similarly, "unsuccessfully" buying a molecule could indicate anything from a prohibitively high cost to the molecule not being delivered. In either case, for simplicity we propose to collapse this nuance into a binary outcome: reactions are either feasible or infeasible, and molecules are either buyable or not. We therefore postulate the existence of an unknown "feasibility" function $f^* : \mathcal{R} \mapsto \{0, 1\}$ and "buyability" function $b^* : \mathcal{M} \mapsto \{0, 1\}$.

Uncertainty about $f^*$ and $b^*$ can be represented by stochastic processes (essentially distributions over functions). We define a feasibility model $\xi_f$ to be a binary stochastic process over $\mathcal{R}$, and define a buyability model $\xi_b$ to be a binary stochastic process over $\mathcal{M}$. This formulation is very general: $\xi_f$ and $\xi_b$ not only represent beliefs of $\mathbb{P}[f^*(r) = 1]$ and $\mathbb{P}[b^*(m) = 1]$ for all molecules $m$ and reactions $r$, but also allow correlations between feasibilities and buyabilities to be modelled. Although this formalism may seem esoteric, it is possible to re-cast almost all existing approaches to reaction prediction as stochastic processes. Any model which implicitly assigns a probability to each reaction (e.g. the softmax outputs of a neural network) can be trivially converted into a stochastic process by assuming that all outcomes are independent.
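For instance, a minimal sketch of such an independent feasibility model is given below; the reaction-to-probability map is a stand-in for any single-step model's confidence scores, and the class name is our own.

```python
import numpy as np

class IndependentFeasibilityModel:
    """Feasibility model xi_f with independent Bernoulli outcomes per reaction.

    `marginal_prob` maps a reaction to an estimate of P[f*(r) = 1], e.g. the
    softmax score of a single-step reaction prediction model."""

    def __init__(self, marginal_prob, seed=0):
        self.marginal_prob = marginal_prob
        self.rng = np.random.default_rng(seed)

    def sample(self, reactions):
        """Draw one function f ~ xi_f, restricted to the given reactions."""
        return {r: int(self.rng.random() < self.marginal_prob(r)) for r in reactions}
```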
Correlations can be induced via Bayesian inference over the model's parameters (MacKay, 1992) or using a non-parametric model like a Gaussian process (Williams & Rasmussen, 2006). Importantly however, it is not at all clear how to produce realistic models $\xi_f$ and $\xi_b$. Intuitively, producing such models is at least as challenging as predicting reaction outcomes without uncertainty estimates, which is itself an active (and challenging) research area. Therefore, we will generally discuss $\xi_f/\xi_b$ in a model-agnostic way.

3.2 NEW EVALUATION METRIC: SUCCESSFUL SYNTHESIS PROBABILITY (SSP)

Given $f$ and $b$, a synthesis plan $T$ is successful if all its reactions $r$ are feasible ($f(r) = 1$) and all its starting molecules $m$ are buyable ($b(m) = 1$). We formalize this with the function

$$\sigma(T; f, b) = \begin{cases} 1 & f(r) = 1 \;\forall r \in T \text{ and } b(m) = 1 \;\forall m \in \mathcal{F}(T) \\ 0 & \text{otherwise} \end{cases}. \quad (1)$$

Finding successful synthesis plans is a natural goal of retrosynthesis. Of course, because $f$ and $b$ are unknown, we can at best search for synthesis plans with a high probability of being successful. Given a set of synthesis plans $\mathcal{T}$, we define the successful synthesis probability (SSP) as:

$$\text{SSP}(\mathcal{T}; \xi_f, \xi_b) = \mathbb{P}_{f \sim \xi_f, b \sim \xi_b} [\exists T \in \mathcal{T} \text{ with } \sigma(T; f, b) = 1]. \quad (2)$$

Given just a single plan $T$, $\text{SSP}(\{T\}; \xi_f, \xi_b) = \mathbb{E}_{f,b} [\sigma(T; f, b)]$ represents the probability that $T$ is successful, which we will hereafter refer to as the success probability of $T$. When $\mathcal{T}$ contains multiple synthesis plans, SSP quantifies the probability that any of these synthesis plans is successful.

We argue that SSP is a good evaluation metric for the synthesis plans produced by retrosynthesis search algorithms. It simultaneously captures the goals of producing synthesis plans with high success probability and producing "backup" plans which could succeed if the primary synthesis plan does not. Note that by definition, SSP is non-decreasing with respect to $\mathcal{T}$, implying that an algorithm will never be penalized for producing additional synthesis plans.

¹For example, Enamine, a popular supplier, only claims that 80% of its virtual "REAL" library can be made.

3.3 EFFICIENTLY ESTIMATING SSP FOR ALL SYNTHESIS PLANS IN $\mathcal{P}_{m_*}(\mathcal{G}')$

Recall from section 2 that many retrosynthesis search algorithms do not directly output synthesis plans: they produce a search graph $\mathcal{G}'$ which (implicitly) contains a set of synthesis plans $\mathcal{P}_{m_*}(\mathcal{G}')$. Therefore, it is natural to calculate the SSP of the entire set $\mathcal{P}_{m_*}(\mathcal{G}')$. However, this set may be combinatorially large, making calculating SSP by enumerating $\mathcal{P}_{m_*}(\mathcal{G}')$ intractable. Instead, we propose a method to estimate SSP using functions sampled from $\xi_f$ and $\xi_b$.

Let $s(\cdot\,; \mathcal{G}', f, b) : \mathcal{M} \cup \mathcal{R} \mapsto \{0, 1\}$ define the success of a node $n \in \mathcal{G}'$: whether $\mathcal{G}'$ contains a successful synthesis plan for $n$ (we write $s(n)$ when $\mathcal{G}', f, b$ are clear from context). $s(n)$ will satisfy

$$s(n; \mathcal{G}', f, b) \overset{(A)}{=} \sigma(T^*; f, b) \overset{(B)}{=} s(n; T^*, f, b), \quad T^* \in \arg\max_{T \in \mathcal{P}_n(\mathcal{G}')} \sigma(T; f, b), \quad (3)$$

where $\mathcal{P}_*(\mathcal{G}') = \bigcup_{m \in \mathcal{G}'} \mathcal{P}_m(\mathcal{G}')$ denotes the set of all synthesis plans for all molecules in $\mathcal{G}'$.
Equality (A) follows directly from the definition above, and equality (B) holds because $T^*$ would still satisfy the arg max if nodes not in $T^*$ were pruned from $\mathcal{G}'$.

Let $Ch_{\mathcal{G}'}(n)$ denote the children of node $n$. For a reaction $r \in \mathcal{G}'$ to succeed, it must be feasible ($f(r) = 1$) and have all its reactant molecules $m' \in Ch_{\mathcal{G}'}(r)$ succeed. Conversely, a molecule $m \in \mathcal{G}'$ will succeed if it is buyable ($b(m) = 1$) or if any reaction producing $m$ succeeds. This suggests $s(\cdot)$ will satisfy the recursive equations

$$s(m; \mathcal{G}', f, b) = \max \left[ b(m), \max_{r \in Ch_{\mathcal{G}'}(m)} s(r; \mathcal{G}', f, b) \right], \quad (4)$$

$$s(r; \mathcal{G}', f, b) = f(r) \prod_{m \in Ch_{\mathcal{G}'}(r)} s(m; \mathcal{G}', f, b). \quad (5)$$

SSP can then be estimated by averaging $s(m_*)$ over $k$ i.i.d. functions sampled from $\xi_f$ and $\xi_b$:

$$\text{SSP}(\mathcal{P}_{m_*}(\mathcal{G}'); \xi_f, \xi_b) \overset{(A)}{=} \mathbb{P}_{f \sim \xi_f, b \sim \xi_b}[s(m_*; \mathcal{G}', f, b) = 1] \approx \frac{1}{k} \sum_{i=1}^{k} s(m_*; \mathcal{G}', f_i, b_i). \quad (6)$$

Note that equality (A) above follows directly from equations 2 and 3. The existence of such recursive equations suggests that $s(\cdot)$ could be efficiently computed for all nodes in $\mathcal{G}'$ in polynomial time using dynamic programming (we discuss this further in Appendix D.2), allowing an overall polynomial-time estimate of SSP. That being said, it is still only an estimate. Unfortunately, we are able to prove that an exact calculation is generally intractable.

**Theorem 3.1.** Unless $P = NP$, there does not exist an algorithm to compute $\text{SSP}(\mathcal{P}_{m_*}(\mathcal{G}'); \xi_f, \xi_b)$ for arbitrary $\xi_f, \xi_b$ whose time complexity grows polynomially with the number of nodes in $\mathcal{G}'$.

The proof is given in Appendix D.1. We therefore conclude that estimating SSP using equation 6 is the best realistic option given limited computational resources.
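To illustrate equations 4–6, here is a small sketch of the Monte Carlo SSP estimate for a tree-shaped search graph (for general graphs a memoized dynamic-programming traversal would be needed, as noted above). The node attributes are illustrative assumptions, not the authors' code.

```python
import math

def success(node, f, b):
    """Compute s(node; G', f, b) via the recursion in equations 4-5 (tree case)."""
    if node.is_molecule:
        # Equation 4: a molecule succeeds if buyable or if any child reaction succeeds.
        return max([b[node]] + [success(r, f, b) for r in node.children])
    # Equation 5: a reaction succeeds iff it is feasible and all reactants succeed.
    return f[node] * math.prod(success(m, f, b) for m in node.children)

def estimate_ssp(target, f_samples, b_samples):
    """Equation 6: average s(m_*; G', f_i, b_i) over k paired samples."""
    return sum(success(target, f, b)
               for f, b in zip(f_samples, b_samples)) / len(f_samples)
```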
4 RETRO-FALLBACK: A GREEDY ALGORITHM TO MAXIMIZE SSP

4.1 INGREDIENTS FOR AN INFORMED, GREEDY SEARCH ALGORITHM

Intuitively, a greedy search algorithm would expand molecules in $\mathcal{F}(\mathcal{G}')$ which are predicted to improve SSP. Given that calculating SSP exactly is intractable, calculating potential changes is likely to be intractable as well. Therefore, we will estimate SSP changes by averaging over samples from $\xi_f$ and $\xi_b$, and will consider how expansion might change $s(m_*; \mathcal{G}', f, b)$ for fixed samples $f, b$.

Specifically, we consider the effect of simultaneously expanding every frontier molecule on a fixed synthesis plan $T \in \mathcal{P}_{m_*}(\mathcal{G}')$.² We represent the hypothetical effect of such an expansion with a random function $e_T : \mathcal{M} \mapsto \{0, 1\}$, where $e_T(m) = 1$ implies that expanding $m$ produces a new successful synthesis plan for $m$. We assume the value of $e_T$ is independently distributed for every molecule, with probabilities given by a search heuristic function $h : \mathcal{M} \mapsto [0, 1]$:

$$\mathbb{P}_{e_T}[e_T(m) = 1] = \begin{cases} h(m) & m \in \mathcal{F}(\mathcal{G}') \cap T \\ 0 & m \notin \mathcal{F}(\mathcal{G}') \cap T \end{cases}. \quad (7)$$

²We do not consider expanding just a single node because, for a reaction with multiple non-buyable reactant molecules in $\mathcal{F}(\mathcal{G}')$, expanding just one reactant will never produce a new successful synthesis plan.

The effect of this expansion on the success of $T$ is given by $\sigma' : \mathcal{P}_*(\mathcal{G}') \mapsto \{0, 1\}$, defined as

$$\sigma'(T; f, b, e_T) = \begin{cases} 1 & f(r) = 1 \;\forall r \in T \text{ and } (b(m) = 1 \text{ or } e_T(m) = 1) \;\forall m \in \mathcal{F}(T) \\ 0 & \text{otherwise} \end{cases}. \quad (8)$$

Equation 8 for $\sigma'$ is almost identical to equation 1 for $\sigma$. The key difference is that $T$ can now be successful if a starting molecule $m$ is not buyable ($b(m) = 0$) but has instead had $e_T(m) = 1$. Recalling that $e_T$ is a random function, we define $\tilde{\sigma}' : \mathcal{P}_*(\mathcal{G}') \mapsto [0, 1]$ as

$$\tilde{\sigma}'(T; f, b, h) = \mathbb{E}_{e_T}[\sigma'(T; f, b, e_T)], \quad (9)$$

namely the probability that a synthesis plan $T$ will be successful upon expansion.³ A natural choice for a greedy algorithm could be to expand frontier nodes on synthesis plans $T$ with high $\tilde{\sigma}'(T; f, b, h)$. However, not all synthesis plans contain frontier nodes (e.g. plan $T_1$ in Figure 1b) or produce $m_*$. To select frontier nodes for expansion, we define the function $\tilde{\rho} : \mathcal{M} \cup \mathcal{R} \mapsto [0, 1]$ by

$$\tilde{\rho}(n; \mathcal{G}', f, b, h) = \max_{T \in \mathcal{P}_{m_*}(\mathcal{G}') : n \in T} \tilde{\sigma}'(T; f, b, h), \quad n \in \mathcal{G}'. \quad (10)$$

For $m \in \mathcal{F}(\mathcal{G}')$, $\tilde{\rho}(m)$ represents the highest estimated success probability of all synthesis plans for $m_*$ which also contain $m$ (conditioned on a particular $f, b$). Therefore, a greedy algorithm could sensibly expand frontier molecules $m$ with maximal $\tilde{\rho}(m)$.

Unfortunately, the combinatorially large number of synthesis plans in a graph $\mathcal{G}'$ makes evaluating $\tilde{\rho}$ potentially infeasible. To circumvent this, we assume that no synthesis plan in $\mathcal{G}'$ uses the same molecule in two separate reactions, making all synthesis plans trees (we will revisit this assumption later). This assumption guarantees that the outcomes from different branches of a synthesis plan will always be independent. Then, to help efficiently compute $\tilde{\rho}$, we will define the function

$$\tilde{\psi}(n; \mathcal{G}', f, b, h) = \max_{T \in \mathcal{P}_*(\mathcal{G}') : n \in T} \tilde{\sigma}'(T; f, b, h) \quad (11)$$

for every node $n \in \mathcal{G}'$. $\tilde{\psi}$ is essentially a less constrained version of $\tilde{\rho}$. The key difference in their definitions is that $\tilde{\psi}$ maximizes over all synthesis plans containing $n$, including plans which do not produce $m_*$. The independence assumption above means that $\tilde{\psi}$ has a recursively-defined analytic solution $\psi(\cdot\,; \mathcal{G}', f, b, h) : \mathcal{M} \cup \mathcal{R} \mapsto [0, 1]$ given by the equations

$$\psi(m; \mathcal{G}', f, b, h) = \begin{cases} \max[b(m), h(m)] & m \in \mathcal{F}(\mathcal{G}') \\ \max\left[b(m), \max_{r \in Ch_{\mathcal{G}'}(m)} \psi(r; \mathcal{G}', f, b, h)\right] & m \notin \mathcal{F}(\mathcal{G}') \end{cases}, \quad (12)$$

$$\psi(r; \mathcal{G}', f, b, h) = f(r) \prod_{m \in Ch_{\mathcal{G}'}(r)} \psi(m; \mathcal{G}', f, b, h). \quad (13)$$

Details of this solution are presented in Appendix C.1. $\tilde{\psi}(n)$ can be roughly interpreted as "the best expected success value for $n$ upon expansion." In fact, the relationship between $\psi$ and $\tilde{\sigma}'$ is exactly analogous to the relationship between $s$ and $\sigma$ in equation 3.

To compute $\tilde{\rho}$, first note that $\tilde{\rho}(m_*) = \tilde{\psi}(m_*)$, as for $m_*$ the constraints in equations 10 and 11 are equivalent.
Second, because of the independence assumption above, the best synthesis plan containing both a node $n$ and its parent $n'$ can be created by taking an optimal synthesis plan for $n'$ (which may or may not contain $n$), removing the part "below" $n'$, and adding in an (unconstrained) optimal plan for $n$. Letting $Pa_{\mathcal{G}'}(\cdot)$ denote a node's parents,⁴ under this assumption $\tilde{\rho}$ has a recursively-defined analytic solution $\rho(\cdot\,; \mathcal{G}', f, b, h) : \mathcal{M} \cup \mathcal{R} \mapsto [0, 1]$ defined as

$$\rho(m; \mathcal{G}', f, b, h) = \begin{cases} \psi(m; \mathcal{G}', f, b, h) & m \text{ is the target molecule } m_* \\ \max_{r \in Pa_{\mathcal{G}'}(m)} \rho(r; \mathcal{G}', f, b, h) & \text{all other } m \end{cases}, \quad (14)$$

$$\rho(r; \mathcal{G}', f, b, h) = \begin{cases} 0 & \psi(r; \mathcal{G}', f, b, h) = 0 \\ \rho(m'; \mathcal{G}', f, b, h) \dfrac{\psi(r; \mathcal{G}', f, b, h)}{\psi(m'; \mathcal{G}', f, b, h)} & \psi(r; \mathcal{G}', f, b, h) > 0, \; m' \in Pa_{\mathcal{G}'}(r) \end{cases}. \quad (15)$$

³The dependence on $h$ is because it defines the distribution of $e_T$ in equation 7.
⁴Recall that because we consider only single-product reactions, all reaction nodes will have exactly one parent, making equation 15 well-defined.

Details of this solution are presented in Appendix C.1. Like $s(\cdot)$, $\psi$ and $\rho$ have recursive definitions, and can therefore be calculated with dynamic programming techniques. Since $\psi$ depends on a node's children, it can generally be calculated "bottom-up", while $\rho$ can be calculated "top-down" because it depends on a node's parents. We discuss details of computing $\psi$ and $\rho$ in Appendix C.1, and provide a fully worked-through example in Appendix C.2.

However, in deriving $\psi$ and $\rho$ we assumed that all synthesis plans $T \in \mathcal{P}_*(\mathcal{G}')$ were trees. In practice, this assumption may not hold (see Figure C.1 for an example). If this assumption is violated, $\psi$ and $\rho$ can both still be calculated, but will effectively double-count molecules which occur multiple times in a synthesis plan, and therefore not equal $\tilde{\psi}$ and $\tilde{\rho}$. This is a well-known issue in AND/OR graphs: for example, Nilsson (1982, page 102) describes essentially the same issue when calculating minimum-cost synthesis plans. Ultimately we will simply accept this and use $\psi/\rho$ instead of $\tilde{\psi}/\tilde{\rho}$ despite their less principled interpretation, chiefly because the recursive definitions of $\psi/\rho$ are amenable to efficient computation. Synthesis plans which use the same molecule twice are unusual in chemistry; therefore we do not expect this substitution to be problematic in practice.
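As an illustration of equations 12–15, a minimal sketch of the bottom-up $\psi$ pass and top-down $\rho$ pass over a tree-shaped graph might look as follows; the node attributes (`children`, `parents`, `is_molecule`) are assumptions for illustration only.

```python
import math

def compute_psi(node, f, b, h, frontier):
    """Bottom-up pass (equations 12-13); stores psi values on the nodes."""
    if node.is_molecule:
        if node in frontier:                      # frontier molecule: eq 12, first case
            node.psi = max(b[node], h(node))
        else:                                     # eq 12, second case
            for r in node.children:
                compute_psi(r, f, b, h, frontier)
            node.psi = max([b[node]] + [r.psi for r in node.children])
    else:                                         # reaction node: eq 13
        for m in node.children:
            compute_psi(m, f, b, h, frontier)
        node.psi = f[node] * math.prod(m.psi for m in node.children)

def compute_rho(node, target):
    """Top-down pass (equations 14-15); assumes psi has already been computed."""
    if node is target:
        node.rho = node.psi                       # eq 14, first case
    elif node.is_molecule:                        # eq 14: best rho among parent reactions
        node.rho = max(r.rho for r in node.parents)
    else:                                         # reaction node: eq 15 (single parent m')
        parent = node.parents[0]
        node.rho = 0.0 if node.psi == 0 else parent.rho * node.psi / parent.psi
    for child in node.children:
        compute_rho(child, target)
```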
4.2 RETRO-FALLBACK: A FULL GREEDY ALGORITHM

Recall our original goal at the start of section 4.1: to estimate how expansion might affect SSP. We considered a single sample $f \sim \xi_f$ and $b \sim \xi_b$, and developed the function $\rho$, which for each frontier molecule $m \in \mathcal{F}(\mathcal{G}')$ gives the success probability of the best estimated synthesis plan for $m_*$ if $m$ is expanded (simultaneously along with other frontier molecules on an optimally chosen synthesis plan). We will now use $\rho$ to construct a full algorithm.

Expanding a frontier molecule can improve SSP if, for samples $f$ and $b$ where $s(m_*; \mathcal{G}', f, b) = 0$, the expansion changes this value to 1. In this scenario, expanding a frontier molecule $m^* \in \arg\max_{m \in \mathcal{F}(\mathcal{G}')} \rho(m; \mathcal{G}', f, b, h)$ is a prudent choice, as it lies on a synthesis plan with the highest probability of "flipping" $s(m_*; \mathcal{G}', f, b)$ to 1. In contrast, because $s(\cdot)$ will never decrease as nodes are added, if $s(m_*; \mathcal{G}', f, b) = 1$ then it does not matter which molecule is expanded. Therefore, when aggregating over samples of $f$ and $b$ to decide which molecules to expand to improve SSP, we will consider the value of $\rho$ only in cases where $s(m_*; \mathcal{G}', f, b) = 0$.

For our greedy algorithm, we propose to simply expand the molecule with the highest expected improvement of SSP. Letting $\mathbb{1}_{(\cdot)}$ be the indicator function, this is a molecule $m \in \mathcal{F}(\mathcal{G}')$ which maximizes

$$\alpha(m; \mathcal{G}', \xi_f, \xi_b, h) = \mathbb{E}_{f \sim \xi_f, b \sim \xi_b} \left[ \mathbb{1}_{s(m_*; \mathcal{G}', f, b) = 0} \, \rho(m; \mathcal{G}', f, b, h) \right]. \quad (16)$$

In practice, $\alpha$ would be estimated from a finite number of samples from $\xi_f$ and $\xi_b$. Using $\rho$ to select a single molecule may seem odd, especially because $\rho$ is defined as a hypothetical outcome of simultaneously expanding multiple nodes. However, note that in principle there is nothing problematic about expanding these nodes one at a time.

We call our entire algorithm retro-fallback (from "retrosynthesis with fallback plans") and state it explicitly in Algorithm 1. After initializing $\mathcal{G}'$, the algorithm performs $L$ iterations of expansion (although this termination condition could be changed as needed). In each iteration, the values of $s$, $\psi$, and $\rho$ are first computed for each sample of $f$ and $b$.⁵ Next, the algorithm checks whether there are no frontier nodes or whether the estimated SSP is 100%, and if so terminates (both of these conditions mean no further improvement is possible). Finally, a frontier node maximizing $\alpha$ (equation 16) is selected and expanded. Of course, a practical implementation of retro-fallback may look slightly different from Algorithm 1. We refer the reader to Appendix C for further discussion about the design and implementation of retro-fallback.

Algorithm 1 Retro-fallback algorithm (see §4.2)

Require: target molecule $m_*$, max iterations $L$, backward reaction model $B$, search heuristic $h$
Require: samples $f_1, \ldots, f_k \sim \xi_f$ and $b_1, \ldots, b_k \sim \xi_b$
1: $\mathcal{G}' \leftarrow \{m_*\}$
2: for $i$ in $1, \ldots, L$ do
3:   for $j$ in $1, \ldots, k$ do
4:     Compute $s(\cdot\,; \mathcal{G}', f_j, b_j)$ for all nodes using equations 4–5
5:     Compute $\psi(\cdot\,; \mathcal{G}', f_j, b_j, h)$ for all nodes using equations 12–13
6:     Compute $\rho(\cdot\,; \mathcal{G}', f_j, b_j, h)$ for all nodes using equations 14–15
7:   end for
8:   Terminate early if $|\mathcal{F}(\mathcal{G}')| = 0$ OR $s(m_*; \mathcal{G}', f_j, b_j) = 1 \;\forall j$
9:   $m_{(i)} \leftarrow \arg\max_{m \in \mathcal{F}(\mathcal{G}')} \alpha(m; \mathcal{G}', \xi_f, \xi_b, h)$ (equation 16, breaking ties arbitrarily)
10:  Add all reactions and molecules from $B(m_{(i)})$ to $\mathcal{G}'$
11: end for
12: return $\mathcal{G}'$

⁵This order is chosen because $s$ depends only on $f$ and $b$, $\psi$ depends on $s$, and $\rho$ depends on $\psi$. Because the optimal algorithm to compute $s, \psi, \rho$ may depend on $\mathcal{G}'$, we only specify this computation generically.
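A sketch of the selection step (line 9 of Algorithm 1) is shown below. It assumes the per-sample $s(m_*)$ and $\rho$ values were stored by passes like those sketched earlier; the record structure is hypothetical.

```python
def alpha(m, samples):
    """Estimate equation 16 for frontier molecule m from k samples (f_j, b_j).

    Each record in `samples` holds s(m_*; G', f_j, b_j) as `s_target`
    and a mapping `rho` from frontier molecules to rho(m; G', f_j, b_j, h)."""
    total = 0.0
    for rec in samples:
        if rec.s_target == 0:          # only samples where m_* is not yet successful
            total += rec.rho[m]
    return total / len(samples)

def select_expansion(frontier, samples):
    """Line 9 of Algorithm 1: greedily pick the frontier molecule maximizing alpha."""
    return max(frontier, key=lambda m: alpha(m, samples))
```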
5 RELATED WORK

Retro-fallback is most comparable with other retrosynthesis search algorithms including MCTS (Segler et al., 2018), retro* (Chen et al., 2020), and proof number search (Heifets & Jurisica, 2012; Kishimoto et al., 2019). At a high level these algorithms are all similar: they use a heuristic to guide the construction of an explicit search graph. However, previous algorithms may struggle to maximize SSP because their internal objectives consider only individual synthesis plans, while SSP depends on multiple synthesis plans simultaneously. In Appendix E.2, we argue that for most algorithms the best proxy for SSP is the success probability of individual synthesis plans, but illustrate in Appendix E.3 that this objective does not always align with SSP. In contrast, retro-fallback is specifically designed to maximize SSP.

Mechanistically, retro-fallback most closely resembles retro* (Chen et al., 2020), which is a variant of the older AO* algorithm (Chang & Slagle, 1971; Martelli & Montanari, 1978; Nilsson, 1982; Mahanti & Bagchi, 1985). Both retro* and retro-fallback perform a bottom-up and a top-down update to determine the value of each potential action, then select actions greedily. In fact, retro-fallback's updates have a cost-minimization interpretation, presented in Appendix C.1.4. The key difference between the algorithms is the node selection step: retro* considers just a single cost for each node, while retro-fallback aggregates over a vector of samples to directly optimize SSP.

Lastly, we briefly comment on several research topics which are only tangentially related (deferring fuller coverage to Appendix F). Works proposing search heuristics for retrosynthesis search algorithms (F.1) complement rather than compete with our work: such heuristics could also be applied to retro-fallback. Generative models to produce synthesis plans (F.2) effectively also function as heuristics. Methods to predict individual chemical reactions are sometimes also referred to as "retrosynthesis models" (F.3), but solve a different problem than multi-step planning. Finally, other works have considered planning in stochastic graphs more generally (F.5), but typically in a scenario where the agent is embedded in the graph.

6 EXPERIMENTS

In this section we evaluate retro-fallback experimentally. The key question we seek to answer is whether retro-fallback does indeed maximize SSP more effectively than existing algorithms. We present additional results and explain the details of the experimental setup in Appendix G.

6.1 EXPERIMENT SETUP

We have based our experiment design on the USPTO benchmark from Chen et al. (2020), which has been widely used to evaluate multi-step retrosynthesis algorithms. However, because this benchmark does not include a feasibility or buyability model, we have made some adaptations to make it suitable for our problem setting.
Importantly, because we do not know what the “best” feasibility model is, we instead test multiple feasibility models in the hope that the conclusions of our experiments could potentially generalize to future, more advanced feasibility models. We summarize the setup below and refer the reader to Appendix G.1 for further details.

We base all of our feasibility models on the pre-trained template classifier from Chen et al. (2020) restricted to the top-50 templates. We vary our feasibility model across two axes: the marginal feasibility of each reaction, and whether feasibility outcomes are independent (“ind.”) or correlated via a Gaussian process.

Figure 2: Mean SSP across all 190 test molecules vs. time using the SA score heuristic. 3 trials are done for each molecule. Solid lines are sample means (averaged across molecules), and error bars represent standard errors. “ind.” means “independent”.

We compare retro-fallback to breadth-first search (an uninformed search algorithm) and the heuristic-guided algorithms retro* (Chen et al., 2020) and MCTS (Segler et al., 2018; Genheden et al., 2020; Coley et al., 2019b). All algorithms were implemented using the SYNTHESEUS library (Maziarz et al., 2023) and run with a fixed budget of calls to $B$. MCTS and retro* were configured to maximize SSP by replacing costs or rewards from the backward reaction model $B$ with quantities derived from $\xi_f$ and $\xi_b$ (see Appendices E.2 and G.1.5 for details).

However, the presence of heuristics makes comparing algorithms difficult. Because the choice of heuristic will strongly influence an algorithm’s behaviour, we tried to use similar heuristics for all algorithms to ensure a meaningful comparison. Specifically, we tested an optimistic heuristic (which gives the best possible value for each frontier node) and a heuristic based on the synthetic accessibility (SA) score (Ertl & Schuffenhauer, 2009), which has been shown to be a good heuristic for retrosynthesis in practice despite its simplicity (Skoraczynski et al., 2023). The SA score heuristic was minimally adapted for each algorithm to roughly have the same interpretation (see Appendix G.1.6 for details).

We tested all algorithms on the set of 190 “hard” molecules from Chen et al. (2020), which do not have straightforward synthesis plans. Our primary evaluation metric is the SSP value estimated with $k = 10,000$ samples, averaged over all test molecules.

6.2 How effective is retro-fallback at maximizing SSP?

Figure 2 plots the average SSP for all test molecules as a function of the number of calls to the reaction model $B$ using the SA score heuristic. Retro-fallback clearly outperforms the other algorithms in all scenarios by a significant margin. The difference is particularly large for the feasibility models with no correlations between reactions (“ind.”). We suspect this is because the reaction model $B$ tends to output many similar reactions, which can be used to form backup plans when feasibility outcomes are independent. Retro-fallback will naturally be steered towards these plans. However, when GP-induced correlations are introduced, these backup plans disappear (or become less effective), since similar reactions will likely both be feasible or both be infeasible. The same trends are visible when using the optimistic heuristic (Figure G.4) and on a test set of easier molecules (Figure G.5). Overall, this result shows us what we expect: that retro-fallback maximizes the metric it was specifically designed to maximize more effectively than baseline algorithms.
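To make the “GP-induced correlations” concrete, here is one plausible construction of a correlated feasibility sampler, sketched in Python. The paper specifies its exact feasibility models only in its Appendix G.1, so the RBF kernel over reaction feature vectors and the thresholding scheme below are illustrative assumptions, not the paper's definition.

```python
import numpy as np
from scipy.stats import norm

def sample_feasibility(reaction_feats, marginal_p, n_samples, length_scale=1.0, rng=None):
    """Draw binary feasibility samples with GP-induced correlations.

    Draw a zero-mean GP with an RBF kernel over reaction feature vectors,
    then threshold each latent so the marginal feasibility of every reaction
    is (approximately, up to the jitter term) `marginal_p`.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(reaction_feats, dtype=float)           # (n_reactions, d)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq / (2 * length_scale**2))               # unit-diagonal RBF kernel
    K += 1e-6 * np.eye(len(X))                            # jitter for numerical stability
    Z = rng.multivariate_normal(np.zeros(len(X)), K, size=n_samples)
    # Similar reactions get similar latents, so they tend to be feasible
    # (or infeasible) together, removing the "backup plans" discussed above.
    return Z <= norm.ppf(marginal_p)                      # (n_samples, n_reactions) bool
```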
We investigate the origin of these performance differences in Appendix G.2.1 by plotting SSP over time for a small selection of molecules (repeated over several trials). It appears that, rather than retro-fallback being consistently a little bit better, the performance gap is driven by a larger difference on a small number of molecules. This is actually not surprising: the advantage of different approaches will vary depending on the graph, and for some graphs finding individual feasible plans is probably a promising strategy.

A natural follow-up question is whether retro-fallback also performs well by metrics other than SSP. In Figures G.8–G.10 we plot the highest success probability of any individual synthesis plan found, plus two metrics frequently used by previous papers: the fraction of molecules with any synthesis plan (called “fraction solved” in prior works) and the length of the shortest synthesis plan found (a proxy for quality). The SSP of the single best plan is generally similar for all algorithms. This suggests that in general all algorithms find similar “best” plans, and retro-fallback’s extra success comes from finding more effective “backup” plans. Retro-fallback seems slightly better than other algorithms in terms of fraction solved and similar to other algorithms in terms of shortest plan length (although retro* is better in some cases). Finally, Appendix G.2.3 shows that retro-fallback is able to find synthesis plans which use the same starting molecules as real-world syntheses: a metric proposed by Liu et al. (2023b). Overall, these results suggest that retro-fallback is also an effective search algorithm when judged by metrics from past papers which do not account for uncertainty.

6.3 Speed and Variability of Retro-fallback

First we consider the speed of retro-fallback. Retro-fallback requires calculating $s$, $\psi$, and $\rho$ for every node at every iteration. The cost of this calculation could scale linearly with the number of nodes in the graph (which we denote $|G'|$), or potentially sub-linearly if the $s/\psi/\rho$ values of many nodes do not change every iteration. Therefore, from this step we would expect a time complexity which is between linear and quadratic in $|G'|$. However, retro-fallback also requires sampling $f$ and $b$ for all nodes created during an expansion: a process which will scale as $O(1)$ for independent models and $O(|G'|^2)$ for GP-correlated models. This yields an overall $O(|G'|)$–$O(|G'|^3)$ complexity from the sampling step. Figure G.12 plots the empirical scaling for the experiments from the previous section, and suggests an overall scaling between $O(|G'|^{1.1})$ and $O(|G'|^{1.8})$, with considerable variation between different feasibility models and heuristics.

To study the effect of the number of samples $k$ from $\xi_f$ and $\xi_b$, we run retro-fallback 10 times on a sub-sample of 25 molecules with a variety of different sample sizes. Figure G.13 shows that as $k$ decreases, the mean SSP value achieved by retro-fallback decreases and the variance of SSP increases. This is not surprising: when the number of samples is small, the internal estimates of SSP used by retro-fallback deviate more from their expected values, leading to suboptimal decisions. Empirically, $k > 100$ seems sufficient (minimal further improvement is seen for higher $k$).
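The empirical scaling exponents quoted above can be estimated with an ordinary least-squares fit in log-log space. A minimal sketch with hypothetical measurements (the actual graph sizes and wall-clock times are in the paper's Figure G.12, not reproduced here):

```python
import numpy as np

# Hypothetical measurements: final graph sizes |G'| and wall-clock times (seconds)
graph_sizes = np.array([250, 500, 1000, 2000, 4000])
runtimes = np.array([0.8, 2.1, 5.5, 14.0, 38.0])

# Fit runtime ~ c * |G'|^a, i.e. log t = a * log n + log c
a, log_c = np.polyfit(np.log(graph_sizes), np.log(runtimes), deg=1)
print(f"empirical scaling exponent a = {a:.2f}")  # the paper reports ~1.1-1.8
```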
7 Discussion, Limitations, and Future Work In this paper we reformulated retrosynthesis using stochastic processes, presented a novel evaluation metric called “successful synthesis probability” (SSP), and proposed a novel algorithm called retro-fallback which greedily maximizes SSP. In our experiments, retro-fallback was more effective at maximizing SSP than previously-proposed algorithms. Our work has some important limitations. Conceptually, chemists may also care about the length or quality of synthesis plans, and may only be willing to consider a limited number of backup plans. These considerations do not fit into our formalism. Practically, retro-fallback is slower than other algorithms and may not scale as well. We discuss these limitations further in Appendix H. The most important direction for future work is creating better models of reaction feasibility, as without high-quality models the estimates of SSP are not meaningful. We see collaborations with domain experts as the best route to achieve this. Since retro-fallback uses a search heuristic, learning this heuristic using the results of past searches (“self-play”) would likely improve performance. We elaborate on other potential directions for future work in Appendix I. Overall, even though retro-fallback is far from perfect, we believe that modelling uncertainty about reaction outcomes is at least a step in the right direction, and hope it inspires further work in this area. ETHICS Our work is foundational algorithm development and we do not see any direct ethical implications. The most likely use case for our algorithm is to automate the production of synthesis plans in drug discovery, which we hope can aid the development of new medicines. We acknowledge the possibility that such algorithms could be used by bad actors to develop harmful chemicals, but do not see this as a probable outcome: countless harmful chemicals already exist and can be readily obtained. It is therefore hard to imagine why bad actors would expend significant effort to develop new harmful chemicals with complicated syntheses. REPRODUCIBILITY We aim for a high standard of reproducibility in this work. We explicitly state our proposed algorithm in the paper (Algorithm 1) and dedicate Appendix C to discussing its minor (but still important) details, including guidance for future implementations (C.5). Proofs of all theorems are given in Appendix D. The experimental setup is described in more detail in Appendix G (including hyperparameters, etc). Code to reproduce all experiments is available at: https://github.com/AustinT/retro-fallback-iclr24 Our code was thoroughly tested with unit tests and builds on libraries which are widely-used, minimizing the chance that our results are corrupted by software errors. We include the results generated by our code in json format, and also include code to read the results and reproduce the plots from the paper. The inclusion of raw data will freely allow future researchers to perform alternative analyses. Note that this paper will be kept updated at https://arxiv.org/abs/2310.09270 AUTHOR CONTRIBUTIONS The original idea of SSP was proposed by Sarah and jointly developed by Sarah, Austin, Krzysztof, and Marwin. Sarah and Austin jointly developed an initial version of retro-fallback for AND/OR trees. Sarah originally proposed an algorithm using samples in a different context. Austin adapted these two algorithms to yield the version of retro-fallback proposed in this paper. Krzysztof proposed and proved Theorem 3.1. 
Writing was done collaboratively but mostly by Austin. All code was written by Austin with helpful code review from Krzysztof. Marwin and José Miguel advised the project. Marwin in particular provided helpful feedback about MCTS and about estimating the feasibility of chemical reactions from the model. José Miguel provided extensive feedback on the algorithm details and the clarity of writing.

ACKNOWLEDGMENTS

Thanks to Katie Collins for proofreading the manuscript and providing helpful feedback. Austin Tripp acknowledges funding via a C T Taylor Cambridge International Scholarship and the Canadian Centennial Scholarship Fund. José Miguel Hernández-Lobato acknowledges support from a Turing AI Fellowship under grant EP/V023756/1. Austin is grateful for the affordable meals (with generous portion sizes) from Queens’ College Cambridge which greatly expedited the creation of this manuscript.

REFERENCES

John Bradshaw, Brooks Paige, Matt J Kusner, Marwin Segler, and José Miguel Hernández-Lobato. A model to search for synthesizable molecules. Advances in Neural Information Processing Systems, 32, 2019.

---
6 Note that because all algorithms in the paper use randomness, re-running the code is unlikely to reproduce our exact results.
7 Because we include the exact data, the reproduction of the plots will be exact. We were inspired to include this by the thought-provoking paper of Burnell et al. (2023).
RIbH5ekQpr
- The MPL2D metric measures the mean Euclidean distance between image and caption embeddings, meant to capture the semantic diversity that justifies the aforementioned diversity claims. There may be some concerns regarding this formulation, and I would like to give the authors a chance to correct any possible misconceptions on my end:
  - If these are the embeddings used in the CLIP contrastive cosine-distance loss, then it should be specified whether the embeddings are normalized before the inner product is taken.
IMP: Benchmarking Image Polysemy in Vision-Language Models Anonymous authors Paper under double-blind review Abstract Current vision-language models predominantly use contrastive losses to learn from the co-occurrence of image and text. While effective for certain tasks, this approach assumes semantic equivalence between these two modalities. This assumption runs counter to the diverse meanings that a single image can convey, which in turn may compromise visual understanding. To investigate the impact of this assumption, we introduce a novel dataset: IMP, designed to challenge and evaluate vision-language models on image polysemy. Our empirical results reveal that current models fall short in recognizing the multiple semantic dimensions of images, underscoring the need for more robust approaches for learning vision-language representations. Code and data will be made available on publication. 1 Introduction Vision-language models (VLM) have made great strides in recent years by leveraging image caption datasets (Radford et al., 2021; Singh et al., 2022; Li et al., 2023; Changpinyo et al., 2021). The use of captions is highly promising, as the language that accompanies images is an incredibly rich source of supervision; it may, for example, describe both the objects and the relations between them (Lin et al., 2015). In this sense, it is clear that captions are a more natural means of describing images than the typical annotation schemes used for large-scale image datasets. Moreover, approaches for learning from image-text pairs have achieved impressive results with a relatively simple contrastive mechanism that pushes matching pairs together and mismatching pairs apart (Radford et al., 2021; Kim et al., 2021; Li et al., 2021). While effective, this mechanism relies on the strong assumption that the caption text is descriptive of the image, which may not be the case for naturally occurring images and their accompanying text. Multimodality is a well-researched principle (Van Leeuwen, 2015), establishing the intricacies of how modalities interact; noting that even when expressing the “same” meaning, modalities may do so differently due to the affordances of each modality. Critical in this observation is the notion of sameness: for curated datasets like MSCOCO (Lin et al., 2015) the annotation process was designed such that the captions are descriptive of the image; however, for naturally occurring image-text pairs it cannot be assumed that this is equally the case. Nonetheless, existing approaches assume that co-occurrence equates to semantic sameness and aim to learn from large collections of web-scraped image-text pairs (Lu et al., 2019; Radford et al., 2021; Changpinyo et al., 2021). Images may convey different meanings, i.e., they may be polysemic, and the meaning they have in communication depends on how they are used (Kress & Van Leeuwen, 2006). This use is partially established by the text that accompanies an image, therefore each pairing of an image with multiple captions may convey a different meaning - the captions anchor the image to a meaning. Crucially, during this anchoring process, the text does not have to be descriptive of the image, their pairing may be purely associative; by for instance conveying a similar emotion or alluding to a similar abstract concept. 
Establishing this principle of image polysemy is useful when we consider the prevalence of contrastive learning in vision-and-language (Radford et al., 2021; Li et al., 2021), as these models are optimised by pulling together matching image-text pairs and pushing apart mismatching pairs, thereby inadvertently pulling together all captions paired with an image, as well as the captions for neighbouring images (Song & Soleymani, 2019). Existing approaches for this issue treat the lack of consistency between captions as noise (Santurkar et al., 2022); instead we argue that to make full use of the richness of image-text pairings it is necessary to account for image polysemy. Figure 1: Example images from IMP (top row) and MSCOCO (bottom row). Whilst the datasets contain similar images, the captions for IMP are both descriptive and conceptual, whereas the MSCOCO captions are purely descriptive. We introduce a novel IMage Polysemy benchmark, IMP, that challenges the prevailing assumption in vision-and-language models. Unlike traditional datasets that focus on descriptive captions, IMP includes diverse captions that range from descriptive to conceptual, thereby embracing the polysemic nature of images. This not only allows for a more nuanced understanding of image-text relationships but also serves as a rigorous benchmark for evaluating the adaptability and robustness of existing models to variations in caption semantics. By doing so, we address a gap in existing vision-and-language research, encouraging the community to move beyond purely descriptive paradigms and explore the rich, multifaceted interplay between visual and textual modalities. Our contributions are as follows: • A dataset of images with diverse captions, from descriptive to conceptual, to highlight the polysemic nature of images. • A large-scale evaluation of existing VLM, exposing the limitations of existing learning paradigms for dealing with image polysemy. 2 RELATED WORK 2.1 POLYSEMY The phenomenon of polysemy, where a single form can have multiple meanings can be found for both textual and visual data (Chen et al., 2015; Yao et al., 2018; Saenko & Darrell, 2008). Understanding polysemy is crucial for VLM to achieve robust and nuanced representations across different modalities. Within natural language processing (NLP), polysemy has been studied as Word Sense Disambiguation (WSD) - aimed at resolving the ambiguity of words across contexts (Navigli, 2009). With the rise of large-scale self-supervised pre-training, there have been significant improvements in WSD. In particular, the switch to contextual word embeddings has resulted in large improvements in disambiguation accuracy (Scarlini et al., 2020). As such, in contextualised models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) the ability to handle word polysemy is inherent; which has greatly contributed to the success of these models. Polysemy in vision is a multimodal problem, as image context may present itself across various modalities. This requires that VLM understand the complex interplay between the visual and its contextual data. Interaction between modalities is particularly relevant in vision-language tasks, which often aim to generate or match textual descriptions of visual content (Baltrušaitis et al., 2017). Traditionally, works in computer vision have focused on categorisation tasks, such as object detection and classification, which were designed to be unambiguous, thereby overlooking the polysemous nature of images (Forsyth & Ponce, 2002). 
Central to many computer vision, and vision-language, approaches is training on human-annotated datasets like MSCOCO (Lin et al., 2015), which requires extensive manual effort. Moreover, to reduce effort and increase annotator agreement, such datasets were designed around straightforward and unambiguous tasks. As a consequence, models trained on human-annotated datasets are effective for object-based descriptions, but they struggle to capture the nuance across multiple interpretations that emerge from differences in context. Large-scale vision-language pre-training has many of the same ingredients that enabled NLP to make great improvements in dealing with word polysemy: self-supervised learning, web-scale datasets, and context (Radford et al., 2021; Kim et al., 2021; Li et al., 2021). However, a major difference between NLP and vision-language approaches is the prevalence of contrastive learning in vision-language (Radford et al., 2021; Li et al., 2021; Yu et al., 2022). The underlying assumption for contrastive vision-language learning is that the text and image express the same meaning, and can therefore be projected to the same point in latent space (Song & Soleymani, 2019). In Santurkar et al. (2022), it is shown this assumption may inhibit training when presented with captions which are not descriptive or have high variability. Santurkar et al. (2022) propose a solution that aims to reduce variability, instead we pose that this variability may simply be due to different meanings conveyed by the image. As illustrated in Figure 6, various captions may be valid for an image whilst conveying different meanings - discarding these reduces the richness from which we can learn. ### 2.2 Vision-Language Representation Learning Prior research within vision-language representation learning can be broadly classified into two categories: earlier approaches that rely on task-specific fine-tuning of unimodal models, and more recent works that explicitly perform cross-modal training. VLM in the first category often leverage representations from pre-trained unimodal models, such as convolutional neural networks (Krizhevsky et al., 2017) trained on ImageNet (Russakovsky et al., 2015) or long short-term memory (Hochreiter & Schmidhuber, 1997) trained on extensive text corpora (Karpathy & Fei-Fei, 2015; Agrawal et al., 2016; Anderson et al., 2018). These models tackle task-specific challenges using supervision derived from loss functions tailored to specific datasets, such as triplet loss for image-text retrieval on MSCOCO (Lin et al., 2015). While effective for these tasks, these models often struggle with generalization to different tasks (Karpathy & Fei-Fei, 2015; Agrawal et al., 2016). More recently, the second category has gained momentum with VLM that focuses on training from large-scale datasets such as Conceptual Captions (CC3M and CC12M) (Sharma et al., 2018; Changpinyo et al., 2021), and LAION 400M and 5B (Schuhmann et al., 2021, 2022). Contrastive learning is central to this large-scale training, as it aims to optimize the similarity between matching pairs and minimize it for mismatching pairs, thereby addressing key challenges in vision-language representation learning. An observation concerning these contrastive approaches, also made by Song & Soleymani (2019), is that forcing multiple meanings to a single point can have negative influence on learning as it artificially compresses the embedding space, and reduces nuance between meanings. 
On datasets like MSCOCO (Lin et al., 2015) and Flickr30k (Plummer et al., 2016), this has a limited impact as their captions are highly descriptive of the image, but this does not extend to real-world image-text pairs, which may convey diverse meanings. As existing benchmarks are inadequate for evaluating this, it necessitates the development of datasets that address image polysemy.

A related problem to polysemy, as focused on by Song & Soleymani (2019), is that of partial matching between image and text, as in multi-view embedding (Ren et al., 2015; Li et al., 2022c). The proposed solution by Song & Soleymani (2019) focuses on learning multiple local representations and matching these to the paired text, thereby primarily addressing this partial matching problem. Instead, we argue that polysemy may occur even when considering the same local or global views, and as such a multi-view approach does not sufficiently address this. To demonstrate the limitations of existing learning paradigms and to draw attention to the notion of image polysemy, we propose a benchmark to evaluate VLM across diverse captions.

Several datasets have been proposed to test models beyond conventional tasks. For instance, the Hateful Memes challenge (Kiela et al., 2021) aims to test models on detecting hateful content from the interaction between image and text. Similarly, Theisen et al. (2020) study memes with the aim of automatically discovering political meme genres. Whilst polysemy is found in memes, in general the text in memes is intended to be complementary to the image, thereby not fitting the frame of a caption. More related to our focus, Akula et al. (2023) propose a set of vision tasks on visual metaphor understanding, which require understanding of both the image and the text. Improving the ability of VLM to deal with polysemy may also aid in visual metaphor understanding; however, their proposed dataset has a strong focus on objects, which may obscure proper assessment of polysemy.

3 IMP: A Benchmark of Image Polysemy

Figure 2: IMP data curation pipeline. CLIP is used as image encoder to select visually similar images from existing datasets and gather their captions; the Google Vision API is used to search web entities for each image, return the websites containing an identical image, and collect these website titles; candidate captions are then cleaned, annotated, and finally five captions are selected automatically with maximal diversity for each image.

We introduce a novel benchmark for evaluating image polysemy, IMP-5k, that pairs images with five captions that may range from descriptive to conceptual. The pipeline for constructing the dataset is shown in Figure 2. The images utilized for this dataset were curated from Unsplash\footnote{https://unsplash.com/}, a platform renowned for its high-quality stock photography. In addition, candidate captions were gathered from two sources: existing datasets and through web curation. Captions from existing datasets were collected from visually similar images, which resulted in a set of captions (if considered relevant during annotation) that was more conceptual. CLIP-ViT-G/14 (Ilharco et al., 2021) was used as the image encoder to embed each candidate image from Unsplash and the images from CC3M and CC12M. Image-to-image cosine similarity was then used to retrieve the top 3,000 visually similar images for each candidate image, keeping only images whose similarity exceeds a pre-defined threshold (0.9 in our setting).
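As a rough illustration of this retrieval step, a minimal sketch assuming pre-computed CLIP image embeddings that are already L2-normalised (so a dot product equals cosine similarity); the array layout is hypothetical:

```python
import numpy as np

def top_similar(candidate_emb, corpus_embs, k=3000, threshold=0.9):
    """Return indices of corpus images visually similar to one candidate.

    candidate_emb: (d,) L2-normalised CLIP embedding of the Unsplash image
    corpus_embs:   (n, d) L2-normalised embeddings of CC3M/CC12M images
    """
    sims = corpus_embs @ candidate_emb            # cosine similarities, shape (n,)
    top_k = np.argsort(-sims)[:k]                 # top-k most similar images
    return [int(i) for i in top_k if sims[i] >= threshold]  # apply the 0.9 cutoff
```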
Captions from retrieved images were used as candidate captions, and we computed text-to-text similarity as well as token overlap to filter out almost identical captions. Through web curation we collected captions from websites containing identical versions of the candidate image, which allowed us to incorporate diverse real-world interpretations of the image. To ensure the quality of the gathered captions, they were subsequently checked in a cleaning and annotation process, which filtered out captions of poor quality or those which were not considered relevant to the image. From the remaining captions, we automatically selected five captions for each image by optimising for diversity across the captions. More details on the cleaning, annotation, and auto-selection can be found in Appendix A.

Table 1 shows the statistics of IMP compared to existing pre-training and fine-tuning scale datasets. IMP-5k has 25k captions selected from more than 400k valid captions with maximal diversity through clustering. The terms IMP and IMP-5k will be used interchangeably in later sections. For fine-tuning, we split a set of 18k images into a 17k training set (noisyIMP-17k) with 10 captions and a 1k validation set (noisyIMP-1k) with 5 captions. Captions in noisyIMP-17k were selected based on the same rule as IMP-5k, and severe outliers (meaningless captions) were excluded by an additional human check. The number of image samples in noisyIMP-17k is comparable to Flickr30k (Plummer et al., 2016), whereas the size of the test split is the same as the MSCOCO test set (Lin et al., 2015). The noisyIMP-1k set is used for hyperparameter tuning and model selection. In terms of average caption length IMP is comparable to other fine-tuning datasets, while it has a higher standard deviation as IMP contains captions from existing pre-training datasets.

Table 1: Statistics of IMP compared to existing pre-training (CC3M, CC12M, SBU, RedCaps(20)) and fine-tuning (Flickr8k, Flickr30k, MSCOCO) datasets with official or commonly used train-test splits. MPL2D is the mean L2 distance between images and their paired captions in the embedding space, averaged over the whole dataset.

| Statistics | CC3M | CC12M | SBU | RedCaps(20) | Flickr8k | Flickr30k | MSCOCO | IMP (ours) |
|---|---|---|---|---|---|---|---|---|
| Unique images | 2.8M | 11.2M | 849k | 3.1M | 8k | 29k / 1k | 113k / 5k | 5k |
| Caption(s) per image | 1 | 1 | 1 | 1 | 5 | 5 | 5 | 5 |
| Avg caption length | 11.73 ± 4.21 | 19.37 ± 15.25 | 12.06 ± 5.27 | 12.14 ± 10.63 | 11.78 ± 3.89 | 11.63 ± 3.24 | 10.78 ± 3.01 | 11.13 ± 5.13 |
| Web curation | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ |
| Human annotation | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ |
| MPL2D | 1.416 ± 0.317 | 1.393 ± 0.370 | 1.263 ± 0.026 | 1.312 ± 0.109 | 1.163 ± 0.021 | 1.167 ± 0.021 | 1.170 ± 0.022 | 1.221 ± 0.031 |

To measure the diversity among the captions, we compute the mean paired L2 distance (MPL2D) across the whole dataset. This score considers the image embedding as the cluster centroid, and denotes the text embeddings (from all paired captions) as points in the cluster. To reduce the bias from a model’s pre-training dataset, the embeddings are obtained with CLIP-ViT-B/32 (Radford et al., 2021) with four different checkpoints, and ALIGN (Jia et al., 2021). MPL2D is computed for IMP, the test sets of the fine-tuning datasets, and the whole pre-training datasets; for RedCaps (Desai et al., 2021) we evaluate on the year 2020 split.
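A minimal numpy sketch of the MPL2D computation (array shapes are assumed). Note that if the embeddings are L2-normalised, as is typical for CLIP-style models, then ‖u − v‖ = √(2 − 2 cos(u, v)), so MPL2D becomes a monotone transform of the mean cosine distance; whether normalisation is applied is an implementation detail the definition above leaves implicit.

```python
import numpy as np

def mpl2d(image_embs, caption_embs):
    """Mean paired L2 distance (MPL2D).

    image_embs:   (N, d)    one embedding per image (the cluster "centroid")
    caption_embs: (N, C, d) C paired caption embeddings per image
    Returns mean and standard deviation over all image-caption pairs.
    """
    diffs = caption_embs - image_embs[:, None, :]   # (N, C, d)
    dists = np.linalg.norm(diffs, axis=-1)          # (N, C) L2 distances
    return float(dists.mean()), float(dists.std())
```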
From the obtained MPL2D scores we observe that IMP benefits from the two ways of data collection combined with manual curation, as IMP has a larger MPL2D than MSCOCO and Flickr30k, which both have multiple captions for each image. Pre-training datasets can have larger MPL2D scores and standard deviations; this is mainly due to noisy captions, and the fact that each image is paired with only one caption. It is worth mentioning that SBUcaptions (Ordonez et al., 2011) has a lower MPL2D and a much lower standard deviation than CC3M and CC12M. The main reason for this is that SBUcaptions uses Flickr queries and filters out noisy results, so most captions are descriptive. RedCaps (Desai et al., 2021) is a web-curated dataset where the image-text pairs come from different subreddits (tags, group spaces). RedCaps can naturally group visually unrelated images through a common semantic meaning, namely the subreddit, and also allows images which share the same objects to have different captions.

For further qualitative comparison, we show four examples each from IMP and MSCOCO in Figure 1, demonstrating the visual similarity between the datasets. However, when comparing the captions between these datasets we see that the captions for IMP are more diverse, incorporating highly conceptual captions such as “No care in the world” as well as descriptive captions such as “Steam from a morning cup of tea or coffee”. In contrast, MSCOCO features only descriptive captions, which focus strongly on the main object(s). By moving beyond pure description we enable richer exploration of captions, allowing IMP to serve as a comprehensive benchmark to evaluate the ability of VLM on image polysemy.

4 EXPERIMENTS

In this section, we present the results of two cross-modal experiments (zero-shot and fine-tuned) to evaluate the performance of VLM on IMP, and provide both quantitative and qualitative analysis. As cross-modal retrieval aims to retrieve the most relevant image given a text query, or vice versa, it allows for direct testing of the alignment between the two modalities, thus measuring the potential for contrastive learning to deal with image polysemy.

Models. We categorize the VLM evaluated on IMP into three groups based on their architecture (Awais et al., 2023): dual-encoder, fusion, and other. Dual-encoder models are composed of separate encoders for image and text, and the loss is computed between the outputs of these two encoders. We evaluate the following dual-encoder VLM: CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021), AltCLIP (Chen et al., 2022), ConvNeXt-CLIP (Liu et al., 2022), and ALBEF (Li et al., 2021). Fusion models have a module that combines the image and text features, in addition to the two encoders, which allows for richer pre-training tasks. We evaluate the following fusion VLM: BLIP (Li et al., 2022b), FLAVA (Singh et al., 2022), and CoCa (Yu et al., 2022). Additionally, we evaluate the following other VLM: EVA-02 with encoder-decoder (Fang et al., 2023); BLIP2 (Li et al., 2023), which uses a frozen LLM; ImageBind (Girdhar et al., 2023), which uses multiple modalities of paired data; and SetEmbedding (Kim et al., 2022), which uses slot attention (Locatello et al., 2020) for multi-view cross-modal retrieval. Because of the unavailability of pre-training checkpoints and/or implementations, other state-of-the-art (SOTA) models such as Florence (Yuan et al., 2021) and FILIP (Yao et al., 2021) are not included.
All checkpoints are either obtained from their official repositories (Radford et al., 2021; Ilharco et al., 2021; Wightman, 2019; Li et al., 2022a) or the HuggingFace model hub.

Metrics. For evaluation, we use the Recall@K (R@K) metric, which is the percentage of queries that have at least one relevant item in the top-K retrieved items. We also report the RSUM following Chen et al. (2021) and Kim et al. (2022), which is the sum of Recall@K for K = 1, 5, 10 from both image-to-text and text-to-image retrieval tasks. Further results on Median Rank (MedR) and Mean Rank (MeanR) can be found in Appendix C.

4.1 ZERO-SHOT EVALUATION

We report the results of zero-shot evaluation of the VLM on IMP in Table 2. Since most models in the table have a visual backbone the same size as or larger than ViT-L/14, we pick CLIP with ViT-L/14 pre-trained on CLIP400M (Radford et al., 2021) as the baseline for comparison. Overall, we observe that incorporating additional losses (such as CoCa with a captioning loss) and tasks (such as FLAVA with multiple pre-training tasks) benefits the performance of the VLM; additionally, model size and pre-training data are factors as well. EVA-02-L/14 achieves the best RSUM score and the best image-to-text performance out of all models. When comparing the two retrieval tasks, we consistently see that models perform much better on image-to-text than on text-to-image, which can be explained by the VLM doing well on matching images to descriptive captions, but struggling when matching conceptual captions to images.

From the highlighted results we observe that models with a higher RSUM almost always have higher individual recall scores; one exception is ImageBind, which has a lower image-to-text recall but a higher text-to-image recall than EVA-02-L/14. Surprisingly, BLIP2-g, using the larger ViT-g/14 image encoder, is outperformed by BLIP2-ViT-L with the ViT-L/14. Moreover, BLIP2-g-COCO (BLIP2 with ViT-g/14 from EVA-CLIP (Fang et al., 2023), trained on MSCOCO) performs better on image-to-text, but worse on text-to-image. These observations further highlight the importance of model size and training data. To study their effects, we compare the results of selected CLIP variants in Table 3, with more results in Appendix C.

When comparing model size, we find that VLMs with greater model size achieve better RSUM scores. This aligns with observations of CLIP on other datasets (Radford et al., 2021), demonstrating that this asymptotic nature also applies to the image polysemy setting. However, we do observe an interaction here with the training data. For instance, in the DataComp1B (Gadre et al., 2023) setting, model performance drops significantly more when the model size decreases, indicating that smaller models trained on DataComp1B have lower generalization ability on IMP than the same models trained on LAION2B.

Table 2: Recall@K (R@K) scores for zero-shot cross-modal retrieval on IMP. Evaluation results on both the 1K test setting (average of the 5-fold test dataset) and the 5K test setting are presented. The best results within each recall column are highlighted with bold text. The best results within each group of models are highlighted with underline.
| Method | I2T R@1 | I2T R@5 | I2T R@10 | T2I R@1 | T2I R@5 | T2I R@10 |
|---|---|---|---|---|---|---|
| CLIP-L/14 | 24.4 | 51.8 | 64.5 | 15.1 | 36.9 | 48.9 |
| AltCLIP | 27.5 | 55.3 | 68.2 | 16.8 | 39.5 | 51.7 |
| ALIGN | 28.2 | 55.3 | 68.3 | 16.7 | 40.0 | 51.7 |
| ConvNeXt-CLIP | 29.8 | 58.2 | 71.0 | 18.3 | 42.0 | 53.4 |
| ALBEF | 21.3 | 46.8 | 59.2 | 5.8 | 15.4 | 26.4 |
| BLIP | 23.1 | 49.5 | 62.1 | 15.4 | 36.8 | 48.4 |
| FLAVA | 26.5 | 53.8 | 66.4 | 16.7 | 38.8 | 50.6 |
| CoCa | 28.5 | 57.8 | 70.6 | 16.7 | 39.9 | 51.7 |
| BLIP2-g | 23.6 | 51.6 | 65.4 | 15.2 | 38.2 | 50.7 |
| BLIP2-g-COCO | 24.4 | 51.2 | 64.4 | 14.9 | 36.6 | 48.5 |
| BLIP2-ViT-L | 28.4 | 57.5 | 69.5 | 17.2 | 40.8 | 52.8 |
| ImageBind | 29.1 | 57.4 | 70.5 | 18.8 | 42.8 | 54.4 |
| EVA-02-L/14 | 31.1 | 59.5 | 71.8 | 18.5 | 41.6 | 53.2 |

Across datasets, Laion400M (Schuhmann et al., 2021) has the same dataset size as CLIP400M (Radford et al., 2021), yet the same model trained on Laion400M performs better on IMP; this may hint at greater diversity in Laion400M. As we can observe from comparing Laion400M to Laion2B, the English-language subset of LAION5B (Schuhmann et al., 2022), we obtain better performance on IMP with the same model trained on larger datasets. Moreover, due to its dataset size, Laion2B allows training of larger models; CLIP with ViT-g/14, ViT-G/14, and ViT-H/14 (Zhai et al., 2022) thus achieves higher performance than other, smaller models. The performance gap between CLIP-ViT-L/14 trained on DataComp1B and Laion2B is subtle.

Table 3: Recall@K (R@K) scores for zero-shot cross-modal retrieval on IMP using CLIP across different ViT sizes and pre-training datasets. Evaluation results on both the 1K test setting (average of the 5-fold test dataset) and the 5K test setting are presented.

| Method | #Params | I2T R@1 | I2T R@5 | I2T R@10 | T2I R@1 | T2I R@5 |
|---|---|---|---|---|---|---|
| *CLIP400M* | | | | | | |
| RN50 | 102M | 25.1 | 52.6 | 65.6 | 14.9 | 36.8 |
| RN101 | 120M | 24.7 | 53.1 | 65.7 | 15.0 | 36.4 |
| B/32 | 150M | 23.5 | 51.7 | 64.5 | 15.1 | 37.2 |
| B/16 | 150M | 25.1 | 52.7 | 66.1 | 15.6 | 37.7 |
| L/14 | 428M | 24.4 | 51.8 | 64.5 | 15.1 | 36.9 |
| L/14-336 | 428M | 26.5 | 53.3 | 66.5 | 15.7 | 38.0 |
| *Laion400M* | | | | | | |
| B/32 | 150M | 26.7 | 55.1 | 68.3 | 15.8 | 38.5 |
| B/16 | 150M | 28.0 | 56.0 | 70.1 | 16.9 | 39.7 |
| L/14 | 428M | 28.7 | 57.4 | 70.5 | 17.7 | 40.8 |
| *DataComp1B* | | | | | | |
| B/32 | 150M | 15.4 | 37.1 | 50.3 | 9.1 | 25.1 |
| B/16 | 150M | 27.2 | 55.9 | 69.0 | 16.0 | 38.4 |
| L/14 | 428M | 29.6 | 57.9 | 71.0 | 18.5 | 41.9 |
| *LAION2B* | | | | | | |
| B/32 | 150M | 29.0 | 59.0 | 71.5 | 17.5 | 40.7 |
| B/16 | 150M | 28.6 | 58.2 | 70.8 | 17.9 | 41.3 |
| L/14 | 428M | 29.6 | 58.0 | 70.4 | 18.4 | 42.7 |
| H/14 | 986M | 29.2 | 57.4 | 70.6 | 18.7 | 42.7 |
| g/14 | 1.36B | 30.7 | 59.4 | 72.7 | 20.0 | 44.1 |
| G/14 | 2.53B | 28.1 | 57.4 | 69.6 | 18.9 | 42.7 |

Despite the general behavior of performance scaling with both dataset size and model size, we find some exceptions. For instance, consider models trained on LAION2B: CLIP with ViT-L/14 performs better than models with the larger ViT-H/14 and ViT-G/14 image encoders. Similarly, ViT-g/14, which is the half-precision version of ViT-G/14, has the highest performance across all models.
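For reference, the evaluation metrics used throughout this section can be computed from a query-by-target similarity matrix. A minimal sketch (which assumes, for simplicity, a single ground-truth target per query, whereas IMP actually pairs each image with five captions):

```python
import numpy as np

def recall_at_k(sim, ks=(1, 5, 10)):
    """Recall@K from an (n_queries, n_targets) similarity matrix,
    assuming target i is the unique ground truth for query i."""
    ranks = (-sim).argsort(axis=1)                 # targets sorted by similarity
    gt = np.arange(sim.shape[0])[:, None]
    hit_rank = (ranks == gt).argmax(axis=1)        # rank of the ground truth
    return {k: float((hit_rank < k).mean() * 100) for k in ks}

def rsum(img2txt_sim, txt2img_sim):
    """RSUM = sum of R@{1,5,10} over both retrieval directions."""
    return sum(recall_at_k(img2txt_sim).values()) + sum(recall_at_k(txt2img_sim).values())
```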
### 4.2 Finetuning Evaluation

The results of finetuning CLIP variants trained on CLIP400M are reported in Table 7, using three different methods: linear-probing, parameter-efficient fine-tuning (PEFT) (Hu et al., 2021), and full fine-tuning. We use the same hyperparameters (except for the learning rate) and setup as in Radford et al. (2021) and Dong et al. (2022) for all methods. We set the base learning rate to $5e^{-4}$ for linear-probing, $1e^{-5}$ for full fine-tuning, and $1e^{-4}$ for PEFT. These rates were selected using CLIP-ViT-B/32 and grid search as in Radford et al. (2021), with learning rates ranging from $1e^{-3}$ to $1e^{-6}$, evaluating on the 1k validation set. For linear-probing, we test two commonly used strategies: finetuning the last transformer layer, which we report on in Table 7, and adding an additional linear layer on top of the frozen CLIP model, which we report on in Appendix C. PEFT can be considered an intermediate strategy between linear-probing and fine-tuning, as it adds trainable parameters to each layer while keeping the original parameters frozen. For PEFT, we use LoRA (Hu et al., 2021) and adopt the default hyperparameters from Mangrulkar et al. (2022). Additional details can be found in Appendix A. We use a triplet loss with margin 0.1 as the loss function for all methods.

| Method | LP I2T R@1 | LP I2T R@5 | LP I2T R@10 | LP T2I R@1 | LP T2I R@5 | LP T2I R@10 | LP RSUM | PEFT I2T R@1 | PEFT I2T R@5 |
|---|---|---|---|---|---|---|---|---|---|
| RN50 | 6.2 | 19.0 | 29.8 | 4.5 | 15.3 | 24.3 | 90.1 | 9.6 | 36.9 |
| RN101 | 6.4 | 20.2 | 30.9 | 4.5 | 15.5 | 24.7 | 102.1 | 10.0 | 37.5 |
| B/32 | 10.1 | 28.7 | 42.5 | 6.8 | 22.2 | 33.8 | 144.2 | 11.8 | 30.7 |
| B/16 | 10.3 | 29.3 | 42.1 | 7.3 | 22.9 | 34.1 | 146.0 | 12.7 | 33.3 |
| L/14 | 12.3 | 31.8 | 46.1 | 8.0 | 25.1 | 37.1 | 160.5 | 14.0 | 33.8 |
| L/14-336 | 14.4 | 34.2 | 46.9 | 9.4 | 26.4 | 38.2 | 167.5 | 15.9 | 37.2 |

Overall, the performance of CLIP is improved by fine-tuning, with full fine-tuning achieving the best overall scores. Nonetheless, the performance increase of all three methods is minimal, obtaining R@K and RSUM significantly lower than zero-shot results on MSCOCO. Across the finetuning settings, the results again exhibit an asymptotic trend: increasing the model size and the number of parameters to be finetuned leads to better performance on the cross-modal retrieval task. Meanwhile, one notable exception is when performing linear probing on RN50 and RN101, where the resulting scores are significantly degraded and lower than the zero-shot scores. These observations suggest that the challenges presented by IMP go beyond a domain shift, and demonstrate a clear limitation in existing learning paradigms.

An approach which may aid in addressing image polysemy is multi-view cross-modal retrieval, as in Song & Soleymani (2019). To this end we implement the SOTA multi-view approach, SetEmbedding (Kim et al., 2022), with the SE-101 model, which consists of ResNeXt-101 and BERT. Apart from the original design of SetEmbedding models, we further use CLIP-ViT-B/32 as the backbone to test a new variant of SetEmbedding, SE/32. We choose linear-probing (finetuning the last layer of the encoders along with the slot attention modules) and full fine-tuning for training SetEmbedding models, as the added modules are not pre-trained. The results can be seen in the lower half of Table 7.
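For concreteness, the objective used in all of these fine-tuning runs can be sketched as follows: a minimal PyTorch version of a triplet loss with margin 0.1. The in-batch hard-negative mining (VSE++-style) is an assumption on our part, as the text above states only the loss type and margin.

```python
import torch
import torch.nn.functional as F

def triplet_loss(img_emb, txt_emb, margin=0.1):
    """Hinge-based triplet loss with in-batch hard negatives.
    img_emb, txt_emb: (B, d) embeddings of matching image-caption pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    sim = img_emb @ txt_emb.t()                     # (B, B) cosine similarities
    pos = sim.diag().view(-1, 1)                    # matching pairs on the diagonal
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost_i2t = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)
    cost_t2i = (margin + sim - pos.t()).clamp(min=0).masked_fill(mask, 0)
    # Penalise only the hardest in-batch negative per image and per caption.
    return cost_i2t.max(1)[0].mean() + cost_t2i.max(0)[0].mean()
```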
We find that SE-101 has lower performance than CLIP-ViT-B/32 in both the linear probing and the full fine-tuning scenarios. However, SE/32 has a lower score in the linear probing case but a higher score in the full fine-tuning case, resulting in text-to-image performance that is higher than CLIP-ViT-B/16. This relatively high text-to-image recall performance is different from the other VLM, which may indicate that multi-view models are better at handling polysemy than single-view models. Nevertheless, a performance gap remains.

4.3 Qualitative Analysis

Figure 3: Examples of hard captions in the image-text matching task from IMP. “Hard” indicates the caption is still wrongly predicted as mismatching after fine-tuning.

For further analysis, we implement a simple image-text matching task where we predict whether the image and text are matched or not by passing a concatenation of the image and text embedding to a linear layer followed by a softmax output. The pre-trained image-text matching model uses a frozen CLIP-ViT-B/32 as feature extractor and only trains the linear layer, so prediction directly relies on the cosine similarity computed by CLIP. During fine-tuning, a binary cross-entropy loss is added alongside the triplet loss, with CLIP-ViT-B/32 (with the last layer unfrozen) as the backbone; all other settings are the same as in the fine-tuning evaluation in Section 4.2. As we are particularly interested in determining whether models can recognise that highly conceptual captions are correctly matched to images, we focus on the false negative rate (FNR), which measures whether captions are incorrectly identified as mismatching. The results of this analysis show that the FNR before fine-tuning on the image-text matching task is 14.6%, and after it is 6.5%. As such, fine-tuning does improve the model’s capabilities to deal with challenging captions, but the FNR remains fairly high (for comparison, the FNR after fine-tuning on MSCOCO is 1.1%). We show a few examples which are still wrongly predicted after training in Figure 3; we name these “hard captions”. For instance, the second caption “I guess my bill of electricity will be much higher than before” was predicted as mismatching, while a human would likely consider this a valid caption for the image. Additional hard caption examples and analysis can be found in Appendix D.

5 Conclusion

We propose IMP, a new benchmark to challenge the capability of VLMs on image polysemy, which is the phenomenon that a single image may convey multiple different meanings. IMP consists of 23k images with diverse captions curated from the web and through manual annotation. We evaluated a wide range of SOTA VLMs on IMP in both zero-shot and finetuning settings, and found that existing models struggle to learn from polysemous image-text pairs. Furthermore, we tested whether a multi-view approach may aid in overcoming this issue, and found that it similarly struggles, but that it achieves relatively better text-to-image retrieval performance, which we regard as crucial for understanding image polysemy. In this work we emphasised the polysemous nature of images and demonstrated how existing learning paradigms for vision-language struggle in addressing it. Our hope is that IMP can serve as a benchmark for future research on image polysemy and shape improvements in vision-language representation learning.

REFERENCES

Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Dhruv Batra, and Devi Parikh. VQA: Visual Question Answering. *arXiv preprint arXiv:1505.00468*, 2016.
Arjun R. Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas Guibas, William T. Freeman, Yuanzhen Li, and Varun Jampani. MetaCLUE: Towards Comprehensive Visual Metaphors Research. *arXiv preprint arXiv:2212.09898*, 2023.

Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. In *2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6077–6086. IEEE, 2018. ISBN 978-1-5386-6420-9.

Muhammad Awais, Muzammal Naseer, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Foundational Models Defining a New Era in Vision: A Survey and Outlook. *arXiv preprint arXiv:2307.13721*, 2023.

Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. Multimodal Machine Learning: A Survey and Taxonomy. *arXiv preprint arXiv:1705.09406*, 2017.

Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. *arXiv preprint arXiv:2102.08981*, 2021.

Jiacheng Chen, Hexiang Hu, Hao Wu, Yuning Jiang, and Changhu Wang. Learning the Best Pooling Strategy for Visual Semantic Embedding. In *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 15784–15793, 2021.

Xinlei Chen, Alan Ritter, Abhinav Gupta, and Tom Mitchell. Sense discovery via co-clustering on images and text. In *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 5298–5306. IEEE, 2015. ISBN 978-1-4673-6964-0.

Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, and Ledell Wu. AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities. *arXiv preprint arXiv:2211.06679*, 2022.

Karan Desai, Gaurav Kaul, Zubin Trivadi Aysola, and Justin Johnson. RedCaps: Web-curated image-text data created by the people, for the people. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)*, 2021. URL https://openreview.net/forum?id=VjJxBilp9zh.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. *arXiv preprint arXiv:1810.04805*, 2019.

Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Shuyang Gu, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, and Nenghai Yu. CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet. *arXiv preprint arXiv:2212.06138*, 2022.

Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. EVA-02: A Visual Representation for Neon Genesis. *arXiv preprint arXiv:2303.11331*, 2023.

David A. Forsyth and Jean Ponce. *Computer Vision: A Modern Approach*. Prentice Hall Professional Technical Reference, 2002. ISBN 978-0-13-085198-7.

Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, and Ludwig Schmidt. DataComp: In search of the next generation of multimodal datasets.
*arXiv preprint arXiv:2304.14108*, 2023.
ZS4m74kZpH
If I understand correctly, you use the irrelevant context (e.g., in the single-hop case) to train the LM to answer the question by ignoring the context. Isn't this (almost) the definition of hallucination? The resulting LM will produce information not grounded in any passages. Isn't it better to abstain / request a new query, if the context is irrelevant?
MAKING RETRIEVAL-AUGMENTED LANGUAGE MODELS ROBUST TO IRRELEVANT CONTEXT

Ori Yoran¹ \quad Tomer Wolfson¹,² \quad Ori Ram¹ \quad Jonathan Berant¹
¹Tel Aviv University, ²Allen Institute for AI
\{ori.yoran, ori.ram, joberant\}@cs.tau.ac.il \quad tomerw@allenai.org

ABSTRACT

Retrieval-augmented language models (RALMs) hold promise to produce language understanding systems that are factual, efficient, and up-to-date. An important desideratum of RALMs is that retrieved information helps model performance when it is relevant, and does not harm performance when it is not. This is particularly important in multi-hop reasoning scenarios, where misuse of irrelevant evidence can lead to cascading errors. However, recent work has shown that retrieval augmentation can sometimes have a negative effect on performance. In this work, we present a thorough analysis on five open-domain question answering benchmarks, characterizing cases when retrieval reduces accuracy. We then propose two methods to mitigate this issue. First, a simple baseline that filters out retrieved passages that do not entail question-answer pairs according to a natural language inference (NLI) model. This is effective in preventing performance reduction, but at a cost of also discarding relevant passages. Thus, we propose a method for automatically generating data to fine-tune the language model to properly leverage retrieved passages, including for challenging multi-hop tasks, using a mix of relevant and irrelevant contexts at training time. We empirically show that even 1,000 examples suffice to train the model to be robust to irrelevant contexts while maintaining high performance on examples with relevant ones.

1 INTRODUCTION

Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023) are the foundation on top of which modern language systems are built. However, open-domain question answering (ODQA; Chen et al., 2017) and other knowledge-intensive tasks (Thorne et al., 2018; Petroni et al., 2021) require vast amounts of up-to-date factual knowledge about rare entities that even very large models cannot memorize (Roberts et al., 2020; Dhingra et al., 2022). A dominant approach for combating this issue has been Retrieval Augmented Language Models (RALMs), which incorporate a retrieval mechanism to reduce the need for storing information in the LLM parameters (Guu et al., 2020; Lewis et al., 2020; Izacard et al., 2023; Rubin et al., 2023). Furthermore, RALMs have also been shown to improve ODQA performance in an in-context setting (without any training), simply by prepending retrieved sentences to the input question (Ram et al., 2023).

Nevertheless, retrievers are not perfect and past work has shown that noisy retrieval can negatively affect LLM performance (Petroni et al., 2020; Li et al., 2023). For example, when posed with the question “Who is playing Jason on General Hospital?” in Fig. 1, a vanilla LLM (left) correctly answers the question while the RALM (right) is “distracted” by irrelevant context about the actor portraying Cooper, not Jason. In this work, we analyze and improve the robustness of RALMs to noisy retrieved contexts. Our definition for retrieval-robust LLMs states that: (a) when relevant, the retrieved context should improve model performance; (b) when irrelevant, the retrieved context should not hurt model performance.
To this end, we present two methods for retrieval-robustness in RALMs (§2). First, we consider a setting where we have black-box access to the LLM and cannot train it. Rather than solely relying on in-context prompting (Brown et al., 2020), we frame retrieval robustness as a natural language inference (NLI) problem (Dagan et al., 2006; Bowman et al., 2015). Namely, given a question and retrieved context, an NLI model can predict whether a question-answer pair (hypothesis) is entailed by the context (premise). Building on the strong performance of recent NLI models (e.g., in detecting model hallucinations (Honovich et al., 2022) and attributed question answering (Bohnet et al., 2023)), we use such models to identify irrelevant contexts. When the context is labeled as irrelevant to the question-answer pair, we generate the answer using the LLM without retrieval as a “back-off strategy”. Our results show that this natural baseline is highly effective at identifying irrelevant contexts, but is too strict and discards relevant ones as well (§4).

We then propose a method for training RALMs to be retrieval-robust. Intuitively, LLMs are not trained with retrieved passages, and thus brittleness to noisy retrieval is somewhat expected. Therefore, we perform an additional finetuning step that teaches the LLM to be robust to noisy contexts. The core challenge is to generate data for finetuning, and we describe a procedure for automatically generating such data for both single-hop and multi-hop questions. In the single-hop setting, assuming access to gold QA pairs and a retriever, we create training examples using retrieved contexts, where we can use low-ranked or random passages as noisy contexts. In the multi-hop setting, training examples need to contain not only retrieved contexts, but also intermediate questions, answers and relevant contexts, which comprise the question decomposition (Fig. 3), shown to be necessary for high performance on multi-hop questions (Wolfson et al., 2020; Press et al., 2023). To generate decompositions to train on, we use a strong LLM, prompted for decomposition without any retrieval. Then, we can sample multiple decompositions, and use self-consistency (Wang et al., 2023) to identify high-quality training examples (§3.2.3).

To test our methods, we evaluate retrieval robustness on five ODQA benchmarks, four of which contain multi-hop questions, where the retriever is called multiple times (Jiang et al., 2023). Fig. 2 shows that even with a strong retriever (top-1 Google search) incorporating the retrieved context actually hurts model performance on two of the benchmarks (STRATEGYQA and FERMI). Moreover, adding randomly-retrieved contexts dramatically decreases accuracy on all five datasets. Our analysis (§5) shows that irrelevant context causes a wide range of errors, which include copying irrelevant answers from the retrieved sentences and hallucinating incorrect answers and decompositions.

Our results demonstrate that finetuning LLMs to be retrieval-robust enables them to ignore irrelevant context while improving their overall accuracy (§4). When using a strong retriever at test time, our finetuned models outperform both models that were finetuned without retrieval and untrained models prompted using in-context learning. To test robustness to noisy context, we evaluate QA accuracy when models are given randomly-retrieved contexts.
In this setting, our finetuned models perform on par with those that were finetuned without retrieval, demonstrating retrieval robustness. In addition, our ablation study shows that training models on a mixture of relevant and irrelevant contexts results in models that are much more robust to irrelevant context. To summarize, our main contributions are:

• We conduct a thorough analysis on the robustness of RALMs to irrelevant retrieved contexts.
• We show that small NLI models can be used to identify irrelevant context and improve robustness, without updating the model parameters.
• We demonstrate that training LLMs when to use retrieval helps make models robust to irrelevant context and improves their overall performance, including in challenging multi-hop tasks.

1Our code, data, and models are available at https://github.com/oriyor/ret-robust

Figure 2: Accuracy for Llama-2-13B few-shot prompted on five QA tasks, in three settings: (a) without retrieval, (b) with top-1 retrieval from a strong search engine, and (c) with a randomly-retrieved passage. Retrieval augmentation can boost performance, but even strong retrieval hurts performance on StrategyQA and Fermi, and random contexts reduce performance dramatically.

2 Making RALMs Robust to Irrelevant Contexts

We now present our methods for building RALMs that are robust to irrelevant contexts. We begin by describing the common approach for incorporating evidence into RALMs. Next, we explore a natural baseline for using an NLI model to identify irrelevant contexts. Last, we describe our procedure for finetuning models to be robust to irrelevant context.

In-context RALMs Language models define a probability distribution over sequences of tokens, with auto-regressive models assigning a probability via next-token prediction: \( p_{LM}(x) = \prod_{i=1}^{n} p_{\theta}(x_i \mid x_{<i}) \), where \( x_{<i} \) is the sequence of tokens preceding \( x_i \) at each step and \( \theta \) denotes the parameters of the LM. For RALMs, we follow the definition of in-context RALMs from Ram et al. (2023), where context sentences are retrieved from a corpus \( C \), and generation is conditioned on the retrieved context. Given the retrieval operation \( R_C \), this can be formalized as \( p_{RALM}(x) = \prod_{i=1}^{n} p_{\theta}(x_i \mid [R_C(x_{<i}); x_{<i}]) \), where \( [R_C(x_{<i}); x_{<i}] \) denotes the concatenation of the retrieved evidence with the generated sequence. Generation in LMs and RALMs can also be conditioned on additional input, which we omit for brevity.

In our setting, we focus on RALMs for ODQA. We follow recent approaches such as Self-Ask and IR-CoT (Press et al., 2023; Trivedi et al., 2023; Yoran et al., 2023) for interleaving retrieval with multi-hop question answering (see Fig. 3). Retrieval is performed for every intermediate question, and each context is prepended to the question. In the single-hop setting, the model has to generate the answer given a question and retrieved context. In the multi-hop setting, the model has to generate intermediate questions and answers until arriving at the final answer, and the retriever is called for the original question and after each intermediate question. Formally, \( x \) in this case is the generated decomposition until an intermediate step and \( R_C(x) \) are the retrieved contexts for all questions in \( x \).
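For concreteness, the following Python sketch shows how such in-context retrieval augmentation with Self-Ask-style interleaving might look. The `retrieve` and `generate` functions are hypothetical stand-ins for a retriever and an LM decoding call, and the "Follow up:" / "So the final answer is:" markers follow the Self-Ask format of Press et al. (2023); this is an illustrative sketch, not the paper's exact implementation.

```python
def self_ask_ralm(question, retrieve, generate, max_hops=4):
    # In-context RALM: retrieved evidence is prepended for the original
    # question, and again after each generated intermediate question.
    trace = f"Context: {retrieve(question)}\nQuestion: {question}\n"
    for _ in range(max_hops):
        step = generate(trace)  # intermediate question/answer, or the final answer
        trace += step + "\n"
        if step.startswith("So the final answer is:"):
            return step.removeprefix("So the final answer is:").strip()
        if step.startswith("Follow up:"):
            sub_question = step.removeprefix("Follow up:").strip()
            trace += f"Context: {retrieve(sub_question)}\n"  # R_C for the new step
    return generate(trace)
```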
2.1 Identifying Irrelevant Contexts with NLI Models

NLI models (Dagan et al., 2006; Bowman et al., 2015) classify whether a textual hypothesis is entailed, neutral, or contradicted given a textual premise. Recent work successfully used NLI models to automatically identify hallucinations (Honovich et al., 2022) and statement attribution (Bohnet et al., 2023) when presented with a context and generated text. Similarly, a natural baseline is to frame irrelevant context identification as an NLI problem, by using the retrieved context only when the hypothesis (i.e., the final answer and intermediate question-answer pairs; Fig. 3) is classified as entailed by the premise (i.e., the retrieved context). We use a simple back-off strategy where we generate twice, once with \( p_{LM} \) and once with \( p_{RALM} \), and only use the RALM output if the NLI model classified all generated answers (and intermediate questions) as entailed by the retrieved evidence.

Figure 3: Interleaving decomposition and retrieval in Self-Ask format (Press et al., 2023). The model generates intermediate questions and answers until generating the final answer (model generations are shown in pink). Retrieved evidence for intermediate questions is prepended at each step.

For example, in Fig. 1, the retrieved evidence “Jason Gerhardt... is an American actor... known for playing Cooper Barrett...” serves as the premise, while the question and generated answer, “Q: Who is the actor playing Jason on general hospital? A: Steve Burton”, are concatenated and serve as our hypothesis. As this context is irrelevant, we expect the NLI model to classify the hypothesis as contradicted. Given a contradicting or neutral prediction, we use the standard LLM without the (potentially distracting) retrieved context. For multi-hop questions (as in Fig. 3), we additionally verify that each intermediate question-answer pair is entailed by the retrieved evidence, using all retrieved evidence as our premise and the intermediate question-answer pair as the hypothesis. For example, “Q: Who is Colonel Walter Phelps? A: Colonel Walter Phelps was an officer in the Union Army throughout the American Civil War.” for the first intermediate question in Fig. 3.
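A minimal sketch of this back-off strategy, assuming the MNLI model used later in our experiments (facebook/bart-large-mnli, §3.2.2) and a hypothetical `answer_with` LM call:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    return probs[nli.config.label2id["entailment"]].item()

def robust_answer(question, context, answer_with, threshold=0.5):
    ralm_answer = answer_with(question, context)          # generate with p_RALM
    hypothesis = f"Q: {question} A: {ralm_answer}"
    if entailment_prob(context, hypothesis) >= threshold:  # context entails answer
        return ralm_answer
    return answer_with(question, None)                     # back off to p_LM
```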
2.2 Training Robust RALMs

As in-context RALMs are not trained to use retrieved passages, a more effective solution than post-hoc filtering (using NLI) may be to train RALMs to ignore irrelevant contexts. We are interested in testing whether training on a relatively small dataset (several hundreds of examples) would suffice.

**Automatically Generating Training Data** Our goal is to teach RALMs to be robust to irrelevant context in an ODQA setting. In the single-hop setting, generating training data is straightforward. Given access to a dataset of question-answer pairs \(\{(q, a)\}\) (i.e., without contexts) and a retriever \(R_C\), we use the retriever to augment questions with retrieved context. To create training examples with relevant contexts, we return the top-1 context from \(R_C(q)\), and for irrelevant contexts, we either return a low-ranked result from \(R_C(q)\) or a random context (i.e., \(R_C(q')\) for another question \(q'\)). We denote the chosen context by \(r_q\). Then, the training dataset is defined by \(D = \{([r_q; q], a)\}\).

Our main challenge is generating training examples for multi-hop questions. In these questions, the model generates a decomposition, consisting of intermediate questions and answers, before arriving at the final answer, while the retriever is called multiple times (Fig. 3). Our goal is to automatically generate retrieval-augmented decomposition steps, \(D = \{([r_x; x], y)\}\), where \(y\) is the correct generation for each step (i.e., the correct intermediate question, intermediate answer, or final answer); \(x\) consists of the previously generated steps up to \(y\); and \(r_x\) is the retrieved contexts for all steps in \(x\). Our first step to automatically generate decompositions is to prompt a strong LLM without access to retrieval and to verify its answers. However, the LLM may arrive at the correct answer using an incorrect decomposition, for example in binary or comparison questions. Hence, we need to ensure the quality of generated decompositions. For multi-hop datasets which provide intermediate answers, we simply filter out generated decompositions that do not contain them. When intermediate answer annotations are unavailable, we sample from the LLM that generated the decomposition multiple times and verify self-consistency (Wang et al., 2023). Further details are given in §3.2.3.

| Dataset | Type | Example |
|------------|------------|-------------------------------------------------------------------------|
| NQ | Single-hop | What episode of law and order svu is mike tyson in? |
| 2WikiMQA | Explicit | Where was the place of death of Isabella Of Bourbon’s father? |
| BAMBOOGLE | Explicit | What is the maximum airspeed (in km/h) of the third fastest bird? |
| STRATEGYQA | Implicit | Can Arnold Schwarzenegger deadlift an adult Black rhinoceros? |
| FERMI | Implicit | How many high fives has Lebron James given/received? |

Table 1: The QA datasets in our experiments.

**Training** We use our automatically generated data $D$ to fine-tune models for generating $y$ conditioned on $[r_x; x]$ with standard maximum likelihood. Since we are mostly interested in the low-data regime, we limit the number of questions in $D$ to 1,000 in the single-hop setting and 500 in the multi-hop setting (splitting multi-hop questions into multiple examples for each step), and use parameter-efficient fine-tuning (Dettmers et al., 2023). Thus, training all our models takes no more than a few hours. Additional experimental details are in §4 and §A.1.

## 3 EXPERIMENTAL SETTING

### 3.1 DATASETS

We experiment with both single- and multi-hop QA datasets. We list and give an example from each dataset in Tab. 1. Our QA benchmarks can be categorized based on their required reasoning skills:

- **Single-hop:** Information-seeking questions that do not require decomposition. We use the popular Natural Questions (NQ) dataset (Kwiatkowski et al., 2019).
- **Explicit Reasoning:** Multi-hop questions where reasoning is explicitly expressed in the question. We include 2WikiMQA (Ho et al., 2020) and BAMBOOGLE (Press et al., 2023).
- **Implicit Reasoning:** Multi-hop questions where generating reasoning steps requires common sense (implicit reasoning; Geva et al., 2021). Such questions may have multiple valid reasoning chains. We evaluate on STRATEGYQA (Geva et al., 2021) and FERMI (Kalyan et al., 2021).

For evaluation, we follow prior work and use EM for NQ and STRATEGYQA, and $F_1$ for 2WikiMQA and BAMBOOGLE. For FERMI, we use the official order-of-magnitude evaluation (Kalyan et al., 2021). Following prior work (Khattab et al., 2022; Trivedi et al., 2023; Yoran et al., 2023), we evaluate on 500 random examples from the development set of each dataset. We provide additional technical details on evaluation in §A.2.
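To illustrate the single-hop data construction \(D = \{([r_q; q], a)\}\) from §2.2, a minimal sketch; `retrieve` (returning a ranked passage list) is an assumed helper, and the equal-probability mixture of context types mirrors the construction used for our trained model later (§3.2.3):

```python
import random

def make_example(question, answer, retrieve, all_questions):
    # Choose the retrieved context r_q: top-1 (relevant), low-ranked, or the
    # top-1 passage of a different random question (irrelevant).
    mode = random.choice(["top1", "low_rank", "random"])
    if mode == "top1":
        r_q = retrieve(question)[0]
    elif mode == "low_rank":
        r_q = retrieve(question)[-1]              # lowest-ranked returned passage
    else:
        r_q = retrieve(random.choice(all_questions))[0]
    return {"input": f"Context: {r_q}\nQuestion: {question}\nAnswer:",
            "target": answer}
```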
### 3.2 MODELS

We next describe our retrievers (§3.2.1), prompted baselines (§3.2.2), and finetuned models (§3.2.3).

#### 3.2.1 RETRIEVERS

Our models use a retriever based on GOOGLE SEARCH, as well as the open-source COLBERTV2 (Khattab & Zaharia, 2020). Since the corpus for our datasets is Wikipedia, we format search queries as “en.wikipedia.org $q_i$” when accessing GOOGLE SEARCH. For COLBERTV2, our corpus is the 2018 Wikipedia from Karpukhin et al. (2020). To simulate different types of noise, we return either the top-1 result, a low-ranked result, or a random passage that is the top-1 evidence for a different question or intermediate question from the same dataset.

---
2We query Google search via the SerpAPI service: [https://serpapi.com/](https://serpapi.com/)
3For GOOGLE SEARCH, we use the lowest returned result from the API, which is at rank 9.3 on average. For COLBERTV2 we only experiment with top-1 results.

3.2.2 Few-shot Prompted Baselines

Our main baselines are \textit{Llama-2-13B} models prompted for QA in the Self-Ask format through in-context learning (Brown et al., 2020) with 4-6 exemplars. We also evaluate with \textit{Llama-2-70B} on NQ. Our baselines differ based on the retrieved contexts in the exemplars (full prompts in §A.5):

- **Self-Ask No Retrieval (SA-NR):** Exemplars are gold decompositions without retrieved evidence. We use this prompt to evaluate the performance of models without retrieval, when relying solely on their parametric memory, i.e., the information encoded in the model’s parameters. As an additional baseline, we use this non-retrieval prompt, but still apply retrieval during inference.
- **Self-Ask Retrieval@1 (SA-R@1):** Exemplars are gold decompositions prepended with the most relevant evidence retrieved from GOOGLE SEARCH for each step.
- **Self-Ask Retrieval@10 (SA-R@10):** Exemplars are gold decompositions prepended with the lowest-ranked passage from Google (which is rank 10 in most cases).
- **Self-Ask Random Retrieval (SA-RMix):** Exemplars are gold decompositions prepended with either the top-1 or lowest-ranked evidence from GOOGLE SEARCH, interchangeably.

**NLI-based Models** We use a BART-Large model (Lewis et al., 2020a) with 407 million parameters, trained on the MNLI dataset (Williams et al., 2018). We consider a question-answer pair as entailed if the probability for the entailment label is $\geq 0.5$. All few-shot prompted baselines have a variant with NLI, termed SA-*-NLI. When there is no entailment, we use the generation from the SA-NR model, which uses only the parametric memory, as the back-off strategy.

3.2.3 Fine-tuned Models

We finetune \textit{Llama-2-13B} on 3 ODQA benchmarks: one single-hop (NQ, 1,000 training examples), one explicit (2WIKIMQA, 500 questions, 1,539 examples), and one implicit (STRATEGYQA, 414 questions, 1,584 examples). Training hyperparameters are in §A.1.

**Data Generation** We use an LLM to verify that questions are answerable and to generate decompositions. This is done with GPT-3, \textit{code-davinci-002} (Brown et al., 2020; Chen et al., 2021), with 175B parameters. We prompt the model to generate decompositions using the SA-NR prompt. 2WIKIMQA contains intermediate answers, and we use those to verify generated decompositions. For the implicit STRATEGYQA we utilize only the final answer, and thus use self-consistency, as explained in §2. We sample 5 decompositions per question (one with greedy decoding and four with temperature 0.7) and only keep the greedily-decoded decomposition when all decompositions lead to the same correct answer. To verify the quality of the generated decompositions, we manually examine 50 decompositions per dataset and find that they are correct about 90% of the time for STRATEGYQA and more than 95% of the time for 2WIKIMQA.

---
4We use the model from \url{https://huggingface.co/facebook/bart-large-mnli}.
5To not train our models to hallucinate, we also filter single-hop questions where \textit{code-davinci-002} fails to generate the correct answer. However, we cannot fully guarantee that the gold answer appears in the retrieved context or is encoded in the parameters of the model being trained.
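A sketch of this self-consistency filter, with `generate_decomposition` and `extract_final_answer` as hypothetical helpers:

```python
def filter_by_self_consistency(question, gold_answer, generate_decomposition,
                               extract_final_answer, n_samples=4):
    # Keep the greedily-decoded decomposition only if it and all sampled
    # decompositions arrive at the same correct final answer.
    greedy = generate_decomposition(question, temperature=0.0)
    sampled = [generate_decomposition(question, temperature=0.7)
               for _ in range(n_samples)]
    answers = [extract_final_answer(d) for d in [greedy] + sampled]
    return greedy if all(a == gold_answer for a in answers) else None
```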
As FERMI and BAMBOOGLE contain fewer than 300 examples, we use them exclusively for evaluation and do not include them in these experiments.

**Incorporating Retrieved Evidence in Training Examples** To make sure the model is exposed to both relevant and irrelevant context, we use either the top-1, a low-ranked, or a random evidence with equal probability at each step. We term the trained model SA-RetRobust. We include ablations where training is without retrieved context (SA-NoRet) or only with the top-1 evidence (SA-Ret@1).

4 Results

Fig. 4 presents our main results, evaluating the effect that retrieving the top-1 result from GOOGLE SEARCH has on the following RALMs: (a) an In-Context RALM, prompted with the SA-RMix prompt (leftmost, yellow), (b) the same model, but using NLI models to identify irrelevant context (center, green), and (c) our proposed SA-RetRobust, a RALM fine-tuned on a mixture of relevant and irrelevant contexts (rightmost, orange). The bars show the difference in performance from our few-shot prompted model without retrieval (whose performance is shown in parenthesis for each dataset).

Figure 4: Results for our models on all evaluation datasets when retrieving top-1 results from GOOGLE SEARCH. Bars show the difference in performance from a model with no retrieval (whose performance is given in parenthesis for each dataset). Prompting models to use retrieval in-context (leftmost bar) increases performance on single-hop and explicit datasets, but decreases performance on implicit ones (STRATEGYQA and FERMI). When using NLI models to identify irrelevant evidence (center bar), retrieval never hurts, at a cost to gains received when retrieval is helpful. Our trained RALMs (rightmost bar) outperform all other models where applicable, i.e., for NQ, 2WikiMQA, and STRATEGYQA (see §3.2.3 for more details on data generation).

For the In-Context RALM, we observe that retrieval helps on NQ, 2WikiMQA and BAMBOOGLE, but reduces performance on the implicit STRATEGYQA and FERMI. Adding NLI to identify irrelevant context ensures that retrieval does not hurt, but gains are limited. Training with retrieval leads to gains across the board. We observe similar trends with the COLBERTV2 retriever, albeit at an overall decrease in accuracy (§A.3, Tab. 5).

**Exploring the Robustness of Models to Irrelevant Context** Fig. 5 presents results when simulating retrieval of irrelevant/noisy context, either by retrieving low-ranked passages (top) or random ones (bottom). When retrieving random passages, the performance of the In-Context RALM drops by more than 10 points on average, a phenomenon that can be mitigated by using NLI models. SA-RetRobust performs best across all settings.
To verify that these improvements indeed stem from robustness to irrelevant context rather than task-specific training, we compare SA-RetRobust to an ablated variant trained and evaluated without retrieval (full results in Tab. 4, §A.3). SA-RetRobust performs similarly to this model (within one standard deviation) when retrieving random contexts. Interestingly, when retrieving low-ranked results, SA-RetRobust outperforms the ablated model by 3.8 and 2.8 points on NQ and 2WikiMQA, while performing only slightly worse (within a 1.2 point difference) on STRATEGYQA. Overall, our results suggest SA-RetRobust learned to both better utilize retrieval and ignore irrelevant context.

**Adding Retrieval to In-context Exemplars can Hurt Performance** Tab. 2 and Tab. 3 in §A.3 present full results with the GOOGLE SEARCH and COLBERTV2 retrievers. Interestingly, providing exemplars with retrieval performs worse than providing exemplars without retrieval, i.e., the SA-NR prompt leads to better performance even when retrieval is performed at inference time. This SA-NR prompt consistently outperforms the prompts with retrieval (SA-R@1, SA-R@10, and SA-RMix) when retrieving the top-1 result from COLBERTV2 or random contexts from GOOGLE SEARCH. In addition, the SA-R@1 model that contains top-1 results in the prompt is not the best performing even when retrieving top-1 results at inference time, losing to SA-NR by more than 2 points on average across datasets. When retrieving noisy contexts at inference time, SA-R@1 is outperformed by the other models, suggesting that showing examples for retrieval during in-context learning has a negative effect that causes over-utilization of irrelevant context. We observe a similar trend with Llama-2-70B in §A.3, Tab. 6.

**Effect of NLI** When retrieving random contexts or evaluating on the implicit STRATEGYQA and FERMI, NLI variants consistently perform best, suggesting small NLI models are sufficient to identify irrelevant evidence (Tab. 2 and Tab. 3 in §A.3). However, they reduce performance in cases where retrieval is helpful, e.g., on the explicit 2WikiMQA and Bamboogle. We perform a detailed analysis of our NLI variants in §5.

Figure 5: Results with low-rank (top) and random retrieval (bottom). Models are similar to those in Fig. 4. Performance significantly decreases for the prompted model in all settings, while it is maintained when using NLI models. Our finetuned SA-RetRobust is best performing in all settings. We show that SA-RetRobust learned to both ignore irrelevant context and better utilize relevant context by comparing to an ablated model without retrieval in §4.

**Results with Finetuned Models** Fig. 4 and Fig. 5 show SA-RetRobust consistently outperforms other models. In §A.3, Tab. 4, we present results for all trained models, showing SA-RetRobust outperforms our ablated baselines. Specifically, it outperforms SA-NoRet (fine-tuned without retrieval) by 2.7, 2.4, and 2.4 points on average when using the top-1, a low-ranked, or a random context from GOOGLE SEARCH during inference, and SA-Ret@1 by 0.2, 0.4, and 3.2 points, respectively. When retrieving top-1 results from COLBERTV2, SA-RetRobust outperforms SA-NoRet and SA-Ret@1 by 2.7 and 0.3 points on average, respectively. Our results suggest that training on a mixture of relevant and irrelevant contexts is necessary for robustness and improved performance. We provide a study on the generalization of our trained models to other settings in §A.3.
**Results with Llama-2-70B** We compare SA-RetRobust with Llama-2-70B on the NQ dataset to assess whether larger models are more robust to irrelevant contexts. Without retrieval, the prompted Llama-2-70B outperforms the trained Llama-2-13B by 4.3 points (38.4 vs 34.1). However, when retrieving the top-1 results from GOOGLE SEARCH, SA-RetRobust outperforms all prompted Llama-2-70B variants by at least 3.3 points (45.7 vs 42.4), suggesting that increasing model size alone is not sufficient to make models better utilize retrieval. We provide the full results in §A.3, Tab. 6.

5 ANALYSIS

**When Does Irrelevant Context Cause Errors?** To assess errors caused by irrelevant context, we manually looked at examples from NQ, 2WikiMQA and StrategyQA where models succeed without retrieval, but fail with it. Specifically, we look at examples where the model is prompted with the SA-RMix prompt, which includes both top-1 and low-ranked retrieved results, and is presented with low-ranked or random retrieved evidence during inference. We manually annotated 40 examples in each setting (240 overall), and find that automatic errors indeed correlate with cases in which retrieval augmentation caused the model to err in 73% of the cases (65%–85% in each setting). We provide additional details and statistical tests in §A.4.

We then take a deeper look at the errors. For NQ, we find that when using low-ranked context, the wrong generated answer entity appears in the retrieved context in the majority (77%) of the cases, but only in 37% when retrieving random contexts. This suggests that irrelevant context can cause errors even when the generated entities are not retrieved, as shown in §A.4, Fig. 6. For multi-hop questions, we test whether irrelevant context leads to errors in question decomposition, or in answering intermediate questions. We find that when retrieving low-ranked passages, most of the errors (68%) for the explicit 2WikiMQA are in intermediate answers, contrary to the implicit StrategyQA, where errors are more prevalent in intermediate questions (77% of the cases; we provide an example in §A.4, Fig. 7). Similarly, when retrieving random contexts, most errors (60%) for 2WikiMQA are in intermediate questions. This suggests that irrelevant context can cause errors in generating both an answering strategy and the answer itself, depending on the task and the retrieved context.

**When Do NLI Models Fail?** As shown in §4, NLI models are efficient at identifying irrelevant context, at a cost to gains when retrieval is helpful. To better characterize NLI models, we look at the accuracy of our SA-*-NLI models as a function of the probability that the NLI model assigns to the entailment label. Tab. 8 in §A.4 shows that there are many cases where the probability for entailment is low but retrieval helps for NQ and 2WikiMQA. To better identify the source of such errors, we manually analysed 25 examples for each dataset where entailment was low, but retrieval augmentation led the SA-RMix model to generate the correct answer. In about half of the cases, the NLI model erred and the generated text is indeed entailed by the retrieved contexts. In the remaining examples, for at least a third of the cases, the generated answer or decomposition is correct, but the retrieved context does not directly entail the generation. This can be partially explained by the ability of models to combine retrieval and their parametric knowledge (Talmor et al., 2020; Zhong et al., 2023; Cohen et al., 2023).
We are hopeful that these results can inspire future work to focus on additional aspects of retrieval augmentation, such as the effect augmentation has on generation probability rather than on direct entailment.

6 RELATED WORK

Recent work has shown that the performance of LLMs can be affected by irrelevant context. Amongst others, Jia & Liang (2017); Petroni et al. (2020); Creswell et al. (2023) show that adding random or irrelevant context can decrease QA performance. This has been shown in many settings, including but not limited to factual reasoning (Kassner & Schütze, 2020; Pandia & Ettinger, 2021; Misra et al., 2023), text generation about new entities (Onoe et al., 2022), and even code generation (Jones & Steinhardt, 2022). In the context of arithmetic reasoning, Shi et al. (2023) showed that adding irrelevant context to exemplars or task-specific instructions can help, suggesting the model may be equipped with such skills from pre-training. Other methods try to reduce the number of retrieval calls by focusing on cases where confidence is low (Jiang et al., 2023) or retrieving information for rare entities (Mallen et al., 2023). Closest to our work is that of Li et al. (2023), who propose LLMs with a “controllable memory” that enables them to ignore irrelevant context. However, their LLMs are finetuned on over 200K training examples, whereas our focus is on performance when training with 1,000 questions or fewer, and our training data is automatically generated. In addition, we focus on a multi-hop QA setting, where the retriever is called multiple times (§2). A similar line of work focuses on when models should use parametric or retrieved knowledge, especially when there are conflicts (Longpre et al., 2021; Chen et al., 2022). It has been recently proposed to train models to generate from both parametric and retrieved knowledge (Neeman et al., 2023) or to make better use of in-context exemplars (Zhou et al., 2023).

7 CONCLUSION

In this work, we provide a thorough analysis showing that current RALMs are not robust to irrelevant retrieved context, causing them to perform worse on certain tasks. In cases where training is not possible, a simple NLI baseline is effective at increasing robustness, at the cost of discarding relevant passages. When training is possible, we introduce an automatic data generation framework for single-hop and challenging multi-hop tasks, and show that training on as few as 1,000 examples with intentionally varied quality suffices to make models robust to irrelevant context and improve overall performance. While our focus in this work is on in-domain settings, we are hopeful our work can inspire future research towards a general RALM that is robust to irrelevant context.

---
There are only 25 such examples for the NQ dataset.

ACKNOWLEDGEMENTS

We would like to thank our colleagues at TAU NLP for their insightful comments. We thank SerpAPI for their support by granting us an academic discount. This research was partially supported by the Yandex Initiative for Machine Learning and the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant ERC DELPHI 802800). This work was completed in partial fulfillment of the Ph.D. of Ori Yoran.

REFERENCES

Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Massimiliano Ciaramita, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, Ji Ma, Jianmo Ni, Lierni Sestorain Saralegui, Tal Schuster, William W.
Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, and Kellie Webster. Attributed question answering: Evaluation and modeling for attributed large language models, 2023.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632–642, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1075. URL https://aclanthology.org/D15-1075.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcba967418bfb8ac142f64a-Abstract.html.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1870–1879, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1171. URL https://aclanthology.org/P17-1171.

Hung-Ting Chen, Michael Zhang, and Eunsol Choi. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 2292–2307, Abu Dhabi, United Arab Emirates, 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.emnlp-main.146.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh,
99tKiMVJhY
What is the particular difficulty of solving the Dec-POMFC system, and how does the proposed method overcome this difficulty? It would be easier to follow the paper if these questions were explicitly addressed.
Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior

Kai Cui, Sascha Hauck, Christian Fabian, Heinz Koeppl

Dept. of Electrical Engineering and Information Technology, Technische Universität Darmstadt

{kai.cui, heinz.koeppl}@tu-darmstadt.de

Abstract

Recent reinforcement learning (RL) methods have achieved success in various domains. However, multi-agent RL (MARL) remains a challenge in terms of decentralization, partial observability and scalability to many agents. Meanwhile, collective behavior requires resolution of the aforementioned challenges, and remains of importance to many state-of-the-art applications such as active matter physics, self-organizing systems, opinion dynamics, and biological or robotic swarms. Here, MARL via mean field control (MFC) offers a potential solution to scalability, but fails to consider decentralized and partially observable systems. In this paper, we enable decentralized behavior of agents under partial information by proposing novel models for decentralized partially observable MFC (Dec-POMFC), a broad class of problems with permutation-invariant agents allowing for reduction to tractable single-agent Markov decision processes (MDP) with single-agent RL solution. We provide rigorous theoretical results, including a dynamic programming principle, together with optimality guarantees for Dec-POMFC solutions applied to finite swarms of interest. Algorithmically, we propose Dec-POMFC-based policy gradient methods for MARL via centralized training and decentralized execution, together with policy gradient approximation guarantees. In addition, we improve upon state-of-the-art histogram-based MFC by kernel methods, which is of separate interest also for fully observable MFC. We evaluate numerically on representative collective behavior tasks such as adapted Kuramoto and Vicsek swarming models, being on par with state-of-the-art MARL. Overall, our framework takes a step towards RL-based engineering of artificial collective behavior via MFC.

1 Introduction

Reinforcement learning (RL) and multi-agent RL (MARL) have found success in varied domains with few agents, including e.g. robotics (Polydoros & Nalpantidis, 2017), language models (Ouyang et al., 2022) and transportation (Haydari & Yılmaz, 2020). However, tractability issues remain for systems with many agents, especially under partial observability (Zhang et al., 2021b). Here, specialized approaches give tractable solutions, e.g. via factorizations (Qu et al., 2020; Zhang et al., 2021a). We propose a general, tractable approach for a broad range of decentralized, partially observable systems.

Collective behavior & partial observability. Of practical interest is the design of simple local interaction rules to fulfill global, cooperative objectives by emergence of global behavior (Vicsek & Zafeiris, 2012). For example, intelligent self-organizing robotic swarms provide many applications such as farming, yet general design frameworks remain elusive (Hrabia et al., 2018; Schranz et al., 2021). Other domains include group decision-making and opinion dynamics (Zha et al., 2020), biomolecular self-assembly (Yin et al., 2008), and active matter (Cichos et al., 2020; Kruk et al., 2020), e.g. nano-particles (Nasiri & Liebchen, 2022) or microswimmers (Narinder et al., 2018). Overall, there is a need for scalable MARL under decentralization and partial information.

Scalable and partially observable MARL.
Despite its many applications, decentralized cooperative control remains a difficult problem even in MARL (Zhang et al., 2021b), especially if coupled with the simultaneous requirement of scalability.

Figure 1: A: Partially-observable Vicsek problem: agents must align headings (arrows), but observe only partial information (e.g., heading distribution in grey circle for orange agent). B: The decentralized model as a graphical model (grey: observed variables). C: In centralized training, we also observe the mean field, guiding the learning of upper-level actions $\pi$. D: The solved limiting MDP.

Recent scalable MARL methods include graphical decompositions (Qu et al., 2020; Zhang et al., 2021a) amongst others (Zhang et al., 2021b). However, most remain limited to full observability (Zhang et al., 2021a). One line of algorithms applies pairwise mean field (MF) approximations over neighbors (Yang et al., 2018), which has yielded decentralized, partially observable extensions (Subramanian et al., 2021; 2022). Relatedly, MARL based on mean field games (MFG, non-cooperative) and mean field control (MFC, cooperative) focuses on a broad class of systems with many exchangeable agents. While the theory for MFG is developed (Huang et al., 2006; Şen & Caines, 2019; Saldi et al., 2019), to the best of our knowledge, neither MFC-based MARL algorithms nor discrete-time MFC have been proposed under partial information and decentralization, except in special linear-quadratic cases (Tottori & Kobayashi, 2022; Wang et al., 2021). Further, MFGs have been useful for analyzing emergence of collective behavior (Perrin et al., 2021; Carmona et al., 2022), but less for "engineering" collective behavior to achieve global objectives as in MFC, which is our focus. This is in contrast to rational, selfish agents, as a decomposition of global objectives into per-agent rewards is non-trivial (Waelchli et al., 2023; Kwon et al., 2023). Beyond scalability to many agents, general MFC for MARL is also not yet scalable to high-dimensional state-actions due to discretization of the simplex (Carmona et al., 2019b; Gu et al., 2021), except in linear-quadratic models (Fu et al., 2019; Carmona et al., 2019a). Instead, we consider general discrete-time MFC and scale to higher dimensions via kernels. We note that our model has a similar flavor to TD-POMDPs (Witwicki & Durfee, 2010), as the MF also abstracts influence from all other agents. However, TD-POMDP addresses different types of problems, as it considers local per-agent states, while the MF is both globally shared and influenced by all agents.

Our contribution. A tractable framework for cooperative control that can handle decentralized, partially observable systems is missing. By the preceding motivation, we propose such a framework, as illustrated in Figure 1. Our contributions may be summarized as: (i) proposing the first discrete-time MFC model with decentralized and partially observing agents; (ii) providing accompanying approximation theorems, reformulations to a tractable single-agent Markov decision process (MDP), and novel optimality results over equi-Lipschitz policies; (iii) establishing a MARL algorithm with policy gradient guarantees; and (iv) presenting kernel-based MFC parametrizations of separate interest for general, higher-dimensional MFC. The algorithm is verified on classical collective swarming behavior models, and compared against standard MARL.
Overall, our framework steps toward tractable RL-based engineering of artificial collective behavior for large-scale multi-agent systems.

2 DECENTRALIZED PARTIALLY OBSERVABLE MFC

In this section, we introduce the motivating finite MFC-type decentralized partially observable control problem, as a special case of cooperative, general decentralized partially observable Markov decision processes (Dec-POMDPs (Bernstein et al., 2002; Oliehoek & Amato, 2016)). We then proceed to simplify in three steps of (i) taking the infinite-agent limit, (ii) relaxing partial observability during training, and (iii) correlating agent actions during training, in order to arrive at a tractable MDP with optimality guarantees, see also Figures 1 and 2. Proofs are found in Appendices D–S.

In a nutshell, Dec-POMDPs are hard, and hence we reformulate into the Dec-POMFC, for which we develop a new theory for optimality of Dec-POMFC solutions in the finite Dec-POMDP. The solution of Dec-POMFC itself also remains hard, because its MDP is not just continuous, but infinite-dimensional for continuous state-actions. The MDP is later addressed in Section 3 by (i) kernel parametrizations and (ii) approximate policy gradients on the finite Dec-POMDP (Theorem 3).

2.1 MFC-TYPE COOPERATIVE MULTI-AGENT CONTROL

To begin, we define the finite Dec-POMDP of interest, which is assumed to be MFC-type. In other words, (i) agents are permutation invariant, i.e. only the overall distribution of agent states matters, and (ii) agents observe only part of the system. We assume agents \( i \in [N] := \{1, \ldots, N\} \) endowed with random states \( x^i_t \), observations \( y^i_t \) and actions \( u^i_t \) at times \( t \in \mathcal{T} := \mathbb{N} \) from compact metric state, observation and action spaces \( \mathcal{X}, \mathcal{Y}, \mathcal{U} \) (finite or continuous). Agents depend on other agents only via the empirical mean field \( \mu^N_t := \frac{1}{N} \sum_{i \in [N]} \delta_{x^i_t} \). Policies are memoryless and shared by all agents, archetypal of collective behavior under simple rules (Hamann, 2018), and of interest for compute-constrained agents, including e.g. nano-particles or small robots. Optionally, memory and history-dependence can be integrated into the state, see Appendix E. Agents act according to a policy \( \pi \in \Pi \) from a class \( \Pi \subseteq \mathcal{P}(\mathcal{U})^{\mathcal{Y} \times \mathcal{T}} \) of policies, with spaces of probability measures \( \mathcal{P}(\cdot) \) equipped with the 1-Wasserstein metric \( W_1 \) (Villani, 2009). Starting with initial distribution \( \mu_0 \), \( x^i_0 \sim \mu_0 \), the MFC-type Dec-POMDP dynamics are

\[ y^i_t \sim P^y(y^i_t \mid x^i_t, \mu^N_t), \quad u^i_t \sim \pi_t(u^i_t \mid y^i_t), \quad x^i_{t+1} \sim P(x^i_{t+1} \mid x^i_t, u^i_t, \mu^N_t) \tag{1} \]

for all \( (i, t) \in [N] \times \mathcal{T} \), with transition kernels \( P : \mathcal{X} \times \mathcal{U} \times \mathcal{P}(\mathcal{X}) \to \mathcal{P}(\mathcal{X}) \), \( P^y : \mathcal{X} \times \mathcal{P}(\mathcal{X}) \to \mathcal{P}(\mathcal{Y}) \), objective \( J^N(\pi) = \mathbb{E}[\sum_{t \in \mathcal{T}} \gamma^t r(\mu^N_t)] \) to maximize over \( \pi \in \Pi \) under reward function \( r : \mathcal{P}(\mathcal{X}) \to \mathbb{R} \) and discount factor \( \gamma \in (0, 1) \). Results generalize to finite horizons, average per-agent rewards \( r_{\mathrm{per}} : \mathcal{X} \to \mathbb{R} \), \( r(\mu^N_t) = \int r_{\mathrm{per}} \, \mathrm{d}\mu^N_t \), and joint state-observation-action MFs via an enlarged state space. Since general Dec-POMDPs are hard (Bernstein et al., 2002), our model establishes a tractable special case of high generality.
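For intuition, a minimal simulation sketch of one step of the dynamics (1) for finite spaces; the kernels \(P\), \(P^y\) are assumed callables returning probability vectors, and the shared policy is an array (all shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def dec_pomdp_step(states, P, Py, pi_t, n_states):
    # One step of (1): empirical mean field, observations, actions, transitions.
    mu = np.bincount(states, minlength=n_states) / len(states)  # mu_t^N
    obs = np.array([rng.choice(len(Py(x, mu)), p=Py(x, mu)) for x in states])
    acts = np.array([rng.choice(pi_t.shape[1], p=pi_t[y]) for y in obs])
    nxt = np.array([rng.choice(n_states, p=P(x, u, mu))
                    for x, u in zip(states, acts)])
    return nxt, mu
```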
Standard MFC already covers a broad range of applications, e.g. see surveys for finance (Carmona, 2020) and engineering (Djehiche et al., 2017) applications, which can now be handled under partial information. In addition, many classical, inherently partially observable models are covered by MFC-type Dec-POMDPs, such as the Kuramoto or Vicsek models in Section 4, where many-agent convergence is known as propagation of chaos (Chaintron & Diez, 2022).

2.2 LIMITING MFC SYSTEM

In order to achieve tractability for large multi-agent systems, the first step is to take the infinite-agent limit. By a law of large numbers (LLN), this allows us to describe large systems only by the MF \( \mu_t \). Consider a representative agent as in (1) with states \( x_0 \sim \mu_0 \), \( x_{t+1} \sim P(x_{t+1} \mid x_t, u_t, \mu_t) \), observations \( y_t \sim P^y(y_t \mid x_t, \mu_t) \) and actions \( u_t \sim \pi_t(u_t \mid y_t) \). Then, its state probability law replaces the empirical state distribution, informally \( \mu_t = \mathcal{L}(x_t) \equiv \lim_{N \to \infty} \mu^N_t \). Looking only at the MF, we hence obtain the decentralized partially observable MFC (Dec-POMFC) system

\[ \mu_{t+1} = \mathcal{L}(x_{t+1}) = T(\mu_t, \pi_t) := \iiint P(x, u, \mu_t) \, \pi_t(\mathrm{d}u \mid y) \, P^y(\mathrm{d}y \mid x, \mu_t) \, \mu_t(\mathrm{d}x) \tag{2} \]

by deterministic transitions \( T : \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{U})^{\mathcal{Y}} \to \mathcal{P}(\mathcal{X}) \) and objective \( J(\pi) = \sum_{t=0}^{\infty} \gamma^t r(\mu_t) \).

Approximation guarantees. Under mild continuity assumptions, the Dec-POMFC model in (2) constitutes a good approximation of the large-scale MFC-type Dec-POMDP in (1) with many agents.

Assumption 1a. The transitions \( P \), \( P^y \) and rewards \( r \) are Lipschitz with constants \( L_P \), \( L_{P^y} \), \( L_r \).

Assumption 1b. The class of policies \( \Pi \) is the set of all \( L_\Pi \)-Lipschitz policies for some \( L_\Pi > 0 \), i.e. for all \( t \in \mathcal{T} \) and \( \pi \in \Pi \), we have that \( \pi_t : \mathcal{Y} \to \mathcal{P}(\mathcal{U}) \) is \( L_\Pi \)-Lipschitz. Alternatively, we may assume unrestricted policies if (i) observations only depend on an agent's state, and (ii) \( |\mathcal{X}| < \infty \).

Lipschitz continuity of the model is commonly assumed (Huang et al., 2006; Gu et al., 2021; Mondal et al., 2022), and in general at least (uniform) continuity is required: Consider a counterexample with uniform initial \( \mu_0 \) over states \( A, B \). If dynamics, observations, or rewards jump between regimes at \( \mu(A) = \mu(B) = 0.5 \), the finite system will randomly experience all regimes, while the limiting MFC experiences only the regime at \( \mu(A) = \mu(B) = 0.5 \). Meanwhile, Lipschitz policies are not only standard in the MFC literature (Pasztor et al., 2021; Mondal et al., 2022) by neural networks (NNs) (Araujo et al., 2023), but also fulfilled trivially for finite \( \mathcal{Y} \) without loss of generality (\( L_\Pi := \mathrm{diam}(\mathcal{U}) \)), and for continuous \( \mathcal{Y} \) by the kernel parametrizations in Section 3. We extend MFC approximation theorems (Gu et al., 2021; Mondal et al., 2022; Cui et al., 2023) to partial observations and compact spaces.

Theorem 1. Fix an equicontinuous family of functions \( \mathcal{F} \subseteq \mathbb{R}^{\mathcal{P}(\mathcal{X})} \). Under Assumptions 1a–1b, the MF converges in the sense of \( \sup_{\pi \in \Pi} \sup_{f \in \mathcal{F}} \mathbb{E}[|f(\mu^N_t) - f(\mu_t)|] \to 0 \) at all times \( t \in \mathcal{T} \).

The approximation rate is \( O(1/\sqrt{N}) \) for finite state-actions, using equi-Lipschitz \( \mathcal{F} \) (Appendix D). Hence, the easier Dec-POMFC simplifies otherwise hard Dec-POMDPs.
Indeed, we later show that such optimal Lipschitz Dec-POMFC policies are guaranteed to exist via closedness of joint measures under equi-Lipschitz kernels (Appendix K), see Propositions 1, 2 and Theorem 2 later.

Corollary 1. Under Assumptions 1a–1b, any optimal Dec-POMFC policy \( \pi \in \arg\max_{\pi' \in \Pi} J(\pi') \) is \( \varepsilon \)-optimal in the MFC-type Dec-POMDP, \( J^N(\pi) \geq \sup_{\pi' \in \Pi} J^N(\pi') - \varepsilon \), with \( \varepsilon \to 0 \) as \( N \to \infty \).

2.3 Rewriting policies with mean field observations

Now introducing the next system for reduction to an MDP, writing \( \bar{\mu}, \bar{\pi} \) etc., let policies depend also on \( \mu_t \), i.e. policies “observe” the mean field. While we could reason that agents might observe the MF or use filtering to estimate it (Åström, 1965), more importantly, the limiting MF is deterministic. Therefore, w.l.o.g. we obtain the decentralized mean field observable MFC (Dec-MFC) dynamics

\[ \bar{\mu}_{t+1} = T(\bar{\mu}_t, \bar{\pi}_t(\bar{\mu}_t)) := \iiint P(x, u, \bar{\mu}_t) \, \bar{\pi}_t(\mathrm{d}u \mid y, \bar{\mu}_t) \, P^y(\mathrm{d}y \mid x, \bar{\mu}_t) \, \bar{\mu}_t(\mathrm{d}x), \tag{3} \]

with shorthand \( \bar{\pi}_t(\bar{\mu}_t) = \bar{\pi}_t(\cdot \mid \cdot, \bar{\mu}_t) \), initial \( \bar{\mu}_0 = \mu_0 \) and according objective \( \bar{J}(\bar{\pi}) = \sum_{t=0}^{\infty} \gamma^t r(\bar{\mu}_t) \) to optimize over (now MF-dependent) policies \( \bar{\pi} \in \bar{\Pi} \subseteq \mathcal{P}(\mathcal{U})^{\mathcal{Y} \times \mathcal{P}(\mathcal{X}) \times \mathcal{T}} \).

Deterministic open-loop control transforms optimal Dec-MFC policies \( \bar{\pi} \in \arg\max_{\bar{\pi}' \in \bar{\Pi}} \bar{J}(\bar{\pi}') \) into optimal Dec-POMFC policies \( \pi \in \arg\max_{\pi' \in \Pi} J(\pi') \) with decentralized execution, and vice versa: For given \( \bar{\pi} \), compute the deterministic MFs \( (\bar{\mu}_0, \bar{\mu}_1, \ldots) \) via (3) and let \( \pi = \Phi(\bar{\pi}) \) by \( \pi_t(\mathrm{d}u \mid y) = \bar{\pi}_t(\mathrm{d}u \mid y, \bar{\mu}_t) \). Analogously, represent \( \pi \in \Pi \) by \( \bar{\pi} \in \bar{\Pi} \) with constant \( \bar{\pi}_t(\nu) = \pi_t \) for all \( \nu \).

Proposition 1. For any \( \bar{\pi} \in \bar{\Pi} \), define \( (\bar{\mu}_0, \bar{\mu}_1, \ldots) \) as in (3). Then, for \( \pi = \Phi(\bar{\pi}) \in \Pi \), we have \( \bar{J}(\bar{\pi}) = J(\pi) \). Inversely, for any \( \pi \in \Pi \), let \( \bar{\pi}_t(\nu) = \pi_t \) for all \( \nu \); then again \( \bar{J}(\bar{\pi}) = J(\pi) \).

Corollary 2. Optimal Dec-MFC policies \( \bar{\pi} \in \arg\max_{\bar{\pi}' \in \bar{\Pi}} \bar{J}(\bar{\pi}') \) yield optimal Dec-POMFC policies \( \Phi(\bar{\pi}) \), i.e. \( J(\Phi(\bar{\pi})) = \sup_{\pi' \in \Pi} J(\pi') \).

Knowing the initial \( \mu_0 \) is often realistic, as deployment is commonly for well-defined problems of interest. Even then, knowing \( \mu_0 \) is not strictly necessary (Section 4). In contrast to standard deterministic open-loop control, (i) agents have stochastic dynamics and observations, and (ii) agents randomize actions instead of playing a fixed trajectory, still leading to quasi-deterministic MFs by the LLN.

2.4 Reduction to Dec-MFC MDP

Lastly, we reformulate as an MDP with more tractable theory and algorithms, writing \( \hat{\mu}, \hat{\pi} \) etc. The recent MFC MDP (Pham & Wei, 2018; Carmona et al., 2019b; Gu et al., 2019) reformulates fully observable MFC as MDPs with higher-dimensional state-actions. Similarly, we reduce Dec-MFC to an MDP with joint state-observation-action distributions as its MDP actions. The Dec-MFC MDP has states \( \hat{\mu}_t \in \mathcal{P}(\mathcal{X}) \) and actions \( h_t \in \mathcal{H}(\hat{\mu}_t) \subseteq \mathcal{P}(\mathcal{X} \times \mathcal{Y} \times \mathcal{U}) \) in the set of joints \( h_t = \hat{\mu}_t \otimes P^y(\hat{\mu}_t) \otimes \tilde{\pi}_t \)
The Dec-MFC MDP has states $\hat{\mu}_t \in P(X)$ and actions $h_t \in H(\hat{\mu}_t) \subseteq P(X \times Y \times U)$ in the set of joint $h_t = \hat{\mu}_t \otimes P^y(\hat{\mu}_t) \otimes \hat{\pi}_t$. under any $L_\Pi$-Lipschitz policy $\hat{\pi}_t \in \mathcal{P}(\mathcal{U})^\mathcal{Y}$. Here, $\nu \otimes K$ is the product measure of measure $\nu$ and kernel $K$, and $\nu K$ is the measure $\nu K = \int K(\cdot | x) \nu(dx)$. For $\hat{\pi}_t \in \mathcal{P}(\mathcal{U})^\mathcal{Y}$, $\mu_{xy} \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$, we write $\mu_{xy} \otimes \hat{\pi}_t$ by letting $\hat{\pi}_t$ constant on $\mathcal{X}$. In other words, the desired joint $h_t$ results from all agents replacing the previous system’s policy $\hat{\pi}_t$ by lower-level policy $\tilde{\pi}_t$, which may be reobtained from $h_t$ (Appendix K, disintegration (Kallenberg, 2021)). Equivalently, identify $\mathcal{H}(\mu)$ with $\mu$ and classes of $\hat{\pi}_t$ yielding the same joint, and in practice we parametrize $\hat{\pi}_t$. Thus, we obtain the MDP dynamics $$h_t \sim \hat{\pi}(\hat{\mu}_t), \quad \hat{\mu}_{t+1} = \hat{T}(\hat{\mu}_t, h_t) := \iiint P(x, u, \hat{\mu}_t) h_t(dx, dy, du)$$ for Dec-MFC MDP policy $\hat{\pi} \in \hat{\Pi}$ and objective $\hat{J}(\hat{\pi}) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r(\hat{\mu}_t)]$. The Dec-MFC MDP policy $\hat{\pi}$ is "upper-level", as we sample $h_t$ from $\hat{\pi}$, to apply the lower-level policy $\tilde{\pi}_t[h_t]$ to all agents. **Guidance by mean field dependence.** Intuitively, the MF guides policy search in potentially hard, decentralized problems, and reduces to a single-agent MDP where we make some existing theory compatible. First, we formulate a dynamic programming principle (DPP), i.e. exact solutions by Bellman’s equation for the value function $V(\mu) = \sup_{\hat{\pi} \in \mathcal{H}(\mu)} r(\mu) + \gamma V(\hat{T}(\mu, h))$ (Hernández-Lerma & Lasserre, 2012). Here, a central theoretical novelty is closedness of joint measures under equi-Lipschitz policies (Appendix K). Concomitantly, we obtain optimality of stationary deterministic $\hat{\pi}$. For technical reasons, only here we assume Hilbertian $\mathcal{Y}$ (e.g. finite or Euclidean) and finite $\mathcal{U}$. **Assumption 2.** The observations $\mathcal{Y}$ are a metric subspace of a Hilbert space. Actions $\mathcal{U}$ are finite. **Theorem 2.** Under Assumptions 1a–1b and 2, there exists an optimal stationary, deterministic policy $\hat{\pi}$ for the Dec-MFC MDP, with $\hat{\pi}(\mu) \in \arg\max_{h \in \mathcal{H}(\mu)} r(\mu) + \gamma V(\hat{T}(\mu, h))$. **Decentralized execution.** Importantly, guidance by MF is only for training and not execution. An optimal upper-level policy $\hat{\pi} \in \arg\max_{\hat{\pi}' \in \hat{\Pi}} \hat{J}(\hat{\pi})$ is optimal also for the initial system, if it is deterministic, and an optimal one exists by Theorem 2. The lower-level policies $\tilde{\pi}_t \equiv \hat{\pi}_t$ are obtained by inserting the sequence of MFs $\hat{\mu}_0, \hat{\mu}_1, \ldots$ into $\hat{\pi}$, and remain non-stationary stochastic policies. **Proposition 2.** For deterministic $\hat{\pi} \in \hat{\Pi}$, let $\hat{\mu}_t$ as in (4) and $\bar{\pi} = \Psi(\hat{\pi})$ by $\bar{\pi}_t(v) = \hat{\pi}_t$ for all $v$, then $\hat{J}(\hat{\pi}) = \hat{J}(\bar{\pi})$. Inversely, for $\bar{\pi} \in \bar{\Pi}$, let $\hat{\pi}_t(v) = \nu \otimes P_y(v) \otimes \bar{\pi}_t(v)$ for all $v$, then $\hat{J}(\hat{\pi}) = \hat{J}(\bar{\pi})$. 
Note that the determinism of the upper-level policy is strictly necessary: A simple counterexample is a problem where agents should choose to aggregate in one state. If the upper-level policy randomly chooses between moving all agents to either $A$ or $B$, then a corresponding random agent policy splits agents and fails to aggregate. At the same time, randomization of agent actions remains necessary for optimality, as the problem of spreading equally would require uniformly random agent actions.

**Complexity.** Tractability of multi-agent control heavily depends on the information structure (Mahajan et al., 2012). General Dec-POMDPs have doubly-exponential complexity (NEXP, Bernstein et al. (2002)) and are harder than fully observable control (PSPACE, Papadimitriou & Tsitsiklis (1987)). In contrast, Dec-POMFC surprisingly imposes little additional complexity over standard MFC, as the MFC MDP remains deterministic in the absence of common noise correlating agents (Carmona et al., 2016). An analysis with common noise is possible, e.g. if observing the mean field, but is out of scope.

### 3 Dec-POMFC Policy Gradient Methods

All that remains is to solve Dec-MFC MDPs. As we obtain continuous Dec-MFC MDP states and actions even for finite $\mathcal{X}, \mathcal{Y}, \mathcal{U}$, and infinite-dimensional ones for continuous $\mathcal{X}, \mathcal{Y}, \mathcal{U}$, a value-based approach can be hard. Our policy gradient (PG) approach allows finding simple policies for collective behavior, with emergence of global intelligent behavior described by rewards $r$, under arbitrary (Lipschitz) policies. For generality, we use NN upper-level and kernel lower-level policies. While lower-level Lipschitz NN policies (Araujo et al., 2023) could be considered akin to hypernetworks (Ha et al., 2016), the resulting distributions over NN parameters as MDP actions are too high-dimensional and failed in our experiments. We directly solve finite-agent MFC-type Dec-POMDPs by solving the Dec-MFC MDP in the background. Indeed, the theoretical optimality of Dec-MFC MDP solutions is guaranteed over Lipschitz policies in $\Pi$.

**Corollary 3.** Under Assumptions 1a–1b, a deterministic Dec-MFC solution $\hat{\pi} \in \arg\max_{\hat{\pi}'} \hat{J}(\hat{\pi}')$ is $\epsilon$-optimal in the Dec-POMDP, $J^N(\Phi(\Psi(\hat{\pi}))) \geq \sup_{\pi' \in \Pi} J^N(\pi') - \epsilon$, with $\epsilon \to 0$ as $N \to \infty$.

Histogram vs. kernel parametrizations. Except for linear-quadratic algorithms (Wang et al., 2021; Fu et al., 2019; Carmona et al., 2019a), the only approach to learning MFC in continuous spaces $\mathcal{X} \subseteq \mathbb{R}^n, n \in \mathbb{N}$ (and here $\mathcal{Y}$) is by partitioning and "discretizing" (Carmona et al., 2019b; Gu et al., 2021). Unfortunately, partitions fail Lipschitzness and hence approximation guarantees, even in standard MFC. Instead, we use kernel representations for MFs $\mu_t^N$ and lower-level policies $\tilde{\pi}_t$. We represent $\mathcal{P}(\mathcal{X})$-valued MDP states $\mu_t^N$ not by counting agents in each bin, but instead mollify around each center $x_b \in \mathcal{X}$ of $M_{\mathcal{X}}$ bins $b \in [M_{\mathcal{X}}]$ using kernels. The result is Lipschitz and approximates histograms arbitrarily well (Miculescu, 2000, Theorem 1).
Hence, we obtain input logits \( I_b = \int \kappa(x_b, \cdot) \, \mathrm{d}\mu_t^N = \frac{1}{N} \sum_{i \in [N]} \kappa(x_b, x_t^i) \) for some kernel \( \kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R} \) and \( b \in [M_{\mathcal{X}}] \). Output logits constitute the mean and log-standard deviation of a diagonal Gaussian over parameter representations \( \xi \in \Xi \) of \( \tilde{\pi}_t \). We obtain Lipschitz \( \tilde{\pi}_t \) by representing \( \tilde{\pi}_t \) via \( M_{\mathcal{Y}} \) points \( y_b \in \mathcal{Y} \) such that \( \tilde{\pi}_t(u \mid y) = \sum_{b \in [M_{\mathcal{Y}}]} \kappa(y_b, y) p_b(u) / \sum_{b \in [M_{\mathcal{Y}}]} \kappa(y_b, y) \). Here, we consider \( L_\lambda \)-Lipschitz maps \( \lambda_b \) from parameters \( \xi \in \Xi \) to distributions \( p_b = \lambda_b(\xi) \in \mathcal{P}(\mathcal{U}) \) with compact parameter space \( \Xi \), and as kernels choose RBF kernels \( \kappa(x, y) = \exp(-\|x - y\|^2 / (2\sigma^2)) \) with some bandwidth \( \sigma^2 > 0 \).

**Proposition 3.** Under RBF kernels \( \kappa \), for any \( \xi \) and Euclidean \( \mathcal{Y} \), lower-level policies \( \Lambda(\xi)(\cdot \mid y) := \sum_{b \in [M_{\mathcal{Y}}]} \kappa(y_b, y) \lambda_b(\xi) / \sum_{b \in [M_{\mathcal{Y}}]} \kappa(y_b, y) \) are \( L_\Pi \)-Lipschitz in \( y \) as in Assumption 1b whenever \( \sigma^2 \exp^2(-\frac{1}{2\sigma^2} \mathrm{diam}(\mathcal{Y})^2) \geq \frac{1}{L_\Pi} \mathrm{diam}(\mathcal{Y}) \, \mathrm{diam}(\mathcal{U}) \max_{y \in \mathcal{Y}} \|y\| \), and such \( \sigma^2 > 0 \) always exists.

Proposition 3 ensures Assumption 1b if needed. To achieve optimality by Corollary 3, deterministic policies commonly result from convergence of stochastic PGs, taking mean actions, or are guaranteed by deterministic PGs (Silver et al., 2014; Lillicrap et al., 2016). Beyond allowing for (i) Lipschitz guarantees and (ii) finer control over agent actions, another advantage of kernels is (iii) the improved complexity over histograms. Even a histogram with only 2 bins per dimension requires \( 2^d \) bins in \( d \)-dimensional spaces, while kernel representations may place e.g. 2 points per dimension, improving upon the otherwise necessarily exponential complexity, see also Appendix A for empirical support.

Direct multi-agent reinforcement learning algorithm. Applying RL directly to the Dec-MFC MDP would be satisfactory only under known MFC models. Importantly, (i) we do not always have access to the model, and (ii) even if we do, parametrizing MFs in arbitrary compact \( \mathcal{X} \) is hard. Instead, it is more practical and tractable to train on a finite system. Our direct MARL approach hence trains on a finite \( N \)-agent MFC-type Dec-POMDP of interest, in a model-free manner. In order to exploit the underlying MDP, our algorithm assumes during training that (i) the MF is observed, and (ii) agents can correlate actions (e.g. centrally, or by sharing seeds). Therefore, the finite system (1) is adjusted for training by correlating agent actions on a single centrally sampled lower-level policy \( \tilde{\pi}_t \). Now write \( \tilde{\pi}_\theta(\xi_t \mid \tilde{\mu}_t^N) \) as a density over parameters \( \xi_t \in \Xi \) under a base measure (discrete, Lebesgue). Substituting \( \xi_t \) as actions parametrizing \( h_t \) in the MDP (4), e.g.
by using RBF kernels, yields the centralized training system as seen in Figure 1 for stationary policy \( \tilde{\pi}_\theta \) parametrized by \( \theta \),

\[ \tilde{\pi}_t = \Lambda(\tilde{\xi}_t), \quad \tilde{\xi}_t \sim \tilde{\pi}_\theta(\tilde{\mu}_t^N), \]
\[ \tilde{y}_t^i \sim P^y(\tilde{y}_t^i | \tilde{x}_t^i, \tilde{\mu}_t^N), \quad \tilde{u}_t^i \sim \tilde{\pi}_t(\tilde{u}_t^i | \tilde{y}_t^i), \quad \tilde{x}_{t+1}^i \sim P(\tilde{x}_{t+1}^i | \tilde{x}_t^i, \tilde{u}_t^i, \tilde{\mu}_t^N), \quad \forall i \in [N]. \quad (5) \]

**Policy gradient approximation.** Since we train on a finite system, it is not immediately clear whether centralized training really yields the PG for the underlying Dec-MFC MDP, also in existing literature for learning MFC. We will show this practically relevant fact up to an approximation. The general PG for stationary \( \tilde{\pi}_\theta \) (Sutton et al., 1999; Peters & Schaal, 2008) is

\[ \nabla_\theta J(\tilde{\pi}_\theta) = (1 - \gamma)^{-1} \mathbb{E}_{\mu \sim d_\theta, \xi \sim \tilde{\pi}_\theta(\mu)} [Q^\theta(\mu, \xi) \nabla_\theta \log \tilde{\pi}_\theta(\xi | \mu)] \]

with \( Q^\theta(\mu, \xi) = \mathbb{E}[\sum_{t=0}^\infty \gamma^t r(\tilde{\mu}_t) | \tilde{\mu}_0 = \mu, \xi_0 = \xi] \) under parametrized actions \( \xi_t \) in (4), and using sums \( d_\theta = (1 - \gamma) \sum_{t \in T} \gamma^t L_{\tilde{\pi}_\theta}(\tilde{\mu}_t) \) of laws of \( \tilde{\mu}_t \) under \( \tilde{\pi}_\theta \). Our approximation motivates MFC for MARL by showing that the underlying background Dec-MFC MDP is approximately solved under Lipschitz parametrizations, e.g. we normalize parameters \( \xi \) to finite action probabilities, or use bounded diagonal Gaussian parameters.

**Assumption 3.** The policy \( \tilde{\pi}_\theta(\xi | \mu) \) and its log-gradient \( \nabla_\theta \log \tilde{\pi}_\theta(\xi | \mu) \) are \( L_{\Pi} \)-\( L_{\nabla \Pi} \)-Lipschitz in \( \mu \) and \( \xi \) (or alternatively in \( \mu \) for any \( \xi \), and uniformly bounded). The parameter-to-distribution map is \( \Lambda(\xi)(\cdot | y) := \sum_b \kappa(y_b, y)\lambda_b(\xi)(\cdot)/\sum_b \kappa(y_b, y) \), with kernels \( \kappa \) and \( L_\lambda \)-Lipschitz \( \lambda_b : \Xi \to P(U) \).

---

1Existing Q-Learning with kernel regression (Gu et al., 2021) is for finite states \( \mathcal{X} \) with kernels on \( P(\mathcal{X}) \), and learns on the MFC MDP. We allow continuous \( \mathcal{Y} \) by kernels on \( \mathcal{Y} \) itself, and learn on the finite-agent system.

**Algorithm 1** Dec-POMFPPO (during centralized training)
1: **for** iteration $n = 1, 2, \ldots$ **do**
2: &nbsp;&nbsp;**for** time $t = 0, \ldots, B_{\text{len}} - 1$ **do**
3: &nbsp;&nbsp;&nbsp;&nbsp;Sample central Dec-MFC MDP action $\hat{\pi}_t = \Lambda(\xi_t)$, $\xi_t \sim \hat{\pi}^\theta(\hat{\mu}_t^N)$.
4: &nbsp;&nbsp;&nbsp;&nbsp;**for** agent $i = 1, \ldots, N$ **do**
5: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Sample per-agent action $\hat{u}_t^i \sim \hat{\pi}_t(\hat{u}_t^i | \hat{y}_t^i)$ for observation $\hat{y}_t^i$.
6: &nbsp;&nbsp;&nbsp;&nbsp;**end for**
7: &nbsp;&nbsp;&nbsp;&nbsp;Perform actions, observe reward $r(\hat{\mu}_t^N)$, next MF $\hat{\mu}_{t+1}^N$, termination flag $d_{t+1} \in \{0, 1\}$.
8: &nbsp;&nbsp;**end for**
9: &nbsp;&nbsp;**for** updates $i = 1, \ldots, N_{\text{PPO}}$ **do**
10: &nbsp;&nbsp;&nbsp;&nbsp;Sample mini-batch $b$, $|b| = b_{\text{len}}$ from data $B := ((\hat{\mu}_t^N, \xi_t, r_t^N, d_{t+1}, \hat{\mu}_{t+1}^N))_{t \geq 0}$.
11: &nbsp;&nbsp;&nbsp;&nbsp;Update policy $\hat{\pi}^\theta$ via PPO loss $\nabla_\theta L_\theta$ on $b$, using GAE (Schulman et al., 2016).
12: &nbsp;&nbsp;&nbsp;&nbsp;Update critic $V^\theta$ via critic $L_2$-loss $\nabla_\theta L_\theta$ on $b$.
13: &nbsp;&nbsp;**end for**
14: **end for**
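To make the kernel parametrization concrete, the following is a minimal sketch of the two kernel operations Algorithm 1 relies on: computing the input logits \( I_b \) from agent states, and evaluating the lower-level policy mixture \( \Lambda(\xi)(\cdot \mid y) \). The spaces, sizes, and the per-center softmax choice for \( \lambda_b \) are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def rbf(a, b, sigma2=0.05):
    """RBF kernel kappa(a, b) = exp(-||a - b||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2.0 * sigma2))

# Illustrative setup: 1-D spaces X = Y = [0, 1] with M_X = M_Y = 5 kernel centers.
x_centers = np.linspace(0.0, 1.0, 5)[:, None]  # centers x_b in X
y_centers = np.linspace(0.0, 1.0, 5)[:, None]  # centers y_b in Y

def mean_field_logits(agent_states):
    """Input logits I_b = (1/N) * sum_i kappa(x_b, x_t^i): a Lipschitz kernel
    representation of the empirical mean field, fed to the upper-level NN."""
    return np.array([rbf(agent_states, xb).mean() for xb in x_centers])

def lower_level_policy(xi, y):
    """Lambda(xi)(. | y) = sum_b kappa(y_b, y) lambda_b(xi) / sum_b kappa(y_b, y),
    with lambda_b here a per-center softmax over actions (an assumption)."""
    p_b = np.exp(xi) / np.exp(xi).sum(axis=-1, keepdims=True)  # p_b = lambda_b(xi)
    w = rbf(y_centers, np.array([y]))                          # weights kappa(y_b, y)
    return (w[:, None] * p_b).sum(axis=0) / w.sum()

# One decision step for N = 200 agents and 3 discrete actions:
states = np.random.rand(200, 1)        # agent states x_t^i
logits = mean_field_logits(states)     # MDP state fed to the upper-level policy
xi = np.random.randn(5, 3)             # stand-in for xi_t sampled from pi^theta(logits)
probs = lower_level_policy(xi, y=0.3)  # shared action distribution at observation y
action = np.random.choice(3, p=probs)  # per-agent action u_t^i
```

Note how the kernel weights make both maps Lipschitz in agent positions and observations, which is exactly the property Proposition 3 exploits.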
**Theorem 3.** Centralized training on system (5) approximates the true gradient of the underlying Dec-MFC MDP, i.e. under RBF kernels $\kappa$ as in Proposition 3, Assumptions 1a–1b and 3, as $N \to \infty$,

$$\left\| (1 - \gamma)^{-1} \mathbb{E}_{\mu \sim d_{\hat{\pi}^\theta}, \xi \sim \hat{\pi}^\theta(\mu)} \left[ \tilde{Q}^\theta(\mu, \xi) \nabla_\theta \log \hat{\pi}^\theta(\xi | \mu) \right] - \nabla_\theta J(\hat{\pi}^\theta) \right\| \to 0$$

with $d_{\hat{\pi}^\theta} = (1 - \gamma) \sum_{t \in T} \gamma^t L_{\hat{\pi}^\theta}(\hat{\mu}_t^N)$ and $\tilde{Q}^\theta(\mu, \xi) = \mathbb{E} \left[ \sum_{t=0}^\infty \gamma^t r(\hat{\mu}_t^N) \mid \mu_0 = \mu, \xi_0 = \xi \right]$.

The value function $\tilde{Q}^\theta$ in the finite system is then substituted in actor-critic manner by on-policy and critic estimates. The Lipschitz conditions of $\hat{\pi}^\theta$ in Assumption 3 are fulfilled by Lipschitz NNs (Pasztor et al., 2021; Mondal et al., 2022; Araujo et al., 2023) and our parametrizations. The approximation is novel, building a foundation for MARL via MFC directly on a finite MARL problem. Our results also apply to fully observable MFC by $y_t = x_t$. Though gradient estimates allow convergence guarantees in finite MDPs (e.g. Qu et al. (2020, Theorem 5)), Dec-MFC MDP state-actions are always non-finite. In practice, we use the empirically more efficient proximal policy optimization (PPO, Schulman et al. (2017); Yu et al. (2022)) to obtain the decentralized partially observable mean field PPO algorithm (Dec-POMFPPO, Algorithm 1). By Theorem 3, we may learn directly on the MFC-type Dec-POMDP system (1). During training, the algorithm (i) assumes to observe the MF, and (ii) samples only one centralized $h_t$. Knowledge of the MF during training aligns our framework with the popular centralized training, decentralized execution (CTDE) paradigm. During execution, decentralized policies suffice for near-optimality by Corollary 3 without agents knowing the MF or coordinating centrally. Decentralized training can also be achieved if the MF is observable and all agents use the same seed to correlate their actions.

### 4 Evaluation

In this section, we empirically evaluate our algorithm, comparing against independent and multi-agent PPO (IPPO, MAPPO) with state-of-the-art performance (Yu et al., 2022; Papoudakis et al., 2021). For comparison, we share hyperparameters and architectures between algorithms, see Appendices A–C.

**Problems.** In the Aggregation problem we consider a typical continuous single integrator model, commonly used in the study of swarm robotics (Soysal & Sahin, 2005; Bahgeci & Sahin, 2005). Agents observe their own position noisily and should aggregate. The classical Kuramoto model is used to study synchronization of coupled oscillators, finding application not only in physics, including quantum computation and laser arrays (Acebrón et al., 2005), but also in diverse biological systems, such as neuroscience and pattern formation in self-organizing systems (Breakspear et al., 2010; Kruk et al., 2020). Here, via partial observability, we consider a version where each oscillator can see the distribution of relative phases of its neighbors. Finally, we implement the Kuramoto model on a random geometric graph (e.g. Diaz-Guilera et al., 2009) via omitting movement in its independent generalization, the Vicsek model (Vicsek et al., 1995; Vicsek & Zafeiris, 2012). Agents $j$ have two-dimensional position $p_j^t$ and current headings $\phi_j^t$, to be controlled by their actions.
The key metric of interest for both Kuramoto and Vicsek is polarization via the polar order parameter $R = \frac{1}{N} \left| \sum_j \exp(i \phi_j^t) \right|$. Here, $R$ ranges from 0 (fully unsynchronized) to 1 (perfect alignment of agents). Experimentally, we consider various environments, such as the torus, Möbius strip, projective plane and Klein bottle. Importantly, agents only observe relative headings of others.

Figure 3: Dec-POMFPPO training curves (episode return) with shaded standard deviation over 3 seeds for $N = 200$ in (a) Aggregation; Vicsek on a (b): torus; (c): Möbius strip; (d): projective plane; (e): Klein bottle; and (f) Kuramoto on a torus.

Figure 4: Training curves (episode return) with shaded standard deviation over 3 seeds and $N = 200$, in (a) Aggregation (box), (b) Vicsek (torus), (c) Kuramoto (torus). For comparison, we also plot the best return averaged over 3 seeds for Dec-POMFPPO in Figure 3 (MF).

**Training results.** In Figure 3 it is evident that the training process of MFC for many agents is relatively stable, guided by the MF and the reduction to single-agent RL. In Appendix A, we also see similar results with significantly fewer agents, comparable to the results obtained with a larger number of agents. This observation highlights that the training procedure yields satisfactory outcomes, even in scenarios where the mean field approximation may not yet be perfectly exact. These findings underscore the generality of the proposed framework and its ability to adapt across different regimes. On the same note, we see by comparison with Figure 4 that our method is usually on par with state-of-the-art IPPO and MAPPO for many agents, e.g. here $N = 200$.

**Verification of theory.** In Figure 5, as the number of agents rises, the performance quickly tends to its limit, i.e. the objective converges, supporting Theorem 1 and Corollary 1, as well as applicability to arbitrarily many agents. Analogously, conducting open-loop experiments on our closed-loop trained system in Figure 6 demonstrates the robust generality of learned collective behavior with respect to the randomly sampled initial agent states, supporting Theorem 3 and Corollary 2.

**Qualitative analysis.** In the Vicsek model, as seen exemplarily in Figure 6 and Appendix A, the algorithm learns to align in various topological spaces. In all considered topologies, the polar order parameter surpasses 0.9, with the torus system even reaching a value close to 0.99. As for the angles at different iterations of the training process, as depicted in Figure 7, the algorithm gradually learns to form a concentrated cluster of angles. Note that the cluster center angle is not fixed, but rather changes over time. This behavior cannot be observed in the classical Vicsek model, though extensions using more sophisticated equations of motion for angles have reported similar results (Kruk et al., 2020). For more details and further experiments or visualizations, we refer the reader to Appendices A–C. Figure 7 and additional figures, with similar results for other topologies in Appendix A, e.g. Figures 18–22, illustrate the qualitative behavior observed across the different manifolds. Agents on the continuous torus demonstrate no preference for a specific direction across consecutive training runs. Conversely, agents trained on other manifolds exhibit a tendency to avoid the direction that leads to an angle flip when crossing the corresponding boundary.
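As a concrete reference for the alignment results above, the polar order parameter can be computed in a few lines; a minimal sketch, assuming headings are given as a NumPy array of angles in radians:

```python
import numpy as np

def polar_order(headings):
    """Polar order parameter R = |(1/N) * sum_j exp(i * phi_j)|:
    R ~ 0 for fully unsynchronized headings, R ~ 1 for perfect alignment."""
    return float(np.abs(np.exp(1j * np.asarray(headings)).mean()))

print(polar_order(np.full(200, 0.7)))                     # aligned agents -> 1.0
print(polar_order(np.random.uniform(0, 2 * np.pi, 200)))  # disordered -> close to 0
```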
Especially for the projective plane topology, the agents tend to aggregate more while aligning, even without adding another reward for aggregation. For Aggregation in Figure 8, we also find successful aggregation of agents in the middle.

Figure 5: The performance of the best of 3 Dec-POMFPPO policies transferred to $N$-agent systems (in blue, error bars for 95% confidence interval), averaged over 50 episodes, and compared against the performance in the training system (in red). Problems (a)-(f) and training are as in Figure 3.

Figure 6: A, B: For the Vicsek (torus) problem with forward velocity control, the open-loop behavior (B) shows little difference in performance of agents (rods, color indicating heading) over the closed-loop behavior (A). C: Visualization of agents (triangles) under the Vicsek model on the torus.

Figure 7: A: Agent angle alignment in the Vicsek model on the torus, plotted as density over time; B: Alignment of agents in the Vicsek model on the projective plane, as in Figure 6.

In practice, one may define any objective of interest. For example, we can achieve misalignment in Figure 8, resulting in polar order parameters on the order of magnitude of $10^{-2}$, and showing the generality of the framework.

**Additional experiments.** Some other experiments are discussed in Appendix A, including the generalization of our learned policies to different starting conditions, a comparison of the Vicsek model trained or transferred to different numbers of agents, additional interpretative visualizations, similar success for the Kuramoto model, and a favorable comparison between RBFs and histograms for higher dimensions, showing the generality of the framework and supporting our claims.

### 5 Conclusion and Discussion

Our framework provides a novel methodology for engineering artificial collective behavior in a rigorous and tractable manner, whereas existing scalable learning frameworks often focus on competitive or fully observable models (Guo et al., 2023; Zheng et al., 2018). We hope our work opens up new applications of partially-observable swarm systems. Our method could be of interest due to (i) its theoretical optimality guarantees while covering a large class of problems, and (ii) its surprising simplicity in rigorously reducing complex Dec-POMDPs to MDPs, with the same complexity as MDPs from fully observable MFC, thus allowing analysis of Dec-POMDPs via a tractable MDP. The current theory remains limited to non-stochastic MFs, which in the future could be analyzed for stochastic MFs via common noise (Perrin et al., 2020; Cui et al., 2023; Dayanikli et al., 2023). Further, sample efficiency could be analyzed (Huang et al., 2023), and parametrizations for history-dependent policies using more general NNs could be considered, e.g. via hypernetworks (Ha et al., 2016; Li et al., 2023). Lastly, extending the framework to consider additional practical constraints and sparser interactions, such as physical collisions or via graphical decompositions, may be fruitful.

Figure 8: A: Qualitative behavior for misalignment of agents in the Vicsek (torus) problem. B: The two-dimensional Aggregation problem, with agent distances to mean as colors.

ACKNOWLEDGMENTS

This work has been co-funded by the LOEWE initiative (Hesse, Germany) within the emergenCITY center and the FlowForLife project, and the Hessian Ministry of Science and the Arts (HMWK) within the projects "The Third Wave of Artificial Intelligence - 3AI" and hessian.AI.
The authors acknowledge the Lichtenberg high performance computing cluster of the TU Darmstadt for providing computational facilities for the calculations of this research. REFERENCES Juan A. Acebrón, L. L. Bonilla, Conrad J. Pérez, Félix Ritort, and Renato Spigler. The Kuramoto model: A simple paradigm for synchronization phenomena. *Rev. Mod. Phys.*, 77(1):137–185, 2005. Alexandre Araujo, Aaron J Havens, Blaise Delattre, Alexandre Allauzen, and Bin Hu. A unified algebraic perspective on Lipschitz neural networks. In *Proc. ICLR*, pp. 1–15, 2023. Karl Johan Åström. Optimal control of Markov processes with incomplete state information. *J. Math. Anal. Appl.*, 10(1):174–205, 1965. Erkin Bahgeçi and Erol Sahin. Evolving aggregation behaviors for swarm robotic systems: A systematic case study. In *IEEE Swarm Intell. Symp.*, pp. 333–340, 2005. Lucas Barberis. Emergence of a single cluster in Vicsek’s model at very low noise. *Phys. Rev. E*, 98(3), 2017. Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of Markov decision processes. *Math. Oper. Res.*, 27(4):819–840, 2002. Patrick Billingsley. *Convergence of probability measures*. John Wiley & Sons, 2013. Michael Breakspear, Stewart Heitmann, and Andreas Daffertshofer. Generative models of cortical oscillations: neurobiological implications of the Kuramoto model. *Front. Hum. Neurosci.*, 4:190, 2010. René Carmona. Applications of mean field games in financial engineering and economic theory. *arXiv:2012.05237*, 2020. René Carmona, François Delarue, and Daniel Lacker. Mean field games with common noise. *The Annals of Probability*, 44(6):3740–3803, 2016. René Carmona, Mathieu Laurière, and Zongjun Tan. Linear-quadratic mean-field reinforcement learning: convergence of policy gradient methods. *arXiv:1910.04295*, 2019a. René Carmona, Mathieu Laurière, and Zongjun Tan. Model-free mean-field reinforcement learning: mean-field MDP and mean-field Q-learning. *arXiv:1910.12802*, 2019b. René Carmona, Quentin Cormier, and H Mete Soner. Synchronization in a Kuramoto mean field game. *arXiv:2210.12912*, 2022. Louis-Pierre Chaintron and Antoine Diez. Propagation of chaos: A review of models, methods and applications. I. models and methods. *Kinet. Relat. Models*, 15(6):895–1015, 2022. Frank Cichos, Kristian Gustavsson, Bernhard Mehlig, and Giovanni Volpe. Machine learning for active matter. *Nat. Mach. Intell.*, 2(2):94–103, 2020. Ştefan Cobzaş, Radu Miculescu, and Adriana Nicolae. *Lipschitz functions*. Springer, 2019. Kai Cui, Christian Fabian, and Heinz Koepl. Multi-agent reinforcement learning via mean field control: Common noise, major agents and approximation properties. *arXiv:2303.10665*, 2023. Gokce Dayanikli, Mathieu Laurière, and Jiacheng Zhang. Deep learning for population-dependent controls in mean field control problems. *arXiv:2306.04788*, 2023.
JAfGlmRBTU
The introduction raises several points that existing models fail at, e.g., “the parse tree could switch among multiple reasonable forms even given a single scene”, i.e., “correct” parsing is context-dependent. While this is true, the proposed model also does not deal with this (or does it?)
REPRESENTING PART-WHOLE HIERARCHY WITH COORDINATED SYNCHRONY IN NEURAL NETWORKS

Anonymous authors
Paper under double-blind review

ABSTRACT

Human vision flexibly extracts part-whole hierarchy from visual scenes. However, how a neural network with a fixed architecture can parse an image into a part-whole hierarchy that potentially has a different structure for each image remains a difficult question. This paper presents a new framework to represent the part-whole hierarchy by hierarchical neuronal synchrony: (1) Neurons are dynamically synchronized into neuronal groups (of different timescales) to temporarily represent each object (wholes, parts, sub-parts, etc.) as the nodes of the parse tree. (2) The coordinated temporal relationship among neuronal groups represents the structure (edges) of the parse tree. Further, we develop a simple two-level hybrid model inspired by the visual cortical circuit, the Composer, which is able to dynamically achieve the emergent coordinated synchronous states given an image. The synchrony states are gradually created by iterative top-down prediction / bottom-up integration between levels and inside each level. For evaluation, four synthetic datasets and three quantitative metrics are invented. The quantitative and qualitative results show that the Composer is able to parse a range of scenes of different complexities through dynamically formed neuronal synchrony. It is promising that the systematic framework proposed in this paper, from representation and implementation to evaluation, sheds light on developing human-like vision in neural network models.

1 INTRODUCTION

Representing hierarchical structure is a key problem for neural networks. While there is strong evidence in psychology that people parse a visual scene into part-whole hierarchies with many different levels (e.g. scene level, object level, part level, sub-part level, sub-sub-part level, etc.) (Hinton, 1979; Kahneman et al., 1992; Thompson, 1980), the representation and manipulation of part-whole hierarchical information in fixed hardware is a profound challenge for artificial neural networks (Hinton, 2021). On the other hand, constructing such a part-whole hierarchy enables neural networks to understand visual scenes in a compositional way like humans (Hinton, 2021), and facilitates the interpretability of the network representation (Garau et al., 2022). The part-whole hierarchy is an inclusion relationship and is conceptually organized as a parse tree, since each part object (child node) should belong to a single whole object (parent node) (Hinton, 2021). Such compositional structure of multiple simultaneously presented objects of different levels profoundly complicates the problem of visual perception (Fig 1b). More importantly, the structure of the parse tree could switch among multiple reasonable forms even given a single scene and is likely to dynamically reform itself on the fly when the scene changes. Such a dynamical and multi-stable nature challenges neural networks of fixed architecture (Hinton, 2021). Moreover, it renders simple feedforward networks (Deng et al., 2021) and supervised learning unlikely to ultimately conquer the problem (Greff et al., 2020). The challenge of the problem can be decomposed into three aspects: First (Nodes), how to dynamically group information that is distributed in neural networks to form each object representation that potentially acts as a tree node? Second (Levels), how to distinguish node representations into the whole level and the part level?
Third (Edges), how to specify the relationship among whole-object representations and part-object representations as the edges in the parse tree. It is notable that when parsing different images, the three aspects should be achieved while keeping the network structure unchanged, e.g. the number of neurons. To solve the problem, we seek inspiration from the brain: First (Nodes), neuronal synchrony is exploited to dynamically group distributed information into object representations (von der Malsburg, 1994; Singer, 2007) in a wide range of regions of the brain, so-called cell assemblies (Palm, 1982; Buzsáki, 2010; La Camera et al., 2019; Miehl et al., 2022) (Fig 1d, colored shadows). Second (Levels), the neocortex is spatially organized into hierarchical levels of columns (V1, V2, etc., Fig 2g), potentially corresponding to the levels of the part-whole hierarchy (Gross et al., 1972; Gross, 2002; Tsao et al., 2006; Hinton, 2021). In other words, the level is explicitly distinguished by spatial separation (Fig 1d, different colors along the y-axis, and Fig 2g). Third (Edges), the temporal structure of neuronal activity (cell assemblies and neuronal oscillations) is organized into a 'timescale hierarchy' (Manea et al., 2021), of different frequency bands (Buzsáki & Draguhn, 2004), along the cortical hierarchy (Mahjoory et al., 2019). Moreover, the timescale hierarchy (from milliseconds to seconds) is related to information of hierarchical levels (e.g. words to sentences) in the neocortex, and the transient nestedness (coordination) of different timescales indicates the presence of consciousness (Northoff, 2017). Therefore, the nested relationship among parts and wholes (Fig 1b) could be represented as the nestedness among synchronized neuronal groups of hierarchical timescales in neural networks (Fig 1d).

In this paper, we systematically study how to represent the part-whole hierarchy in neural networks through coordinated synchrony, from representation (framework) to implementation (model) and to evaluation (dataset and metric). We first develop a novel framework to deal with the part-whole hierarchy at the representation level, where each object is represented as a synchronized neuronal group and the hierarchical relationship among objects is represented as the nestedness (coordination) among neuronal groups of different timescales. Then, at the implementation level, we provide a cortical-circuit-inspired model, called the Composer (short for COrtical-like eMergence of Part-whOle relationShip through nEuronal synchRony) to show how the hierarchical synchrony state emerges given an input image. The Composer integrates spike timing dynamics into a deep learning framework to exploit the core advances of both sides. More specifically, the Composer consists of two levels of columns and each column contains a visible spike coding space (SCS), which is delay coupled by a denoising autoencoder (DAE) (Vincent et al., 2008). The coordinated synchrony is reached through iterative top-down prediction and bottom-up integration within each level and across different levels. In order to understand the representation of the Composer, four synthetic datasets of different complexities and three metrics to measure different aspects of the part-whole representation are invented to explicitly evaluate the emergent neuronal activity. Quantitative results and qualitative visualization confirm the validity of the Composer and the plausibility of the framework.
Lastly, for comparison, we show that the Composer outperforms the SOTA, the Agglomerator (Garau et al., 2022), when the representation for the part-whole hierarchy is explicitly evaluated. The main contributions are as follows: (1) We developed a bio-plausible framework to deal with the part-whole hierarchy at the representation level. (2) We developed the Composer, integrating both deep learning (denoising autoencoder and self-supervised learning) and neuroscience (spiking code, dendritic computation, and rhythmic dynamics) to show how the coherent state emerges to represent the part-whole relationship. (3) We invented four synthetic datasets and three quantitative metrics to explicitly interpret the learned representation, which also shows that the Composer outperforms the recent state-of-the-art model.

2 FRAMEWORK AND INTUITIONS

In this section, we develop the framework of how to represent the part-whole hierarchy with synchrony and provide intuitions to understand, step by step, how the neuronal synchrony emerges.

2.1 REPRESENTATION

Firstly, neurons have receptive fields that are selective for different features of objects. Secondly, the set of neurons responsive to features of the same object are dynamically synchronized into a neuronal group to form the object representation (Fig 1d, y-axis and colored shadows). Third, neurons are explicitly distinguished into columns of different levels (parts/wholes, Fig 2g) and each column contains neurons that represent the objects at the respective level. Columns in higher levels have longer timescales, so that synchrony events are much sparser (Fig 1d). Fourth, the temporal inclusion relation (nestedness) of neuronal groups represents the inclusion relation among parts and wholes (see nested colored shadows in Fig 1d and colored nodes in Fig 1b), so that the part-whole hierarchy is represented as coordinated neuronal synchrony.

Figure 1: (a) The visual scene of a house. (b) The mental parse tree of the visual scene. (c)(d) Representing the parse tree as emergent neuronal synchrony. Synchronized neuronal groups are indicated by colored shadows in (d). Colors stand for levels in both (b) and (d). Neurons are indicated by selectivity along the y-axis in (c),(d).

Figure 2: How coordinated neuronal synchrony emerges. (a) Denoising autoencoder (DAE). (b) Legend for (c)(d). (c) Building up attractor dynamics by DAE (top and middle), which results in stationary population activity (bottom). (d) Building up metastable rhythmic dynamics when spiking neurons and delay coupling show up. (e) The phase space (left) and population activity (right) of the whole system. Attractive basin is not shown for clarity. (f) General architecture of the Composer, which is highly inspired by the visual cortical circuit shown in (g). (h) Shared legend of (f)(g).

2.2 INTUITIONS OF THE MECHANISM

But how could the coordinated temporal structure emerge in a neural network given a visual scene? The intuition starts from the close relationship between the denoising autoencoder and attractor dynamics. As shown in Fig 2a, a denoising autoencoder (DAE) denoises noisy patterns. If it is exploited to parameterize a recurrent neural network so that $x_{t+1} = DAE(x_t)$, a noisy pattern $x_0$ in the neighbourhood of an original pattern $x$ will be 'attracted' to the original pattern by the recurrent dynamics (Fig 2c), where $x_t$ is the network state at time step $t$. Therefore, a large number of attractors are explicitly embedded into the network dynamics by the DAE.
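This recurrence is easy to write down explicitly. Below is a minimal PyTorch sketch of using a DAE as a recurrent map; the architecture and sizes are illustrative assumptions, and the DAE would first be trained to denoise the stored patterns.

```python
import torch
import torch.nn as nn

class DAE(nn.Module):
    """A small denoising autoencoder: encoder F followed by decoder G."""
    def __init__(self, dim=64, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

dae = DAE()
# Training step (sketch): reconstruct a clean pattern x from a corrupted input,
# e.g. loss = ((dae(x + noise) - x) ** 2).mean(), then backpropagate.

# Attractor dynamics: iterating the trained DAE pulls a noisy state toward
# the nearest stored pattern, i.e. x_{t+1} = DAE(x_t).
x = torch.rand(1, 64)        # noisy initial state x_0
with torch.no_grad():
    for t in range(20):
        x = dae(x)           # converges to a fixed point (an attractor)
```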
However, the attractor dynamics results in stationary population activity of a single set of active neurons (Fig 2c, bottom). To represent multiple objects, the network needs to be metastable, and this is where spiking dynamics and delay coupling show up. As shown in Fig 2d (top), if the neurons become spiking, their refractory period will prevent persistent firing, so that the attracted states become transient. The delay coupling renders the dynamics non-Markovian and provides the essential time window for attracted states to switch (Fig 2d, middle), so that the population activity becomes non-equilibrium and oscillatory (Fig 2d, bottom). In other words, the synchronized neuronal groups that represent objects are by nature transient attractive states of the network dynamics. The same mechanism could be exploited to build up neuronal groups in both part and whole levels. The subtle difference is that the whole level has longer timescales, so that its dynamics is slower than that of the part level. Provided that appropriate metastable dynamics is created in both part and whole levels, so that all candidate object representations for the nodes of the parse tree are at hand, it is also important to coordinate the neuronal groups of the two levels to shape the parse tree. The general picture is that whole-level states condition the part-level states, e.g. by a gating effect (Fig 2f, left, green arrow), so that during the 'lifetime' of each slow whole-level neuronal group (transient attractive state), corresponding fast part-level neuronal groups switch with smaller timescales (Fig 2f, left). On the other side, the temporal integration of part-level activity smooths out the finer-grained details and can in turn reinforce the whole-level states (Fig 2f, left, orange arrow). In a nutshell, vision in the Composer is a sampling process on an imagined energy landscape, which is shaped by both the DAE and biological constraints like refractoriness, delay coupling, top-down gating, and bottom-up integration. The overall effect is to enforce the coordinated synchrony states as the local minima of the entire dynamical system, so that they are searched for along the iterations. Once found, hierarchical neuronal synchrony emerges as the population activity (Fig 2e, right).

3 MODEL

Overall, the Composer consists of two levels of columns, interconnected by top-down modulation and bottom-up integration (Fig 2f). Each column contains a visible spiking layer, named the spike coding space (SCS, Fig 3a), which is delay coupled by respective DAEs. The SCS of both levels has the same dimension as the image \(d\), corresponding to the topographical mapping in the neocortex (Kaas, 1997). The general architecture is inspired by the circuit organization in the visual cortex (Fig 2g). As shown in Fig 2g, layers 2/3 in the cortical column encode low-level features with sparser firings, while layers 5/6 encode higher-level features with denser firings. The former is modeled as the superficial spike coding space (SCS) and the latter is modeled as the real-valued latent space of the DAE. Besides, the bottom-up integration and top-down modulation 'inside each column' are modeled as the encoders and decoders of the DAEs (Fig 2g). More specifically, bottom-up integration is sensitive to spike timings, acting as so-called coincidence detectors (König et al., 1996), which is modeled as an integrative function \(I\) (Fig 3f) before feeding SCS activities into the DAEs.
Besides, the top-down feedback modulates the activity of pyramidal cells in layer 2/3 by acting on distal synapses (away from the soma) (Sherman & Guillery, 1998). These dendritic computations are modeled as simplified pyramidal cells in the SCSs (Fig 3a,b). In the following section, we dive into more details of the Composer step by step.

3.1 PART-LEVEL COLUMN

Pyramidal cells in the visible SCS of the part column receive inputs from three sources (Fig 3a): the input image \(x \in \{0, 1\}^d\), the inner-level feedback \(\gamma_1 \in \mathbb{R}^d\) and the inter-level feedback \(\Gamma \in \mathbb{R}^d\):
\[ \rho_1(t) = x \cdot \gamma_1 \cdot \Gamma \quad (1) \]
where '\(\cdot\)' is pixel-wise and \(\rho_1\) is the firing rate, which determines the firing activity \(s_1 \in \{0, 1\}^d\):
\[ P(s_1 = 1) = \rho_1(t) \cdot g_1(t - \hat{t}), \quad t \in [0, T] \quad (2) \]
where \(g_1(t - \hat{t})\) is the relative refractory function of neurons in the part level (Fig 3e) and \(\hat{t}\) is the timing of the latest spike firing event of each neuron. As shown in Fig 3e, after firing a spike, the neuron first goes into an absolute refractory period and then a relative refractory period (together of timescale \(\tau_{r1}\)), where the firing probability is inhibited by a factor \(g < 1\). The inner-level feedback \(\gamma_1\) in Eq. (1) is the denoised output of the DAE \(G_1 \circ F_1\), yet with delay \(\tau_d\):
\[ \gamma_1 = \text{DAE}_1((I_1 * s_1)(t - \tau_d)) \quad (3) \]
where \(*\) is the convolution operator and \(I_1\) is the integrative function for \(s_1(t)\), of timescale \(\tau_1\) (Fig 3f). In a word, the spiking activity \(s_1(t)\) in the visible SCS is integrated within a short time window \(\tau_1\) before being fed into the DAE, and the feedback from the DAE to the SCS is delayed by \(\tau_d\).

3.2 WHOLE-LEVEL COLUMN

Since the whole-level column is the top level in the current two-level Composer, it does not receive top-down modulation from even higher levels. Besides, the image has a partial influence on the SCS in the whole-level column through skip connections (Fig 3c), which is also common in the cortical circuit:
$$\rho_2 = (\lambda \cdot x + (1 - \lambda) \cdot D) \cdot \gamma_2 \quad (4)$$
where $\lambda < 1$ is the factor of partial influence from the skip connection. $D$ is the integrated driving input from the part-level column. $\rho_2$ determines the spike firing probability by:
$$P(s_2 = 1) = \rho_2(t) \cdot g_2(t - \hat{t}) \quad (5)$$
Lastly, the delayed feedback from $DAE_2$ is also similar to Eq. (3):
$$\gamma_2 = DAE_2((I_2 * s_2)(t - \tau_d)) \quad (6)$$

3.3 LINKING THE LEVELS

Up to now, we have introduced the operations within each column of the Composer except for two variables, $\Gamma$ and $D$, which are the interactions between levels:
$$\Gamma(t) = (I_\Gamma * s_2)(t - \tau_{d'}) \quad \text{and} \quad D(t) = (I_D * s_1)(t) \quad (7)$$
where $\tau_{d'}$ is the delay timescale from the whole level to the part level, and $\tau_\Gamma, \tau_D$ in $I_\Gamma, I_D$ are the timescales of the integration functions.
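Putting Eqs. (1)-(3) together, one simulated step of the part-level column can be sketched as follows; the discrete-time approximation, the timescale values, the stand-in DAE, and the fixed top-down gate \(\Gamma\) are illustrative assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                  # number of pixels / SCS neurons (illustrative)
tau_1, tau_d = 4, 6     # integration and delay timescales, in steps (assumed)

def g_refractory(dt, tau_abs=2, tau_rel=6, g_fac=0.3):
    """Relative refractory function g: 0 during the absolute refractory
    period, damped by a factor g_fac < 1 afterwards, then back to 1."""
    return np.where(dt < tau_abs, 0.0, np.where(dt < tau_rel, g_fac, 1.0))

def dae_1(v):
    """Stand-in for the trained part-level DAE (here a trivial clamp)."""
    return np.clip(v, 0.0, 1.0)

x = rng.integers(0, 2, d).astype(float)   # binary input image, Eq. (1)
Gamma = np.ones(d)                        # inter-level feedback, Eq. (7), fixed here
spikes = rng.integers(0, 2, (tau_d + tau_1, d)).astype(float)  # spikes[k] = s_1(t - k)
last_spike = np.full(d, -np.inf)

for t in range(100):
    # Eq. (3): integrate spikes over tau_1 around time t - tau_d, then denoise.
    gamma_1 = dae_1(spikes[tau_d:].mean(axis=0))
    # Eq. (1): pixel-wise product of drive and inner-/inter-level feedback.
    rho_1 = x * gamma_1 * Gamma
    # Eq. (2): Bernoulli spiking gated by the refractory function.
    p = np.clip(rho_1 * g_refractory(t - last_spike), 0.0, 1.0)
    s_1 = (rng.random(d) < p).astype(float)
    last_spike[s_1 > 0] = t
    spikes = np.roll(spikes, 1, axis=0)
    spikes[0] = s_1
```

The whole-level column follows the same pattern with slower timescales, with \(\Gamma\) itself produced from the whole-level spikes via Eq. (7).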
It is notable that: (1) While only two levels are considered in this paper for simplicity, the Composer can be naturally extended to account for more levels (e.g., up to five levels are enough for human vision (Hinton, 2021)). (2) While the inter-level projection could account for a wide range of computational goals like coordinate transformation (Hinton, 2021), in this paper we only focus on a minimal realization as pixel-wise gating (Eq. (1)) and driving (Eq. (4)) between SCSs, since we aim to demonstrate how to group information through temporal coherence to represent hierarchical structures in neural networks. Further computational goals like coordinate transformation can be realized in the future by parameterizing the inter-level pathway also as neural networks (Hinton, 2021).

4 EVALUATION

As far as we know, most related works on the part-whole hierarchy are evaluated on images containing only a single object without a clear part-whole relationship (e.g., MNIST) (Hinton et al., 2018). Therefore, it is difficult, if not impossible, to distinguish the representation from general feature extraction (Garau et al., 2022) or object-centric attention (Sun et al., 2021), which are much easier problems. The lack of explicit part-whole datasets and quantitative metrics to measure the representation hinders the development of models capable of visual parsing. This challenge motivates us to invent datasets and metrics to explicitly evaluate the Composer.

**Figure 3:** (a)(b) Pyramidal neuron models in the visible spike coding space of the part level and whole level. ⊗ stands for multiplication and ⊕ stands for addition on the dendrites. (c) Detailed information flow. Note that delayed coupling exists both within each column and between different columns. Levels are indicated by color. (d) The legend for (c). (e) Relative refractory function $g$. (f) Integration function $I_i(t)$ of timescale $\tau_i$.

**Figure 4:** Examples in datasets (a) Ts (b) SHOPs (c) Squares (d) Double-Digit MNIST. Top, input. Middle / Bottom, ground truth of wholes/parts. Similar color is used for parts of the same whole.

**Figure 5:** Different Scores measure different aspects of the part-whole hierarchy. (a) Ideal parsing (spike raster plot of two levels, similar to Fig 1d) given the input in (b) (GT stands for ground truth in (b)). Only a single period of the oscillatory pattern is drawn in (a) for clarity. From (c) to (f): We perturb the ideal representation in (a) on different aspects (title on the top) and at different levels (x-axis of bottom figures) to further show what each score measures and its sensitivity. Top, perturbed spiking pattern; Bottom, scores as functions of the perturbation level (orange: whole score; blue: part score; green: coordination score). Dashed vertical line indicates the perturbation level where the spiking pattern (top) is drawn.

4.1 DATASET

We invent four synthetic part-whole datasets of different complexities (Fig 4), each containing 60000 samples. Each image consists of multiple whole objects, each of which is further composed of well-defined parts. Whole objects are randomly located in the image. The DAEs in the part / whole level columns are trained to denoise single part / whole objects (Appendix A.7). The Ts dataset (Fig 4a) consists of three letter Ts and three reversed letter Ls as whole-level objects. Each T or L is composed of a horizontal bar segment and a vertical bar segment as parts.
The Ts dataset has a relatively large number of wholes, but each whole has a small number of parts. Similar stimuli are used as target templates in perceptual tasks like visual search in the neuroscience literature (Wolfe, 2021). The Squares dataset (Fig 4c) consists of three randomly-located squares as wholes, each of which consists of four corners. The objects in this dataset have relatively more parts. Besides, it could demonstrate the role of spatial connectedness / closure in forming the parse tree. Similar stimuli are used to study illusory contours (Lee & Nguyen, 2001) in Gestalt perceptual tasks in the psychology literature. SHOPs (Fig 4b), short for Shoes (Fig 4b-i), House (Fig 4b-ii), Opera (Fig 4b-iii), consists of three types of whole objects that are further composed of more elementary rectangles and triangles. Each image contains three randomly selected and located objects. This dataset accounts for the complexity that parts could heavily overlap with each other to construct a whole object. Overlapped regions are not assigned to either object at the part level in the ground truth (Fig 4b, bottom). Double-Digit MNIST (Fig 4d) mimics more realistic scenes when dealing with double-digit numbers. Each image contains two randomly selected and located double digits, and each double digit is composed of two randomly selected, closely located MNIST digits. This dataset contains objects of higher complexity and diversity.

4.2 SCORES

The neural representation of a parse tree can be decomposed into three characteristics: (1) the grouping of part-level objects (child nodes, blue circles in Fig 5a); (2) the grouping of whole-level objects (parent node, orange circle in Fig 5a); (3) the coordination among parts and wholes (edges, green box in Fig 5a). Since all three aspects are in nature coherence measures of clusters, we exploit the Silhouette Score (Rousseeuw, 1987) to develop the metrics: (1) Part Score, (2) Whole Score and (3) Coordination Score, measuring the three aspects based on the ground-truth segmentation. See Appendix A.4 for more details.

Figure 6: Emergence of the part-whole hierarchy with coordinated neuronal synchrony. Exemplified by one SHOPs sample. (a) Input image and ground truth; (b) Evolution of the Scores. (c) The spike raster plot of three selected phases in (b): phase I (initial, green box), phase II (middle, yellow box), phase III (final, red box). $s_2(t), s_1(t)$ stand for spiking representations in SCSs of whole/part levels. (d) Zoomed-in spiking pattern during the period marked by the black box in (c), to visualize what each synchronized group represents (e.g. yellow/green boxes in (c),(d) and (a)). (e) Evolution of the top-down attention maps and the local field potential (LFP) during the three phases in (b).

To understand how the scores work, we perform perturbation studies. As shown in Fig 5c, all scores decrease smoothly from 1 to 0 when the ideally structured pattern gradually gets globally perturbed, finally into totally random firing (e.g. Fig 1c). If only the part level is perturbed, the Whole Score remains constant while both the Part Score and Coordination Score decrease from 1 to 0 (Fig 5d). Results are reversed when only the whole level is perturbed (Fig 5e). Lastly, if we only perturb the relative order of parts and wholes, each of which is perfectly grouped as in (a), only the Coordination Score decreases saliently while the Part / Whole Scores remain mostly unchanged (Fig 5f).
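Since all three metrics are Silhouette-based, a minimal sketch of such a coherence computation is given below (using scikit-learn); the one-dimensional spike-time feature and the toy data are illustrative assumptions, with the exact metric definitions in Appendix A.4.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def coherence_score(spike_times, object_labels):
    """Silhouette-based grouping score: spikes of the same ground-truth
    object should be temporally close (synchronized) and far from spikes
    of other objects. Ranges from -1 (incoherent) to 1 (coherent)."""
    X = np.asarray(spike_times, dtype=float).reshape(-1, 1)
    return silhouette_score(X, object_labels)

# Two well-separated synchronized groups -> score near 1.
times  = [0.0, 0.1, 0.1, 5.0, 5.1, 5.1]
labels = [0, 0, 0, 1, 1, 1]
print(coherence_score(times, labels))
```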
Since the perturbed order of well-grouped spikes results in a systematic wrong assignment of clusters (Fig 5f, top), the score decreases to even lower than 0 (Fig 5f, bottom). Taken together, the Part Score and Whole Score evaluate the build-up of tree nodes, which is the prerequisite to represent parse trees, and the Coordination Score further evaluates the structure of the tree. Following the Silhouette Score, the best score (coherence) is 1 and the worst score (incoherence) is -1. A score near 0 indicates randomness, as in Fig 1c.

5 EXPERIMENTS

5.1 QUALITATIVE RESULTS AND VISUALIZATION

**Emergence of the parse tree in SCS.** We visualize the simulation on a randomly selected sample from the SHOPs dataset in Fig 6. As indicated by the convergence of the Part Score, Whole Score and Coordination Score (Fig 6b), the Composer gradually achieves a state of neuronal coherence that represents the parts and wholes as synchronized neuronal groups, which is further visualized in Fig 6c (right) and Fig 6d. More specifically, three two-level binary trees (corresponding to the three SHOPs objects in Fig 6a) periodically emerge in the final phase III (Fig 6c, right), one of which is marked out by one yellow (for the whole object) and two green (for the part objects) boxes. The spikes of parts/wholes in Fig 6c are reordered (on the y-axis) and colored corresponding to the ground truth of parts/wholes (Fig 6a) for more vivid visualization, so that the same neuronal groups are arranged closely and the color of spikes is consistent with the ground truth. For more visualization results, see Appendix A.10.7.

Figure 7: Similar to Fig 6c (right), but for other datasets: (a) Ts (b) Squares (c) Double-Digit MNIST. Left: input image, part ground truth and whole ground truth. Right: spike raster plot in final phase III, top for part level and bottom for whole level. Neuronal groups are circled for clarification.

Figure 8: Convergence of Scores. (a) SHOPs (b) Squares (c) Ts (d) Double-Digit MNIST.

**Emergence of the DAE attention map** is observed in Fig 6e: starting from randomness (left), the cross-level feedback $\Gamma$ and the inner-level feedback from the DAEs $\gamma_1/\gamma_2$ gradually converge to structured patterns, similar to the spiking patterns (Fig 6c, right), yet of longer timescales. Therefore, the top-down attention from the DAEs and the bottom-up integrations of spikes work together as a whole system in the Composer. Besides, rhythmic population activity ($LFP_1$) emerges at the part level (Fig 6e). Visualization results on the other datasets are shown in Fig 7. Interestingly, the emergent synchrony structure differs across the datasets. While in the Ts dataset (consisting of 6 Ts), 6 binary trees emerge periodically, 3 quadtrees emerge in the Squares dataset (consisting of 3 squares). In Double-Digit MNIST (consisting of 2 double digits), 2 binary trees emerge. Taken together, the Composer successfully and flexibly represents the part-whole hierarchy of scenes of different complexities.

5.2 QUANTITATIVE ANALYSIS

**Convergence of the scores** during iterations is evaluated on 100 randomly selected samples in each dataset, as shown in Fig 8. Interestingly, while the scores consistently converge on all datasets with low error bars, the convergence process slightly differs across cases. For Squares (Fig 8b), whole objects group much faster than part objects, similar to human vision (Lee & Nguyen, 2001). For Ts (Fig 8c), the large object number imposes combinatorial burdens on the coordination, so that the Coordination Score lags behind.
For Double-Digit MNIST (Fig 8d), the Composer has more difficulties in distinguishing the part-level MNIST digits, partially due to the diversity of the dataset (see Appendix A.10.5).

**Benchmarking.** We compare the Composer with a recently implemented SOTA, the Agglomerator (Garau et al., 2022), which also attempts to exploit the idea of neuronal coherence (similarity among vectors) to group neuronal representations (at different levels) into tree nodes (islands of vectors). 1000 random samples and 5 random seeds are used to evaluate the Composer and the Agglomerator on the four datasets. The three coherence-based metrics are naturally generalized to evaluate the Agglomerator. As shown in Fig 9, the Composer outperforms the Agglomerator on all datasets. In fact, the Agglomerator even fails to form the node representations as prerequisites. See Appendix A.5 for more details on benchmarking.

**Loss vs Scores.** Since denoised feedback from the DAEs is an essential mechanism in the Composer, it is instructive to examine the relationship between the denoising performance of the DAE and the parsing scores. For this purpose, we trained 100 DAEs with the same architecture on the SHOPs dataset with random learning rates, and then performed parsing using each of them. Fig 10a shows the relationship between the denoising loss and the parsing scores. It is observed that a lower loss positively correlates with higher scores on all metrics, indicating that there is a direct interplay between denoising and parsing.

**Ablation study of timescale parameters.** Results are shown in Fig 10b, where parameters are set to zero in isolation. For example, \( g = 0 \) stands for the removal of the relative refractory period. Compared with the original model, all ablated models have lower scores. Specifically, the removal of the delay \( \tau_d \), the whole-level refractory period \( \tau_{r2} \), and the cross-level feedback delay \( \tau_{d'} \) has a destructive effect, indicated by a reversed Coordination Score. Besides, changes at the part level affect the Part Score more than the Whole Score (e.g., \( \tau_1, \tau_{r1} \)). The removal of the relative refractory period \( g \) slightly degrades the coordination, and the removal of the cross-level integration \( \tau_D \) globally degrades all scores (Appendix A.10.4).

6 RELATED WORK

**Object-centric representation** is a line of research that explores how to bind distributed information into single-level objects as reusable entities in neural networks (Greff et al., 2015; 2016; 2017; 2019; 2020; Locatello et al., 2020), some of which also exploit the idea of neuronal synchrony (Zheng et al., 2022; Löwe et al., 2022). While single-level object representations are essential prerequisites to form building blocks (nodes), they cannot account for hierarchical structures like the part-whole hierarchy.

**Graph neural networks (GNNs)** can explicitly represent part-whole relationships as a specific type of graph of patches. However, their architecture either is not fixed but changes with the number of grouped objects (Bear et al., 2020), or needs an object detector to transform the image into node representations (Xu et al., 2017). In contrast, we study how to implicitly represent the part-whole hierarchy with neuronal activities in neural networks of fixed architecture that directly process the image, which is more consistent with human vision. Besides, the over-smoothing phenomenon limits the depth of part-whole levels that can be represented in GNNs (Han et al., 2022).
**Hierarchical latent variable models** can explicitly capture the tree structure within its latent space [Deng et al., 2021]. However, current models are feedforward networks without iterative message passing as the Composer, which limits their potential to ultimately conquer the problem. **Other visual parsers** include capsule-like [Hinton et al., 2018; Garau et al., 2022], transformer-based [Sun et al., 2021], and recursive neural programmer [Fisher & Rao, 2022], etc. The common weaknesses of these works are the evaluations: single-object datasets without clear part-whole relationships (e.g. MNIST) are used and the evaluation lacks metrics to measure the parsing. Therefore, it is unlikely to distinguish the proposed part-whole representation from feature extraction or single-level object-centric representation. In contrast, the Composer’s representation is interpreted explicitly. 7 CONCLUSION We present Composer, together with the framework of representation, physical intuition, biologically inspired implementation and explicit evaluation. Results show that Composer uses emergent neuronal synchrony to parse a range of scenes of distinct composite structures, complexities and diversities. REFERENCES Moshe Abeles. Role of the cortical neuron: integrator or coincidence detector? *Israel journal of medical sciences*, 18:1:83–92, 1982. André Moraes Bastos, W. Martin Usrey, Rick A Adams, George R. Mangun, Pascal Fries, and Karl J. Friston. Canonical microcircuits for predictive coding. *Neuron*, 76:695–711, 2012. Daniel Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, Seth Alter, Aran Nayebi, Jeremy Schwartz, Li Fei-Fei, Jiajun Wu, Joshua B. Tenenbaum, and Daniel L. K. Yamins. Learning physical graph representations from visual scenes. *ArXiv*, abs/2006.12373, 2020. Lars Buesing, Johannes Bill, Bernhard Nessler, and Wolfgang Maass. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. *PLoS computational biology*, 7(11):e1002211, 2011. György Buzsáki. Neural syntax: Cell assemblies, synapsembles, and readers. *Neuron*, 68:362–385, 2010. György Buzsáki. The brain from inside out. 2019. György Buzsáki and Andreas Draguhn. Neuronal oscillations in cortical networks. *Science*, 304:1926 – 1929, 2004. Giancarlo La Camera, Alfredo Fontanini, and Luca Mazzucato. Cortical computations via metastable activity. *Current Opinion in Neurobiology*, 58:37–45, 2019. URL https://api.semanticscholar.org/CorpusID:195069199 Peter Dayan and L. F. Abbott. Theoretical neuroscience: Computational and mathematical modeling of neural systems. 2001. Fei Deng, Zhu Zhi, Donghun Lee, and Sungjin Ahn. Generative scene graph networks. In *International Conference on Learning Representations*, 2021. Rodney J. Douglas and Kevan A. C. Martin. Neuronal circuits of the neocortex. *Annual review of neuroscience*, 27:419–51, 2004. Simon B. Eickhoff, R. Todd Constable, and B. T. Thomas Yeo. Topographic organization of the cerebral cortex and brain cartography. *NeuroImage*, 170:332–347, 2017. Andreas Karl Engel and Wolf Singer. Temporal binding and the neural correlates of sensory awareness. *Trends in Cognitive Sciences*, 5:16–25, 2001. Andreas Karl Engel, Pascal Fries, and Wolf Singer. Dynamic predictions: Oscillations and synchrony in top–down processing. *Nature Reviews Neuroscience*, 2:704–716, 2001. Ares Fisher and Rajesh P. N. Rao. Recursive neural programs: Variational learning of image grammars and part-whole hierarchies. *ArXiv*, abs/2206.08462, 2022. Karl J. Friston. 
The free-energy principle: a unified brain theory? *Nature Reviews Neuroscience*, 11:127–138, 2010. Nicola Garau, Niccoló Bisagno, Zeno Sambugaro, and Nicola Conci. Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13679–13688, 2022. Wulfram Gerstner, Werner M. Kistler, Richard Naud, and Liam Paninski. Neuronal dynamics: From single neurons to networks and models of cognition. 2014.
GDdxmymrwL
There is some notable performance gap of *Corex* variants. Such as *Corex-Review-Code* v.s. other variants for GSM-Hard in Table 2, and for Repeat Copy in Table 4. Any intuition or explanations on this?
Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration

Anonymous authors
Paper under double-blind review

Abstract

Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in the realm of natural language processing (NLP) with world knowledge. Benefiting from ultra-large-scale training corpora, a single LLM can manage typical NLP tasks competently. However, its performance in executing complex reasoning tasks is still confined by the limitations of its internal representation. To push this boundary further, we introduce Corex in this paper, a suite of novel general-purpose strategies that transform LLMs into autonomous agents, pioneering multi-model collaborations for complex task-solving. Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes, which collectively work towards enhancing the factuality, faithfulness, and reliability of the reasoning process. These paradigms foster task-agnostic approaches that enable LLMs to "think outside the box," thereby overcoming hallucinations and providing better solutions. Through extensive experiments across four different types of reasoning tasks, we demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods. Further results and in-depth analysis demonstrate the cost-effectiveness of our method, facilitating collaboration among different LLMs and promoting annotation efficiency. Our code and data are available at https://anonymous.4open.science/r/Corex.

"A problem shared is a problem halved." —English Proverb

1 Introduction

Large Language Models (LLMs) have succeeded in advancing the state-of-the-art for a series of Natural Language Processing (NLP) tasks (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Touvron et al., 2023; Zhao et al., 2023a, inter alia). Recent research (Wei et al., 2022a) indicates that scaling up models (Kaplan et al., 2020) can yield improvements in both performance and sample efficiency across a broad spectrum of downstream tasks. Notwithstanding their remarkable proficiency in language understanding and instruction following (Ouyang et al., 2022), the reasoning abilities of LLMs, often seen as a hallmark for assessing their potential, still present challenges (Suzgun et al., 2023; Huang & Chang, 2023). Concurrently, there is a prevailing view that merely increasing the size might not adequately address their inherent limitations in solving reasoning tasks (Rae et al., 2022). In response to this challenge, Wei et al. (2022b) put forth chain-of-thought (CoT) prompting, in which an LLM generates a series of intermediate steps toward a final answer, in contrast to the use of "answer-only" prompts. Subsequently, various approaches have been put forward, such as self-consistency decoding (Wang et al., 2023d), which utilizes a majority voting mechanism to determine the final answer, and program-aided language models (PAL; Gao et al., 2022; Chen et al., 2022a) that leverage code generation to reduce errors in computations. Besides, curated prompts that necessitate task-specific designs (Zheng et al., 2023a) have also been utilized to elicit more accurate predictions.

Nevertheless, these approaches are confined within a static black box (Yao et al., 2023b), wherein the LLM relies exclusively on its internal representation for generating responses and is prone to generating unreliable answers (Ji et al., 2023; Yin et al., 2023). These shortcomings underscore that relying solely on crafting decoding strategies and specialized prompts may not serve as a silver bullet for addressing
Nevertheless, these approaches are confined within a static black box (Yao et al., 2023b), wherein the LLM relies exclusively on its internal representation for generating responses and is prone to generating unreliable answers (Ji et al., 2023; Yin et al., 2023). These shortcomings underscore that relying solely on crafting decoding strategies and specialized prompts may not serve as a silver bullet for addressing... complex reasoning tasks (Qiao et al., 2023). Alternatively, enabling models to “think outside the box” emerges as a promising yet underexplored pathway. Within the realm of well-established sociological concepts, multiple cognitive processes interact and cooperate will produce a combined effect that is greater than the sum of their individual contributions (Luppi et al., 2022). This principle is echoed within artificial intelligence (Li et al., 2023a). Although the study of intelligent agents has been explored for decades (Minsky, 1988; 2007), the advent of LLMs has rejuvenated interest and introduced novel challenges in this domain. An emerging perspective is that encouraging collaboration and communication between models could potentially pave the way for a new stage for enhancing complex reasoning capabilities. In this study, we propose Corex, a suite of human-inspired strategies that leveraging multi-model collaboration to elicit reasoning for complex task-solving. To facilitate synergies between models, we first assign distinct personas to different models, followed by the design of various collaborative paradigms. This collective intelligence-based method aims to conquer prevalent obstacles in the current landscape of reasoning, as exemplified in Figure 1. It also endeavors to alleviate common issues observed in majority voting-based methods like self-consistency, where accurate responses might be overwhelmed by incorrect ones and exorbitant costs. To be specific, Corex configures LLMs as a group of autonomous agents, adopting the paradigms shown in Figure 2 for multi-model collaboration: (1) Debate, utilizing group-based debates among models to effectively enhance the factuality (Du et al., 2023) of generated content and minimize fallacies and hallucinations; (2) Review, enabling models to scrutinize reasoning chains or generated codes from their counterparts to ensure the correctness of generated contents, coupled with potential refinements; (3) Retrieve, aiming to enable the model to identify the most faithful option from a pool of candidate chains, facilitates a higher degree of alignment with the final response. The comparison between Corex and recent works is listed in Table 1, where our approach is task-agnostic, requiring no prior knowledge or iterative processes during the reasoning phase, which makes it broadly applicable to a wide array of scenarios. We conduct extensive experiments across four types of tasks: mathematical reasoning, symbolic reasoning, commonsense reasoning, and semi-structured reasoning. The results illustrate that our method achieves substantial performance gains over previous strong baselines. Moreover, each mode distinctly excels in different categories of tasks, showcasing its specific strengths. Further analysis reveals that, compared to existing schemes based on majority voting and curated prompts, Corex significantly reduces the reasoning overhead of the models, achieving cost-effectiveness. Table 1: A comparison of Corex to other recent prompting strategies. 
| Feature | Corex (our work) | MAD (Liang et al., 2023) | PHP (Zheng et al., 2023a) | CoK (Wang et al., 2023b) | ToT (Yao et al., 2023a) |
|------------------|------------------|--------------------------|---------------------------|-------------------------|------------------------|
| Task Agnostic? | ✓ | ✗ | ✗ | ✓ | ✓ |
| Multiple Chains? | ✓ | ✗ | ✗ | ✓ | ✓ |
| Multiple LLMs? | ✓ | ✓ | ✗ | ✗ | ✗ |
| Task Delegation? | ✓ | ✗ | ✗ | ✗ | ✗ |
| Reference Free? | ✓ | ✓ | ✓ | ✗ | ✓ |

2 RELATED WORKS

Chain-of-Thought Prompting Elicits LLM Reasoning. Chain-of-Thought (CoT; Wei et al., 2022b) prompting, as one of the celebrated capabilities of recent LLMs, is a pivotal breakthrough for performing complex multi-step reasoning when provided with limited examples. Further variants show that CoT can be improved by adding certain “magic phrases” (Kojima et al., 2022), automated demonstration construction (Zhang et al., 2023a), reasoning in different modalities (Zhang et al., 2023b; Yang et al., 2023; Yao et al., 2023c), and applying modular approaches (Khot et al., 2023). For robustness, researchers transform problems into interleaved reasoning chains (Zhou et al., 2023; Lyu et al., 2023) or adopt ensembling (Wang et al., 2022). Notably, self-consistency (Wang et al., 2023d), which selects answers from multiple reasoning paths by majority voting, has greatly elevated the performance of LLMs in complex reasoning. This approach has been further optimized by utilizing prompts with higher complexity (Fu et al., 2023c). Lately, Yao et al. (2023a) employ heuristic-guided search on “trees” constructed from thoughts to assist LLMs in navigating the problem space.

External Knowledge & Tool Utilization for LLM Reasoning. While LLMs exhibit significant capabilities, they are limited by a lack of real-world grounded experience (Petroni et al., 2020) and an inability to grasp complex arithmetic reasoning, given that their training is based exclusively on written text. Thus, researchers have started utilizing external knowledge to assist models in accomplishing reasoning tasks (Nakano et al., 2022; Schick et al., 2023). For enhanced factuality and faithfulness, He et al. (2022) and Wang et al. (2023b) make use of external knowledge bases. Lately, Gao et al. (2023) ensure the factual correctness and verifiability of generated text by providing cited passages. Another line of work delegates reasoning tasks to external tools (Qin et al., 2023), which are commonly used for addressing numerical problems. A representative example is the program-aided language model (PAL; Gao et al., 2022).¹ Such an approach utilizes LLMs to interpret NL problems, generating programs as intermediate reasoning steps (Chen et al., 2022a) that are offloaded to a Python interpreter for execution to obtain final solutions (Ni et al., 2023). This method transforms reasoning into an NL2Code (Zan et al., 2023) task and has been demonstrated to excel when dealing with larger, non-integer numbers and enabling error corrections (Olausson et al., 2023). Beyond synthesizing programs, Liu et al. (2023a) integrate a computational physics engine into the language modeling process for simulation. Moreover, Chameleon (Lu et al., 2023a) augments LLMs by incorporating both tools and knowledge resources like web engines and image captioners.

Multi-Model Synergy for Task Solving. Utilizing multiple LLMs collectively to solve problems is still in its preliminary stages, with a wealth of opportunities awaiting exploration.
The cornerstone of collaboration is constructing a human-like reasoning architecture (Zhu et al., 2023) for LLMs under different environments (Liu et al., 2023b). Fu et al. (2023b) investigate whether multiple LLMs can autonomously enhance their performance through mutual interactions. Du et al. (2023) and Liang et al. (2023) explore enhancing the factuality of specific tasks, e.g., translation and arithmetic reasoning, by facilitating “debates” among multiple models. LLM collaboration has also been applied to software development (Qian et al., 2023) and text evaluation (Chan et al., 2023) by assigning identities to models to simulate, e.g., a development process. Furthermore, from the perspective of social intelligence, inducing cognitive synergy by having models take on different characters (Wang et al., 2023e) during task execution has been shown to hold significant potential (Sclar et al., 2023). Recently, the nascent exploration of artificial societies (Park et al., 2023) also seeks to harness collective intelligence to emulate the efficiency of human social structures (Li et al., 2023a; Webb et al., 2023).

---

¹The idea of integrating LLMs with an external PL interface was proposed by Gao et al. (2022) and Chen et al. (2022a) within the same timeframe. We refer to this approach as “PAL” in this paper.

3 Corex

We introduce the three main components of Corex in this section, namely the Debate, Review, and Retrieve modes. Let us assume a set of LLM-based agents \( \{A_1, A_2, \ldots, A_n\} \) participating in multi-model collaboration. Each agent \( A_i \) generates a corresponding reasoning chain \( c_i \) and prediction \( p_i \) when facing a query \( q \).

3.1 Debate

In Debate mode, our agents are divided randomly into two groups, the Red Team and the Blue Team, with one agent reserved as a judge, denoted \( A_j \). The debate process within one team involves several rounds, limited to a maximum of \( T \) rounds of communication. In each round \( t \) (\( t = 1, 2, \ldots, T \)), the agents engage in iterative discussions² to refine their reasoning chains and predictions. This dynamic interaction \( g \) allows for the continual modification of viewpoints: agent \( A_i \) updates its chain as \( c_i^t = g(q, c_1^{t-1}, \ldots, c_k^{t-1}) \), where \( k \) is the team size, together with its prediction \( p_i^t \). Each team then presents its refined prediction \( p_{\text{red}}^t \) or \( p_{\text{blue}}^t \) at the end of each round. If both teams consistently agree throughout the debate process, i.e., \( p_{\text{red}}^t = p_{\text{blue}}^t \), the debate concludes smoothly. However, in the instance of a discrepancy between the teams’ predictions, every output from each round is presented to the judge \( A_j \). The judge employs a decision-making process \( h \), evaluating the quality and reliability of the reasoning chains and predictions from each round of the debate. The final conclusion is determined by \( h(\{c_{\text{red}}^t, p_{\text{red}}^t, c_{\text{blue}}^t, p_{\text{blue}}^t\}_{t=1}^{T}) \) across all rounds, ensuring a comprehensive assessment and a more informed final decision. Diverging from previous works (Liang et al., 2023; Du et al., 2023; Xiong et al., 2023), the debate mode of Corex adopts the concept of group discussions to enhance the factuality of reasoning chains.
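The following is a minimal sketch of the debate loop described above, assuming each agent is a plain text-in/text-out callable; the prompt wording, the naive `Answer:` parsing, and the within-team majority vote are illustrative assumptions rather than the authors' released implementation.

```python
from collections import Counter
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # one agent: prompt in, text out

def team_round(team: List[LLM], query: str, prev: List[str]) -> List[Tuple[str, str]]:
    """One round: each agent refines its chain given teammates' previous-round chains (g)."""
    context = "\n---\n".join(prev) if prev else "(first round)"
    results = []
    for agent in team:
        text = agent(f"Question: {query}\nTeam reasoning from the last round:\n"
                     f"{context}\nRefine the reasoning and end with 'Answer: ...'.")
        chain, _, pred = text.rpartition("Answer:")  # naive parsing convention
        results.append((chain.strip(), pred.strip()))
    return results

def debate(red: List[LLM], blue: List[LLM], judge: LLM, query: str, T: int = 5) -> str:
    red_chains: List[str] = []
    blue_chains: List[str] = []
    transcript = []  # a fuller version would also record the chains for the judge
    for t in range(T):
        red_out = team_round(red, query, red_chains)
        blue_out = team_round(blue, query, blue_chains)
        red_chains = [c for c, _ in red_out]
        blue_chains = [c for c, _ in blue_out]
        # one simple choice for a team-level prediction: majority vote within the team
        p_red = Counter(p for _, p in red_out).most_common(1)[0][0]
        p_blue = Counter(p for _, p in blue_out).most_common(1)[0][0]
        transcript.append(f"Round {t + 1}: Red -> {p_red} | Blue -> {p_blue}")
        if p_red == p_blue:          # agreement: the debate concludes smoothly
            return p_red
    # disagreement after T rounds: the judge h(.) decides over all rounds
    return judge(f"Question: {query}\nDebate record:\n" + "\n".join(transcript) +
                 "\nChoose the most reliable final answer.")
```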
We opt not to facilitate models in jointly debating their reasoning processes to converge on a single common answer, for several reasons: (1) context length limitations inhibit the ability to retain the entire debate process; (2) despite the tendency of debates to converge to a single final answer, these outcomes are not always correct due to incorrect consensus or prevalent biases (Wang et al., 2023); (3) given the performance gaps among various LLMs, there is a risk of strong models “monopolizing” the debate, thereby overshadowing the insights from others. Therefore, we aim to preserve both the factuality and the diversity of thoughts among agents and ensure stability throughout the debate process.

3.2 Review

Within the scope of reasoning, both CoT and PAL are effective methods with distinct strengths. Grounded in natural language, CoT-based methods stand out for their generality and the clarity of their explanations. In contrast, facilitated by programs, PAL guarantees computational accuracy (Zhao et al., 2023b). However, both exhibit drawbacks stemming from their reliance on LLMs’ internal representations. For CoT and its variants, the issues are twofold: (1) cumulative errors, where mistakes tend to amplify and propagate throughout the reasoning chain; and (2) a plateau in text quality that cannot be substantially improved through prompting (Xu et al., 2022; Li et al., 2023b). Alternatively, PAL faces its own challenges: (1) LLMs might misinterpret questions, which inadvertently results in technically correct yet misguided programs; and (2) generated code is not always error-free: LLMs may write buggy code, such as referencing undefined variables or performing division by zero. Inspired by recent efforts on LLM peer-rating (Zheng et al., 2023b) and the collaborative coding practices prevalent in software engineering, we introduce the Review mode to address the aforementioned issues through collaboration.

---

²Due to the context length limit of GPT-3.5-Turbo, only information from the previous round is stored during the debate process.

To be specific, a single agent $A_p$ is randomly selected to act as the primary agent. Initially, $A_p$ takes responsibility for formulating the reasoning chain for $q$ along with the prediction, and for crafting code if required. This initial collection of solutions is represented as $S_p^{(0)} = \{a_p, c_p, m_p\}$, where $a_p$, $c_p$, and $m_p$ signify the answer, reasoning chain, and code, respectively. $S_p^{(0)}$ is then subjected to iterative reviews by the other agents, which function as reviewers in a sequential manner, rigorously scrutinizing both the reasoning chain and the code formulated by $A_p$ or modified by preceding reviewers. It is crucial to highlight that each reviewer receives input from its predecessors, signifying that each subsequent review is grounded in the outcomes and feedback of the preceding ones, fostering a progressively refined solution. The reviewing process is formalized as $S_p^{(i+1)} = R_i(S_p^{(i)}, F_i)$, where $R_i$ denotes the review operation at the $i$-th iteration and $F_i$ represents the feedback received. In essence, the solution set $S_p^{(i+1)}$ results from an enhancement of its preceding version $S_p^{(i)}$, informed by the feedback $F_i$. Following the completion of all review iterations, the outcome is determined by the final iteration of the solution set, $S_p^{(n-1)}$.
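A minimal sketch of the sequential review loop $S_p^{(i+1)} = R_i(S_p^{(i)}, F_i)$ follows; the `Solution` fields mirror $\{a_p, c_p, m_p\}$, while the prompt strings and parsing convention are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

LLM = Callable[[str], str]

@dataclass
class Solution:                       # S = {a, c, m}: answer, chain, optional code
    answer: str
    chain: str
    code: Optional[str] = None

def review_loop(primary: LLM, reviewers: List[LLM], query: str) -> Solution:
    """Sequential Review: S^(i+1) = R_i(S^(i), F_i), starting from the primary draft."""
    draft = primary(f"Q: {query}\nSolve step by step and end with 'Answer: ...'.")
    sol = Solution(answer=draft.rpartition("Answer:")[2].strip(), chain=draft)
    for reviewer in reviewers:        # each reviewer builds on its predecessors' outcome
        feedback = reviewer(
            f"Q: {query}\nCurrent solution:\n{sol.chain}\n"
            "Point out flaws in the reasoning (and code, if any), then give a "
            "corrected solution ending with 'Answer: ...'.")
        sol = Solution(answer=feedback.rpartition("Answer:")[2].strip(),
                       chain=feedback, code=sol.code)  # code refinement omitted for brevity
    return sol                        # final S^(n-1)
```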
Specifically, the final answer $a_p^{(n-1)}$ is chosen as the answer for $q$, and in instances where code is involved, the last revised version $m_p^{(n-1)}$ is executed by a Python interpreter to produce the outcome.

3.3 Retrieve

In the final thread of work, we delve into the Retrieve mode to identify the most faithful answer through collaboration. While previous strategies based on a majority voting mechanism (Wang et al., 2023d; Fu et al., 2023c) can mitigate the low-diversity issue of techniques such as beam search (Li & Jurafsky, 2016), they still present two significant challenges: 1. Correct answers risk being swayed by incorrect ones. 2. Despite facilitating a notable enhancement in performance, majority voting considerably escalates the computational burden and tends to reach a performance “saturation point” as the number of sampled chains increases. We attribute these drawbacks to the limited scope of majority voting techniques, which singularly prioritize the prediction while overlooking the faithfulness of the reasoning chains (Li et al., 2023c). In response, we propose the Retrieve mode, a paradigm specifically engineered to evaluate whether the answer is supported by the content (explanation) generated during reasoning (Jacovi & Goldberg, 2020; Lanham et al., 2023).

Concretely, given a query $q$, we randomly select an agent $A_r$ from the pool of $n$ agents to act as the retriever. The remaining agents $\{A_1, A_2, \ldots, A_{n-1}\}$ independently perform CoT reasoning about $q$. Each of these agents derives its own reasoning chain $c_i$ and corresponding prediction $p_i$. Together, they form a candidate pool, denoted by $\mathcal{P} = \{(c_i, p_i)\}_{i=1}^{n-1}$. The retriever $A_r$ then scrutinizes the candidates in $\mathcal{P}$. For each $(c_i, p_i)$, $A_r$ evaluates the faithfulness between $c_i$ and $p_i$. Based on this assessment, the retriever assigns a confidence score $s_i$ in the range $[0, 1]$, denoted as $s_i = f_r(c_i, p_i)$, where $f_r$ indicates the retriever’s evaluation process. After that, the most faithful response to the question $q$ is determined by the highest confidence:

$$ (c^*, p^*) = \arg\max_{(c_i, p_i) \in \mathcal{P}} s_i $$

Here, \((c^*, p^*)\) denotes the chain-prediction pair that the retriever considers most faithful, which serves as the final answer for the query \(q\). Retrieve mode enables the selection of the most aligned combination of reasoning chain and answer from a diversified candidate pool. Distinct from previous text quality assessment methods that rely on the log probability of sequences (Adiwardana et al., 2020), which is computationally inefficient and often unavailable for commercial LLMs, our approach is entirely predicated on model-to-model interactions (Chen et al., 2023) and is reference-free.
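A minimal sketch of the Retrieve mode follows: the retriever scores each candidate pair for faithfulness and returns the argmax, as in the equation above; the scoring prompt and the numeric-reply parsing are illustrative assumptions.

```python
import re
from typing import Callable, List, Tuple

LLM = Callable[[str], str]

def retrieve(retriever: LLM, candidates: List[Tuple[str, str]], query: str) -> Tuple[str, str]:
    """Score each (chain c_i, prediction p_i) for faithfulness and return the best pair."""
    best, best_score = candidates[0], -1.0
    for chain, pred in candidates:
        reply = retriever(
            f"Question: {query}\nReasoning: {chain}\nPrediction: {pred}\n"
            "On a scale from 0 to 1, how faithfully does the reasoning support "
            "the prediction? Reply with a number only.")
        match = re.search(r"\d*\.?\d+", reply)
        score = float(match.group()) if match else 0.0   # s_i = f_r(c_i, p_i)
        if score > best_score:
            best, best_score = (chain, pred), score
    return best                                          # (c*, p*) = argmax s_i
```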
4 EXPERIMENT

4.1 EXPERIMENTAL SETUP

Tasks and Datasets. We evaluate the effectiveness of Corex across four types of reasoning tasks: (1) Arithmetic reasoning over eight mathematical datasets, including GSM8K (Cobbe et al., 2021), MultiArith (Roy & Roth, 2015), SingleOP/SingleEQ (Koncel-Kedziorski et al., 2016), AddSub (Hosseini et al., 2014), AQuA (Ling et al., 2017), SVAMP (Patel et al., 2021), and GSM-Hard (Gao et al., 2022). (2) Commonsense reasoning covering four datasets, including StrategyQA (Geva et al., 2021), CommonsenseQA (CSQA; Talmor et al., 2019), BoolQ (Clark et al., 2019), and the AI2 Reasoning Challenge (ARC-c) (Clark et al., 2018). (3) Symbolic reasoning incorporating four tasks derived from Big-Bench (BIG-bench authors, 2023; Suzgun et al., 2023), including Date Understanding, Penguins in a Table, Colored Objects, and Repeat Copy. (4) Semi-structured understanding, with a focus on FinQA (Chen et al., 2021b), ConvFinQA (Chen et al., 2022b), and TAT-QA (Zhu et al., 2021). The detailed descriptions and statistics of the tasks are listed in Appendix D.

Baselines. We compare our method with several widely used strong baselines: (1) Chain-of-Thought prompting (CoT; Wei et al., 2022b). (2) Self-Consistency (CoT-SC; Wang et al., 2023d), which employs a majority voting mechanism to select the most consistent answer from several reasoning chains as the final answer. (3) Complexity-based consistency (ComplexCoT; Fu et al., 2023c), which selects the majority answer from the candidates with higher reasoning complexity. (4) Program-aided language models (PAL; Gao et al., 2022; Chen et al., 2022a), which use LLMs to generate programs as intermediate reasoning steps while offloading the computation to a Python interpreter. For simplicity and ease of understanding, we write CoT-SC(x) and ComplexCoT(x) in our experiments and analysis to represent cases utilizing different numbers of reasoning paths, where “x” indicates the number of output chains. For all baseline methods, we adhere to the same few-shot exemplars to ensure fair comparisons. Details can be found in Appendix B.

Implementation Details. We access OpenAI and Anthropic models through their respective APIs. Specifically, we employ GPT-3.5-Turbo-0613 for evaluating both Corex and the baseline methods in the main experiments. Moreover, in further experiments and analysis involving different LLMs for collaboration, we also incorporate GPT-4-0613 and Claude-Instant-1.2. The details of prompts and hyperparameter settings for both the baselines and Corex are in Appendix F.

4.2 MAIN RESULTS

We report the results of Corex over four categories of tasks. For each kind of task, the best results are highlighted in bold and the second-best results are marked with underline. For Review mode, we use Corex-Review\(_{NL}\) and Corex-Review\(_{Code}\) to describe the scenarios that use CoT or PAL, respectively. All modes within Corex are configured to operate with 5 LLM-based agents, ensuring favorable cost-effectiveness. For Corex-Debate, the upper bound of debate rounds is set to 5.

Mathematical Reasoning. Table 2 shows the results across arithmetic tasks of varying difficulty. Our method achieves notable performance improvements on most benchmarks. Broadly, we surpass the performance of CoT-SC(10) with only 5 agents involved. Moreover, given the task-agnostic nature of Corex, it can tackle highly complex computational challenges like GSM-Hard through code synthesis. For problems of relatively lower complexity, the Retrieve mode can identify answers superior to those from majority voting.

Table 2: Comparison of accuracy on seven mathematical reasoning datasets using various Corex modes and strong baselines.

| | GSM8K | SVAMP | MultiArith | SingleOP | SingleEQ | AddSub | GSM-Hard | Avg. |
|----------------|-------|-------|------------|----------|----------|--------|----------|------|
| CoT | 74.5 | 78.9 | 98.5 | 94.1 | 93.3 | 87.8 | 39.0 | 80.9 |
| ComplexCoT | 79.7 | 80.7 | 97.3 | 94.3 | 92.3 | 86.8 | 39.7 | 81.5 |
| CoT-SC(10) | **82.8** | 84.5 | **99.8** | 95.4 | 95.1 | 89.6 | 45.2 | 84.6 |
| PAL | 76.0 | 83.4 | 96.7 | 90.7 | 95.8 | 87.6 | 62.1 | 84.6 |
| Corex-Debate | 76.2 | 82.6 | 98.7 | 94.8 | 93.7 | 89.7 | 45.9 | 83.1 |
| Corex-ReviewNL | 80.3 | 83.2 | 99.5 | 95.0 | 94.3 | 89.4 | 50.8 | 84.6 |
| Corex-ReviewCode | 79.2 | **85.8** | 98.3 | 93.6 | **96.9** | 89.6 | **63.6** | **86.7** |
| Corex-Retrieve | 82.5 | 85.6 | **99.8** | **96.1** | 96.6 | **90.9** | 53.0 | 86.3 |

Commonsense Reasoning. Table 3 showcases the performance of Corex on commonsense and factual reasoning tasks.\(^3\) We can observe that the various modes all contribute to performance enhancements.

Table 3: Comparison of performance on commonsense & factual reasoning between various Corex modes and strong baselines.

| | StrategyQA | CSQA | OpenBookQA | BoolQ | ARC-c | Avg. |
|----------------|------------|------|------------|-------|-------|------|
| CoT | 65.3 | 76.7 | 82.6 | 65.1 | 84.2 | 74.8 |
| ComplexCoT | 63.1 | 77.5 | - | - | - | - |
| CoT-SC(10) | 67.1 | 78.1 | 85.2 | 66.6 | 85.7 | 76.5 |
| Corex-Debate | 68.4 | **78.9** | 83.4 | 66.9 | **86.3** | 76.8 |
| Corex-ReviewNL | 66.9 | 77.4 | 84.8 | 66.9 | 86.0 | 76.4 |
| Corex-Retrieve | **69.3** | 77.7 | **87.6** | **68.0** | 85.5 | **77.6** |

Notably, our approach surpasses ComplexCoT by over 6% on StrategyQA, achieving a significant improvement without resorting to intricate prompt design and example selection.

Symbolic Reasoning. We report the results for symbolic reasoning in Table 4. Empirical evidence substantiates that adopting multi-model collaboration can notably outperform most previous baselines on Big-Bench tasks. It is noteworthy that (1) CoT-SC struggles to ensure consistent outputs on Repeat Copy; conversely, through the integration of PAL-based collaboration, we attain a remarkably high level of accuracy; and (2) compared to majority voting, both the Review and Retrieve modes enable more judicious answer selection in counting tasks.

Table 4: Comparison of accuracy on four symbolic reasoning tasks from Big-Bench (BIG-bench authors, 2023; Suzgun et al., 2023) using various Corex modes and other strong baselines.

| | Date | Penguin | Colored Objects | Repeat Copy | Avg. |
|----------------|------|---------|-----------------|-------------|------|
| CoT | 82.0 | 81.5 | 88.0 | 43.8 | 73.8 |
| CoT-SC(10) | **87.9** | 86.2 | 94.8 | 53.1 | 80.5 |
| PAL | 81.2 | 91.3 | 86.8 | 93.8 | 88.3 |
| Corex-Debate | 83.2 | 85.9 | 91.2 | 62.5 | 80.7 |
| Corex-ReviewNL | 84.0 | 92.0 | 92.4 | 59.4 | 82.0 |
| Corex-ReviewCode | 82.7 | **93.3** | 91.6 | **96.9** | **91.1** |
| Corex-Retrieve | 84.6 | 92.6 | **95.6** | 68.8 | 85.6 |

Semi-structured Reasoning. We demonstrate the results on FinQA and ConvFinQA in Table 5. For these two challenging tasks, which require understanding heterogeneous information and performing calculations simultaneously (Lu et al., 2023b), methods such as CoT-SC offer limited gains. However, through the various cooperative paradigms, significant performance improvements can be achieved.
Table 5: Comparison of performance on the semi-structured reasoning tasks FinQA and ConvFinQA.

| | FinQA | ConvFinQA | Avg. |
|----------------|-------|-----------|------|
| CoT | 46.1 | 50.4 | 48.3 |
| CoT-SC(10) | 52.7 | 57.2 | 54.9 |
| PAL | 54.3 | 50.8 | 52.9 |
| Corex-Debate | 50.2 | 56.7 | 53.5 |
| Corex-ReviewNL | 52.5 | 52.3 | 52.4 |
| Corex-ReviewCode | **55.9** | 54.2 | 55.1 |
| Corex-Retrieve | 55.4 | **57.7** | **56.6** |

Due to the context length restriction of GPT-3.5-Turbo, our experiments on TAT-QA utilized GPT-3.5-Turbo-16k, with the respective results detailed in Appendix C.1 alongside the evaluations on the other tasks.

\(^3\)Due to the nature of commonsense reasoning tasks, the Review mode only utilizes NL reasoning chains.

Following our extensive experiments across 18 tasks, it emerges that the Debate mode is competent for tasks utilizing factual knowledge. For mathematical and counting tasks, the Review mode effectively mitigates errors within the reasoning chains and repairs flawed code. Across various tasks, the Retrieve mode consistently facilitates performance improvements to varying degrees.

5 ANALYSIS

In this section, we first aim to make the collaboration process transparent by delving into the models' internal behaviors. Then, the influence of different backbones is examined to observe how model capability affects performance. Further, we assess the efficiency of Corex.

5.1 IN-DEPTH ANALYSIS OF Corex STRATEGIES

Analysis of Interaction Rounds in Debate Mode. We study the number of rounds of communication in the Debate mode of Corex on five tasks, as depicted in Figure 6. Consensus can be reached swiftly by each team for the majority of problems. However, Corex enables LLMs to engage in more exhaustive discussions for problems on which consensus is hard to reach; e.g., over 10% of ConvFinQA problems require more than 3 rounds, though only a small proportion of problems need such extended interaction. Through observation, we also notice that the Debate mode exhibits favorable convergence properties, wherein the interactive process serves as a basis for the judge's decision-making.

Figure 6: Distribution of the number of debate rounds required to reach consensus.

Performance Enhancement per Review. We explore the incremental performance gains achieved in specific tasks with each review cycle in the Review mode. As demonstrated in Figure 7, we conduct analyses for Repeat Copy and GSM8K with Review\(_{Code}\), as well as BoolQ and Penguin with Review\(_{NL}\). The findings indicate that each review generally contributes to performance enhancement, yet occasional deviations leading to performance oscillations are also observed.

Figure 7: Performance gains across multiple rounds of review.

5.2 SYNERGIES BETWEEN DIFFERENT LLMs

Performance Variability with Diverse LLMs as Judges. The backbone LLMs of our agents can be diverse. In this part, we discuss the performance variations when employing different LLMs during the debate process. As shown in Figure 8, we deploy GPT-3.5-Turbo as the debaters and examine the dynamics when different LLMs take the role of the judge. The observations indicate that the capability of the judge positively correlates with task performance, and this relationship becomes more evident as task complexity escalates. Empirically, this can be attributed to the judge's role in the debate process, which requires understanding both the question and the reasoning processes of both parties.

Utilizing Different LLMs as Retrievers. In Retrieve mode, the role of the retriever can be played by various LLMs. Based on the candidate answers from GPT-3.5-Turbo agents, we here explore the impact of model selection on performance, as depicted in Figure 9.
| |------------------|-------|-----------|------| | CoT | 46.1 | 50.4 | 48.3 | | CoT-SC(10) | 52.7 | 57.2 | 54.9 | | PAL | 54.3 | 50.8 | 52.9 | | Corex-Debate | 50.2 | 56.7 | 53.5 | | Corex-Review\textsubscript{NL} | 52.5 | 52.3 | 52.4 | | Corex-Review\textsubscript{Code} | **55.9** | **54.2** | **55.1** | | Corex-Retrieve | 55.4 | **57.7** | **56.6** | impact of model selection on the performance, as depicted in Figure 9. Unlike the debate mode, our analysis reveals that the model capabilities exert a modest effect on the performance. Given that the performance upper bound is determined by the candidates’ capabilities, the outcomes using different LLMs as retrievers show minimal variance on tasks like ARC-c. Notably, our findings indicate that without the need for especially potent models as retrievers, we can still achieve favorable results. 5.3 COST-EFFECTIVENESS OF MULTI-MODEL COLLABORATIONS By encouraging collaboration between LLMs, we manage to reduce the costs associated with reasoning tasks while achieving comparable or even superior performance. Based on our analysis conducted on AddSub illustrated in Figure 10, it reveals that all three modes of Corex consistently match or surpass the prowess of other strong baselines. Significantly, the computational cost of our approach are substantially diminished in comparison to methods using majority voting. In achieving equivalent performance, the resource consumption of Corex is confined to a mere 5-10% of that expended by other strategies. To substantiate the generality, we’ve provided additional experiments in Appendix C.2 which further demonstrate a similar trend. Beyond the efficiency of computational costs, another advantage of Corex is its annotation efficiency, which reduces the reliance on curated demonstrations. Further experiments with varying numbers of demonstrations on this aspect can be found in Appendix C.3. 6 CONCLUSION We introduce Corex in this paper; a suite of strategies that transform LLMs into autonomous agents, thereby leveraging multi-model collaboration for complex reasoning. This offers a preliminary exploration into the LLM-based multi-model ecosystems. Through unlocking the synergies among LLMs, Corex empowers reasoning with enhanced factuality, faithfulness, and reliability through various collaboration paradigms. We conduct extensive evaluations across 18 tasks within 4 categories, and the results demonstrate superior performance compared to previous solutions. Moreover, our methods also exhibit multiple notable advantages including being task-agnostic, cost-effective, and annotation-efficient. We hope that this work may serve as a foundation for further research, offering novel perspectives in complex reasoning, collective intelligence, and autonomous agents. REFERENCES Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. Towards a human-like open-domain chatbot. *CoRR*, 2020. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. *ArXiv preprint*, 2022. BIG bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *Transactions on Machine Learning Research*, 2023. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In *NeurIPS*, 2020.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. ChatEval: Towards better LLM-based evaluators through multi-agent debate. *arXiv preprint*, 2023.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harri Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021a.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. *arXiv preprint*, 2022a.

Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. Exploring the use of large language models for reference-free text quality evaluation: An empirical study. *arXiv preprint*, 2023.

Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. FinQA: A dataset of numerical reasoning over financial data. In *Proc. of EMNLP*, 2021b.

Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang Ma, Sameena Shah, and William Yang Wang. ConvFinQA: Exploring the chain of numerical reasoning in conversational finance question answering. In *Proc. of EMNLP*, 2022b.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. *arXiv preprint*, 2022.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In *Proc. of NAACL*, 2019.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. *arXiv preprint*, 2018.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. *arXiv preprint*, 2021.

Ali Dorri, Salil S. Kanhere, and Raja Jurdak. Multi-agent systems: A survey. *IEEE Access*, 2018.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. *arXiv preprint arXiv:2305.14325*, 2023.
ijoqFqSC7p
- From looking at the generated videos, although the proposed method can more cleanly generate longer videos, it seems that the spatial structure of the video (e.g., the location of a cat) is very similar throughout the entire video. I believe this may be due to the repetitive nature of the shuffled noise repetitions, which are generally highly correlated with the structure of the resulting video. So it seems that the method may have a hard time generating more dynamic changes in long videos, such as a cat walking across the screen or scene/camera changes. Could the authors comment on this, or point to generated video examples with larger structural changes throughout the video?
FREE NOISE: TUNING-FREE LONGER VIDEO DIFFUSION VIA NOISE RESCHEDULING

Haonan Qiu1*, Menghan Xia2*, Yong Zhang2, Yingqing He2,3, Xintao Wang2, Ying Shan2, Ziwei Liu1*
1Nanyang Technological University 2Tencent AI Lab 3Hong Kong University of Science and Technology

ABSTRACT

With the availability of large-scale video datasets and the advances of diffusion models, text-driven video generation has achieved substantial progress. However, existing video generation models are typically trained on a limited number of frames, resulting in an inability to generate high-fidelity long videos during inference. Furthermore, these models support only single-text conditions, whereas real-life scenarios often require multi-text conditions as the video content changes over time. To tackle these challenges, this study explores the potential of extending the text-driven capability to generate longer videos conditioned on multiple texts. 1) We first analyze the impact of initial noise in video diffusion models. Building upon this observation, we propose FreeNoise, a tuning-free and time-efficient paradigm to enhance the generative capabilities of pretrained video diffusion models while preserving content consistency. Specifically, instead of initializing noises for all frames independently, we reschedule a sequence of noises for long-range correlation and perform temporal attention over them by window-based fusion. 2) Additionally, we design a novel motion injection method to support the generation of videos conditioned on multiple text prompts. Extensive experiments validate the superiority of our paradigm in extending the generative capabilities of video diffusion models. It is noteworthy that, compared with the previous best-performing method, which brought about 255% extra time cost, our method incurs only a negligible time cost of approximately 17%. Generated video samples are available at our website: http://haonanqiu.com/projects/FreeNoise.html.

1 INTRODUCTION

Diffusion models have brought breakthrough developments in image generation (Rombach et al., 2022), enabling users without any art background to easily create unique and personalized designs, graphics, and illustrations based on specific textual descriptions. Building upon this success, there is growing interest in extending this concept to video generation (He et al., 2022; Ge et al., 2023; Blattmann et al., 2023; Wang et al., 2023c; Luo et al., 2023). Because it targets higher-dimensional data, video diffusion modeling demands notably greater model capacity and data scale. As a result, current video diffusion models are generally trained on a small number of frames. Consequently, during the inference stage, the quality of the generated video tends to decrease as the video length increases, because longer videos are not supervised during the training stage.

One straightforward approach is to generate video fragments of the same length as the training videos and then stitch them together, eliminating the training-inference gap. However, this method results in disconnected and incoherent fragments. To address this issue, the fragments can be fused during the denoising process and smoothly connected in the final video (Wang et al., 2023a). However, long-distance fragments often have a large content gap to fuse, and this approach thus struggles to maintain content consistency in the long video.
Although some auto-regressive methods (Villegas et al., 2022) avoid this problem by progressively generating the next frame, content consistency is still hard to guarantee due to error accumulation.

*Corresponding Authors

In VideoLDM (Blattmann et al., 2023), each generated frame depends not only on the initial noise for the current frame but also on the initial noises for all frames. This means that resampling the noise of any frame will significantly influence other frames due to the full interaction facilitated by the temporal attention layers. This makes it challenging to introduce new content while maintaining the main subjects and scenes of the original video. To address this challenge, we inspect the temporal modeling mechanism of VideoLDM, where the temporal attention module is order-independent, whereas the temporal convolution module is order-dependent. Our experimental observations indicate that the per-frame noises serve as a foundation that determines the overall appearance, while their temporal order influences the content built upon that foundation.

Motivated by this, we propose FreeNoise, a tuning-free and time-efficient paradigm to achieve longer video inference. The key idea is to construct a sequence of noise frames with long-range correlation and perform temporal attention over them by way of window-based fusion. It mainly contains two key designs: Local Noise Shuffling and Window-Based Attention Fusion. By applying local noise shuffling to a sequence of fixed random noise frames for length extension, we obtain a sequence of noise frames with both internal randomness and long-range correlation. Meanwhile, the window-based attention fusion enables the pre-trained temporal attention modules to process frame sequences of arbitrary length. Particularly, the overlapped window slicing and merging operations only happen in the temporal attention, introducing no computational overhead to other modules of the VideoLDM, which benefits computational efficiency significantly.

In addition, most video generation models (Blattmann et al., 2023; Luo et al., 2023; Ge et al., 2023) only utilize a single-text condition to control the video even when multi-text conditions are given. For instance, for the sentence “A man sleeps on the desk and then reads the book”, which contains two stages, only one stage will be reflected in the generated video. This limitation arises from the fact that the training dataset usually contains only single-text conditions. However, in a single-shot scene, the main subject usually performs multiple actions. To address the challenge of generating videos based on multiple prompts without tuning the pretrained models, we propose Motion Injection. This approach leverages the characteristics of diffusion models, where different time steps recover varying levels of information (image layout, shapes of the objects, and fine visual details) during the denoising process (Patashnik et al., 2023; Zhang et al., 2023). It gradually injects new motion during the time steps associated with object shapes, following the completion of the previous motion. Importantly, this design does not introduce any additional inference time.

Our contributions are summarized as follows: 1) We investigate the temporal modeling mechanism of video diffusion models and identify the influence of initial noises.
2) We design a tuning-free paradigm for longer video generation, which notably outperforms the existing state of the art in both video quality and computational efficiency. 3) We propose an effective motion injection approach that achieves multi-prompt long video generation with decent visual coherence.

2 RELATED WORK

2.1 VIDEO DIFFUSION MODELS

Latent Diffusion Models (LDM). Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are generative models that formulate a fixed forward diffusion process to gradually add noise to the data \( x_0 \sim p(x_0) \) and learn a denoising model to reverse this process. The forward process contains \( T \) timesteps, which gradually add noise to the data sample \( x_0 \) to yield \( x_t \) through a reparameterization trick:

\[ q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1 - \beta_t}\, x_{t-1}, \beta_t I), \quad q(x_t | x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1 - \bar{\alpha}_t) I) \]

where \( \beta_t \) is a predefined variance schedule, \( t \) is the timestep, \( \alpha_t = 1 - \beta_t \), and \( \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i \). The reverse denoising process obtains less noisy data \( x_{t-1} \) from the noisy input \( x_t \) at each timestep:

\[ p_\theta(x_{t-1} | x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)) \]

Here \( \mu_\theta \) and \( \Sigma_\theta \) are determined through a noise prediction network \( \epsilon_\theta(x_t, t) \), which is supervised by the following objective function, where \( \epsilon \) is the sampled ground-truth noise and \( \theta \) denotes the learnable network parameters:

\[ \min_\theta \mathbb{E}_{x_0, \epsilon, t} \| \epsilon - \epsilon_\theta(x_t, t) \|_2^2 \]

Once the model is trained, we can synthesize data \( x_0 \) from random noise \( x_T \) by sampling \( x_t \) iteratively. Recently, to ease the modeling complexity of high-dimensional data like images, the Latent Diffusion Model (LDM) (Rombach et al., 2022) was proposed to formulate the diffusion and denoising process in a learned low-dimensional latent space. It is realized through perceptual compression with an autoencoder, where an encoder \( E \) maps \( x_0 \in \mathbb{R}^{3 \times H \times W} \) to its latent code \( z_0 \in \mathbb{R}^{4 \times H' \times W'} \) and a decoder \( D \) reconstructs the image \( x_0 \) from \( z_0 \). The diffusion model \( \theta \) then operates on the image latent variables to predict the noise \( \hat{\epsilon} \):

\[ z_0 = E(x_0), \quad \hat{x}_0 = D(z_0) \approx x_0, \quad \hat{\epsilon} = \epsilon_\theta(z_t, y, t), \]

The network is a sequence of the following layers, where \( h \) represents the hidden feature in a certain layer and \( y \) denotes conditions like text prompts. Conv and ST are a residual convolutional block and a spatial transformer, respectively:

\[ h' = \text{ST}(\text{Conv}(h, t), y), \quad \text{ST} = \text{Proj}_{\text{in}} \circ (\text{Attn}_{\text{self}} \circ \text{Attn}_{\text{cross}} \circ \text{MLP}) \circ \text{Proj}_{\text{out}}, \]

The Video Latent Diffusion Model (VideoLDM) (Blattmann et al., 2023) extends LDM to video generation and trains a video diffusion model in a video latent space. The latent \( z_0 \in \mathbb{R}^{4 \times N \times H' \times W'} \) becomes four-dimensional, and \( \theta \) consequently adopts a temporal-aware architecture consisting of basic layers as in the following equation, where Tconv denotes a temporal convolutional block and TT denotes a temporal transformer, serving as cross-frame operation modules.
\[ h' = \text{TT}(\text{ST}(\text{Tconv}(\text{Conv}(h, t)), y)), \quad \text{TT} = \text{Proj}_{\text{in}} \circ (\text{Attn}_{\text{temp}} \circ \text{Attn}_{\text{temp}} \circ \text{MLP}) \circ \text{Proj}_{\text{out}}, \]

Following the same architecture, several similar text-to-video models have been proposed (Blattmann et al., 2023; Wang et al., 2023b), primarily differing in training strategies or auxiliary designs (such as fps conditioning, image-video joint training, etc.). AlignYourLatent (Blattmann et al., 2023) trains only the temporal blocks on top of a pre-trained text-to-image model (i.e., Stable Diffusion (SD) (Rombach et al., 2022)). In contrast, ModelScope (Wang et al., 2023b) fully trains the entire model with an SD checkpoint pre-loaded.

2.2 LONG VIDEO GENERATION

Generating long videos poses challenges due to the increased complexity introduced by the temporal dimension, resource limitations, and the need to maintain content consistency. Many GAN-based methods (Skorokhodov et al., 2022; Brooks et al., 2022; Ge et al., 2022) and diffusion-based methods (Harvey et al., 2022; Voleti et al., 2022; Yu et al., 2023; He et al., 2022; Yin et al., 2023; Ho et al., 2022) have been proposed to generate long videos. Despite their advantages, those approaches necessitate extensive training on large long-video datasets. Recently, a tuning-free method, Gen-L-Video (Wang et al., 2023a), was proposed; it extends videos by merging overlapping sub-segments into a smoothly changing long segment during the denoising process. However, it fails to preserve content consistency due to the large content gap among those sub-segments. Benefiting from the design of noise rescheduling, our paradigm FreeNoise preserves content consistency well in the generated long videos. Meanwhile, Gen-L-Video costs around 255% extra inference time, while FreeNoise costs only approximately 17% additional inference time.

Another demand in long video generation is multi-prompt control, as a single-text condition is often insufficient to describe content that evolves over time. While some recent works (Yin et al., 2023; He et al., 2023; Wang et al., 2023a) have explored this direction, they introduce a new shot when a new prompt is provided. Phenaki (Villegas et al., 2022) utilizes an auto-regressive structure to generate one-shot long videos under multi-text conditions but suffers from noticeable content variation. In our paradigm, we can generate multiple motions while preserving the main subjects and scenarios.

3 METHODOLOGY

Given a VideoLDM pre-trained on videos with a fixed number of \( N_{\text{train}} \) frames, our goal is to generate longer videos (e.g., \( M \) frames, where \( M > N_{\text{train}} \)) by utilizing it for inference without compromising quality. We require the generated \( M \) video frames to be semantically accurate and temporally coherent. In the following sections, we first study the temporal modeling mechanism that challenges VideoLDM in generating longer videos. Subsequently, we introduce our efficient, tuning-free approach to overcome these challenges. To further accommodate multi-prompt settings, we propose a motion injection paradigm to ensure visual consistency.

Figure 1: Challenges of longer video inference. The random noises $\epsilon_1$ and $\epsilon_2$ have the same number of frames as the model was trained on. All the results are generated under the same text prompt: “a man is boating on a lake”.
3.1 Observation and Analysis

Attentive-Scope Sensitivity. For longer video generation via VideoLDM, a straightforward solution is to feed $M$ frames of random noise to the model and generate the video through iterative denoising steps. Unfortunately, this fails to generate the desired result, as the example in Figure 1(b) illustrates. The reason is easy to understand: the temporal attention modules perform global cross-frame operations that make all frames attentive to each other; however, they are strictly trained to attend to $N_{\text{train}}$ neighboring frames and struggle to handle more frames properly. In this case, the generated videos tend to exhibit semantic incompleteness or temporal jittering.

Noise-Induced Temporal Drift. To bypass the issue above, one may argue for employing temporal sliding windows so that the temporal attention module always processes a fixed number of frames. Indeed, this solution produces the desired content with smooth temporal transitions. However, it struggles to maintain long-range visual consistency, as exemplified in Figure 1(d). To identify the underlying causes, we explore the temporal modeling mechanism, which consists of two kinds of cross-frame operations: temporal attention and temporal convolution. Temporal attention is order-independent, whereas temporal convolution is order-dependent. When temporal convolutions are removed, the output video frames hold a strict correspondence with the initial noise frames, irrespective of shuffling. In contrast, depending on the noise frame order, the temporal convolution introduces new content to ensure the output video's temporal continuity. Figure 2 demonstrates these phenomena. This implies the conjecture that the per-frame noises serve as a foundation that determines the overall appearance, while their temporal order influences the content built upon that foundation. It is therefore challenging for the temporal modules to achieve global coherence when independently sampled noises are combined for longer video generation.

3.2 Noise Rescheduling for Long-Range Correlation

To circumvent the challenges mentioned above, we propose a noise rescheduling paradigm for longer video inference. The key idea is to construct a sequence of noise frames with long-range correlation and perform temporal attention over them by way of window-based fusion. To obtain semantically meaningful and visually smooth videos, the model inference should satisfy two basic requirements: (i) the temporal attention only accepts a fixed number of $N_{\text{train}}$ frames, to bypass the attentive-scope sensitivity issue; (ii) every $N_{\text{train}}$ frames of features fed to the temporal attention always correspond to $N_{\text{train}}$ frames of independent and identically distributed noises; otherwise, generation fails because of out-of-distribution input. Specifically, we propose two effective designs to achieve this goal.

Figure 3: Overview of our proposed method. Given $N_{\text{train}}$ frames of random noise, we first extend them to the target $M$ frames as the initial noise $z_T$ through noise rescheduling. Then, in the iterative denoising process, the multi-prompt injection paradigm is conducted in the spatial cross-attention layers (where $t$ denotes the timestep, $l$ the layer number, and $P$ the text prompt), and the sliding-window-based attention fusion is performed in the temporal self-attention layers.
**Local Noise Shuffle Unit.** To acquire a video with $M$ frames ($M > N_{\text{train}}$), we initialize $N_{\text{train}}$ frames of random noise $\{\epsilon_1, \epsilon_2, \ldots, \epsilon_{N_{\text{train}}}\}$ independently and reschedule them for the remaining length:

$$[\epsilon_1, \epsilon_2, \ldots, \epsilon_{N_{\text{train}}}, \; \text{shuffle}(\epsilon_1, \epsilon_2, \ldots, \epsilon_S), \; \ldots, \; \text{shuffle}(\epsilon_{S_i+1}, \epsilon_{S_i+2}, \ldots, \epsilon_{S_i+S}), \; \ldots], \quad (7)$$

where $S$ denotes the size of the local shuffle unit and is a divisor of $N_{\text{train}}$, $S_i = iS \bmod N_{\text{train}}$ is the starting index of the $i$-th shuffle unit, and the operator $\text{shuffle}(\cdot)$ denotes shuffling the order of the frame sequence. Through such a rescheduling strategy, we obtain a sequence of noise frames with both internal randomness and long-range correlation. Note that the randomness introduced by temporal shuffling has considerable capacity to bring about content variation, as evidenced by Figure 2.

**Window-Based Attention Fusion.** Given the longer initial noise frames, the spatial modules of VideoLDM process them frame-wise, and the temporal convolution processes the frames in a sliding window, the same as during training. In contrast, the temporal attention is performed in a global manner, and sequences longer than $N_{\text{train}}$ frames trigger the attentive-scope sensitivity. So, we need to adapt the computation of temporal attention so that it processes the longer sequence in the same way as it was trained. Specifically, instead of calculating temporal attention over all frames, we only calculate it within each local sliding window of size $U = N_{\text{train}}$:

$$F^j_{i:i+U} = \text{Attn}_{\text{temp}}(Q_{i:i+U}, K_{i:i+U}, V_{i:i+U}) = \text{Softmax} \left( \frac{Q_{i:i+U}K^T_{i:i+U}}{\sqrt{d}} \right) V_{i:i+U}, \quad (8)$$

where $i$ is the starting frame index of the window and $j$ is the window index. Here, we take the sliding stride to be the same value as $S$ (the size of the noise shuffle unit), so that each sliding window covers exactly $N_{\text{train}}$ frames of independent and identically distributed noises, i.e., $\{\epsilon_1, \epsilon_2, \ldots, \epsilon_{N_{\text{train}}}\}$ in a shuffled order. Figure 3 illustrates the diagram of our attention computation. As each frame is involved in the attention computation of multiple local windows, we need to fuse these attentive outputs to achieve a smooth temporal transition. According to our experiments, naively taking the average causes dramatic variation at the boundaries of windows. Therefore, we propose to fuse the window-based outputs in a temporally smooth manner, namely computing a weighted sum with the frame's index distance from each window center as the weight:

$$F^o_i = \frac{\sum_j F^j_i \left( \frac{U}{2} - |i - c^j| \right)}{\sum_j \left( \frac{U}{2} - |i - c^j| \right)}, \quad (9)$$

where $|\cdot|$ denotes absolute value and $c^j$ is the central frame index of the $j$-th window that covers frame $i$. $F^o_i$ is the output of the current temporal attention layer for frame $i$. Note that the overlapped window slicing and merging operations only happen in the temporal attention, introducing no computational overhead to other modules of the U-Net, which benefits computational efficiency significantly.
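To make Eqs. (7)-(9) concrete, here is a minimal PyTorch-style sketch of noise rescheduling and window-based attention fusion; the function names are illustrative, `attn` stands in for a pretrained temporal attention module, and the code assumes (M - U) is divisible by S, as in the paper's default U = 16, S = 4, M = 64.

```python
import torch

def reschedule_noise(base_noise: torch.Tensor, M: int, S: int) -> torch.Tensor:
    """Extend N_train i.i.d. noise frames to M frames (Eq. 7, 0-based indices).

    base_noise: [N_train, C, H, W]. Frames beyond N_train reuse the base frames
    in chunks of size S (cycling through them), with each chunk locally shuffled.
    """
    n_train = base_noise.shape[0]
    frames = [base_noise[i] for i in range(n_train)]
    unit = 0
    while len(frames) < M:
        start = (unit * S) % n_train                 # S_i = iS mod N_train
        idx = (torch.randperm(S) + start).tolist()   # local shuffle of one unit
        frames.extend(base_noise[j] for j in idx)
        unit += 1
    return torch.stack(frames[:M])                   # [M, C, H, W]

def window_fused_attention(x: torch.Tensor, attn, U: int, S: int) -> torch.Tensor:
    """Sliding-window temporal attention with distance-weighted fusion (Eqs. 8-9).

    x: [M, D] per-frame features; attn: temporal attention over a window of U frames.
    """
    M = x.shape[0]
    out = torch.zeros_like(x)
    weight = torch.zeros(M, 1, dtype=x.dtype, device=x.device)
    for start in range(0, M - U + 1, S):             # windows of size U, stride S
        win = attn(x[start:start + U])               # F^j, Eq. (8)
        center = start + (U - 1) / 2.0               # window center c^j
        for k in range(U):
            w = U / 2.0 - abs(start + k - center)    # weight U/2 - |i - c^j|
            out[start + k] += w * win[k]
            weight[start + k] += w
    return out / weight                              # normalized weighted sum, Eq. (9)
```

Because every covered frame lies strictly inside some window, its accumulated weight is at least 0.5, so the normalization is well defined under the stated divisibility assumption.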
3.3 Motion Injection for Multi-Prompt Video Generation

Since the aforementioned inference paradigm enables the generation of longer videos, it is natural to explore the potential for synthesizing videos with continuously changing events by utilizing multiple text prompts. This is more challenging because the generation process introduces additional varying factors (i.e., text prompts) that strongly affect the video content. In LDMs, changing only one verb in a text prompt can lead to totally different video content, even with the same initial noises (Cao et al., 2023). Regarding this, we propose a motion injection strategy to modulate the influence of multiple text prompts on the generated video content. The key idea is to generate the whole video with the first prompt at most denoising steps (which are more correlated with scene layout and appearance) and use the target prompt only at some specific steps (which are more correlated with object shapes and poses). In VideoLDM, text prompts are incorporated through the cross-attention mechanism:

\[ \tilde{F} = \text{Attn}_{\text{cross}}(\tilde{Q}, \tilde{K}, \tilde{V}), \quad \tilde{Q} = l_{\tilde{Q}}(F_{\text{pre}}), \quad \tilde{K} = l_{\tilde{K}}(P), \quad \tilde{V} = l_{\tilde{V}}(P), \quad (10) \]

where \(F_{\text{pre}}\) is the intermediate feature of the network, \(P\) is the text embedding from the CLIP (Radford et al., 2021) encoder, and \(l_{\tilde{Q}}, l_{\tilde{K}}, l_{\tilde{V}}\) are learned linear layers. According to recent research (Balaji et al., 2022; Cao et al., 2023), LDMs synthesize different levels of visual content—scene layout, shapes of the objects, and fine details—in the early, middle, and late steps of the denoising process, respectively. In our scenario, we expect the overall layout and object appearance to be similar across prompts, while the object poses or shapes should follow the target text prompts. To this end, we gradually inject new motion through the cross-attention layers during the time steps associated with object shapes, denoted as \([T_\alpha, T_\beta]\). For the sake of simplicity, we present our method in the case of two text prompts:

\[ \text{Motion Injection} := \begin{cases} \text{Attn}_{\text{cross}}(\tilde{Q}, l_{\tilde{K}}(\tilde{P}), l_{\tilde{V}}(\tilde{P})), & \text{if } T_\alpha < t < T_\beta \text{ or } l > L, \\ \text{Attn}_{\text{cross}}(\tilde{Q}, l_{\tilde{K}}(P_1), l_{\tilde{V}}(P_1)), & \text{otherwise} \end{cases} \quad (11) \]

\[ \tilde{P} = \begin{cases} P_1, & \text{if } n < N_\gamma, \\ P_1 + \frac{n-N_\gamma}{N_\tau-N_\gamma}(P_2 - P_1), & \text{if } N_\gamma \leq n < N_\tau, \\ P_2, & \text{otherwise} \end{cases} \quad (12) \]

where \(P_i\) denotes the \(i\)-th prompt and \(\tilde{P}\) denotes the target prompt of motion injection, which depends on the frame index \(n\); the frames within \([N_\gamma, N_\tau)\) are assigned the linearly interpolated embedding to achieve a smooth transition. \(l > L\) denotes the cross-attention layers after layer \(L\) of the U-Net (e.g., the decoder part). This means that the decoder part is always provided with the target prompt \(\tilde{P}\) across all denoising steps, because the decoder features are more tightly aligned with the semantic structures, as observed in MasaCtrl (Cao et al., 2023).
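A minimal sketch of the prompt-selection logic in Eqs. (11)-(12) follows, operating on CLIP text embeddings as plain tensors; the function names and the `L_decoder_start` layer-index convention are illustrative assumptions, not the authors' released code.

```python
import torch

def blended_prompt(P1: torch.Tensor, P2: torch.Tensor, n: int,
                   N_gamma: int, N_tau: int) -> torch.Tensor:
    """Frame-wise target prompt embedding P~ (Eq. 12): interpolate in [N_gamma, N_tau)."""
    if n < N_gamma:
        return P1
    if n < N_tau:
        return P1 + (n - N_gamma) / (N_tau - N_gamma) * (P2 - P1)
    return P2

def prompt_for_cross_attn(P1: torch.Tensor, P_blend: torch.Tensor, t: int,
                          layer: int, T_alpha: int, T_beta: int,
                          L_decoder_start: int) -> torch.Tensor:
    """Per-step, per-layer prompt selection (Eq. 11 sketch).

    Decoder layers (layer >= L_decoder_start) always receive the blended target
    prompt; other layers receive it only within the shape-related step interval
    (T_alpha, T_beta) and otherwise keep P1 to preserve layout and appearance.
    """
    if T_alpha < t < T_beta or layer >= L_decoder_start:
        return P_blend
    return P1
```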
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Setting up. We conduct experiments based on the open-source T2V diffusion model VideoCrafter (Chen et al., 2023) for both single-prompt and multi-prompt longer video generation. The video diffusion model is trained on 16 frames and is required to sample 64 frames in the inference stage. The window size and stride are set to \(U = 16\) and \(S = 4\) by default.

Evaluation Metrics. To evaluate our paradigm, we report Fréchet Video Distance (FVD) (Unterthiner et al., 2018), Kernel Video Distance (KVD) (Unterthiner et al., 2019), and CLIP Similarity (CLIP-SIM) (Radford et al., 2021). Since longer-inference methods are supposed to preserve the quality of the original fixed-length inference, we calculate FVD between the originally generated short videos and sub-clips of corresponding length taken from the generated longer videos. CLIP-SIM is used to measure the content consistency of generated videos by calculating the semantic similarity among adjacent frames.

Table 1: Quantitative comparison on longer video generation.

| Method | FVD (↓) | KVD (↓) | CLIP-SIM (↑) | Inference Time (↓) |
|--------|---------|---------|--------------|-------------------|
| Direct | 737.61 | 359.11 | 0.9104 | 21.97s |
| Sliding| 224.55 | 44.09 | 0.9438 | 36.76s |
| GenL | 177.63 | 21.06 | 0.9370 | 77.89s |
| Ours | **85.83** | **7.06** | **0.9732** | **25.75s** |

Figure 4: Qualitative comparisons of longer video generation. Left prompt: “A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect”. Right prompt: “A very happy fuzzy panda dressed as a chef eating pizza in the New York street food truck”.

4.2 LONGER VIDEO GENERATION

We mainly compare our proposed FreeNoise with other tuning-free longer video generation methods based on diffusion models. We first directly sample 64 frames (Direct). Then we adopt temporal sliding windows so that the temporal attention module always processes a fixed number of frames (Sliding). The closest work to our paradigm is Gen-L-Video (GenL), which extends the video smoothly by merging overlapping sub-segments during the denoising process.

The synthesis results are shown in Figure 4. In the first row, the dog has severe artifacts and the background of space is not clear. Obviously, directly sampling 64 frames through a model trained on 16 frames yields poor-quality results due to the training-inference gap. When we use temporal sliding windows, the training-inference gap is eliminated and more vivid videos are generated. However, this operation ignores long-range visual consistency; the resulting subject and background both look significantly different across frames. Gen-L-Video promotes the integration of frames by averaging the overlapping sub-segments and performs better in some cases. However, it fails to maintain long-range visual consistency and suffers from content mutation. Benefiting from noise rescheduling, all sub-segments in our paradigm share similar main subjects and scenarios while still containing considerable content variation, keeping the main content consistent even as the generated video becomes longer. The results in Figure 4 show that our FreeNoise successfully renders high-fidelity longer videos, outperforming all other methods.

In addition, we compare the inference time of these methods on an NVIDIA A100. As presented in Table 1, Gen-L-Video exhibits the longest inference time, nearly four times that of direct inference. This is primarily attributed to its default setting, which involves nearly global sampling of the entire set of latents four times. In contrast, our paradigm brings less than 20% extra inference time by limiting most additional computations to the temporal attention layers. Table 1 shows the quantitative results.
Table 1: Quantitative comparison on longer video generation.

| Method | FVD (↓) | KVD (↓) | CLIP-SIM (↑) | Inference Time (↓) |
|--------|---------|---------|--------------|-------------------|
| Direct | 737.61 | 359.11 | 0.9104 | 21.97s |
| Sliding| 224.55 | 44.09 | 0.9438 | 36.76s |
| GenL | 177.63 | 21.06 | 0.9370 | 77.89s |
| Ours | **85.83** | **7.06** | **0.9732** | **25.75s** |

Figure 4: Qualitative comparisons of longer video generation. Left prompt: “A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect”. Right prompt: “A very happy fuzzy panda dressed as a chef eating pizza in the New York street food truck”.

4.1 Longer Video Generation

We mainly compare our proposed FreeNoise with other tuning-free longer video generation methods based on diffusion models. We first directly sample 64 frames (Direct). Then we adopt temporal sliding windows so that the temporal attention module always processes a fixed number of frames (Sliding). The work closest to our paradigm is Gen-L-Video (GenL), which extends the video smoothly by merging overlapping sub-segments during the denoising process.

The synthesis results are shown in Figure 4. In the first row, the dog exhibits severe artifacts and the space background is unclear: directly sampling 64 frames with a model trained on 16 frames yields poor-quality results because of the training-inference gap. With temporal sliding windows, the training-inference gap is eliminated and more vivid videos are generated. However, this operation ignores long-range visual consistency, so both the subject and the background differ noticeably across frames. Gen-L-Video promotes the integration of frames by averaging the overlapping sub-segments and performs better in some cases; however, it still fails to maintain long-range visual consistency and suffers from content mutation. Benefiting from noise rescheduling, all sub-segments in our paradigm share similar main subjects and scenarios while still containing considerable content variation, so the main content is preserved even as the generated video becomes longer. The results in Figure 4 show that our FreeNoise renders high-fidelity longer videos, outperforming all other methods.

In addition, we compare the inference time of these methods on an NVIDIA A100. As presented in Table 1, Gen-L-Video exhibits the longest inference time, nearly four times that of direct inference. This is primarily attributed to its default setting, which samples nearly the entire set of latents four times. In contrast, our paradigm adds less than 20% extra inference time by limiting most additional computation to the temporal attention layers.

Table 1 also reports the quantitative results. The quality of videos generated by direct inference is severely damaged by the training-inference gap, yielding the worst FVD and KVD. The video quality of the sliding method and Gen-L-Video is clearly improved but still worse than that of FreeNoise. Our FreeNoise also attains the best CLIP-SIM, indicating the superiority of our method in content consistency.

Table 2: User study. Users are required to pick the best one among our proposed FreeNoise and the baseline methods in terms of content consistency, video quality, and video-text alignment.

| Method | Content Consistency | Video Quality | Video-Text Alignment |
|----------|---------------------|---------------|----------------------|
| Direct | 11.73% | 10.80% | 11.11% |
| Sliding | 6.17% | 6.79% | 8.02% |
| GenL | 24.38% | 26.85% | 29.63% |
| Ours | **57.72%** | **55.56%** | **51.23%** |

Figure 5: Qualitative comparisons of multi-prompt video generation. Left multi-prompt: “A camel running on the snow field” → “A camel standing on the snow field”. Right multi-prompt: “An astronaut resting on a horse” → “An astronaut riding a horse”.

In addition, we conducted a user study to evaluate our results by human subjective perception. Users were asked to watch the generated videos of all methods, with each example displayed in a random order to avoid bias, and then to pick the best one in three evaluation aspects. As shown in Table 2, our approach achieves the highest scores in all aspects: content consistency, video quality, and video-text alignment, outperforming the baseline methods by a large margin. For content consistency in particular, our method received almost twice as many votes as the second place.

Multi-prompt Video Generation. We extend our paradigm to multi-prompt video generation by introducing the motion injection method. As shown in Figure 5, our method achieves both visual coherence and motion continuity: the camel gradually changes from running to standing while the distant mountains keep a consistent appearance, and the astronaut naturally transitions from resting on a horse to riding it. In contrast, when we use the noise rescheduling strategy alone, without motion injection, the scene undergoes unexpected changes, because a new prompt often introduces unexpected new content beyond the text description due to the inherent properties of the Stable Diffusion model. It can still work in some cases where the main objects and scenario are not obviously changed by the new prompt (like the bigfoot case in Figure 5). We also compare with the existing tuning-free state-of-the-art method Gen-L-Video. Figure 5 shows that Gen-L-Video also achieves the transition between the two actions; however, owing to its content-mutation drawback, the generated objects and scenarios change meaninglessly over time.

4.2 Ablation Study

Ablation for Noise Rescheduling. As noise rescheduling plays an essential role in our method, we conduct an ablation to validate its importance by removing it from our proposed inference paradigm. In addition, we implement another variant of our method with the local noise shuffle unit size set to \(S = 8\) (the sliding window stride is changed to 8 accordingly). As shown in Figure 6, without noise rescheduling, our method fails to keep the content consistent: although each frame still matches the text description, the frames are not semantically connected. And when the sliding window stride is 8, the synthesized features across windows interact less tightly; for example, the shape of the bowl changes gradually in Figure 6. Since a stride of 4 already achieves sufficient content consistency, we do not consider the smaller stride of 2, which would double the extra inference time.
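For reference, one plausible reading of the noise rescheduling ablated above is: tile the base noise of the training-length window along time, then shuffle frame indices within local units of size \(S\), so that distant frames draw from the same noise pool (long-range consistency) without being exact copies. The sketch below is a simplified interpretation under that assumption, not the released implementation:

```python
import torch

def reschedule_noise(total_frames: int, window: int = 16, unit: int = 4,
                     shape=(4, 40, 64), generator=None) -> torch.Tensor:
    """Build initial noise for `total_frames` from one `window`-frame base.

    The base noise is tiled along time, then frame indices are shuffled
    within each local unit of size `unit`, so distant frames stay
    correlated (same noise pool) while avoiding exact repetition.
    """
    base = torch.randn(window, *shape, generator=generator)
    reps = -(-total_frames // window)                 # ceil division
    noise = base.repeat(reps, *([1] * len(shape)))[:total_frames]
    for start in range(window, total_frames, unit):   # keep first window fixed
        end = min(start + unit, total_frames)
        perm = torch.randperm(end - start, generator=generator) + start
        noise[start:end] = noise[perm]
    return noise
```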
Ablation for Motion Injection. To show the effectiveness of our design choices in motion injection, we study its two main hyper-parameters: layer selection and timestep selection, as expressed in Equation 11. In our design, the last \(L\) cross-attention layers of the U-Net (i.e., the decoder part) are always provided with the target prompt \(\tilde{P}\) across all denoising steps, while \(P_1\) maintains the layout and visual details through the cross-attention layers before the last \(L\) at some denoising steps. For comparison, we construct two variations in which \(P_1\) controls only the decoder part and all layers, respectively. A third variation allows \(P_1\) to control only the decoder part across all denoising steps. Figure 7 shows the results of these variations. When \(P_1\) is only allowed to control the decoder part, the running motion of the horse is suppressed, and the appearance of the horse changes noticeably (Figure 7(a)). When \(P_1\) is allowed to control both the encoder and the decoder part, the appearance of the horse is kept but the running motion is still suppressed (Figure 7(b)). This is because the decoder features are more tightly aligned with the semantic structures, as observed in MasaCtrl (Cao et al., 2023). Compared with the layer factors, the choice of timesteps has relatively less influence. Even if we allow \(P_1\) to control only the decoder part across all denoising steps, the horse can still switch smoothly from running to standing. However, this influences the generation of layout and visual details: a section of the horse’s leg suddenly disappears (Figure 7(c)), whereas the same leg appears bent and curled up in the corresponding frame of our final result (Figure 7(d)). The selection of time steps therefore helps to achieve more precise control at the content level.

5 CONCLUSION

In this study, we addressed the limitations of current video generation models, which are trained on a limited number of frames and support only single-text conditions. We explored the potential of extending text-driven generative models to produce high-fidelity long videos conditioned on multiple texts. By analyzing the impact of initial noise in video diffusion models, we proposed a tuning-free and time-efficient paradigm that enhances the generative capabilities of pretrained models while maintaining content consistency. Additionally, we introduced a novel motion injection method to support multi-text conditioned video generation. Extensive experiments confirmed the superiority of our paradigm in extending the generative capabilities of video diffusion models. Notably, our method achieves this while incurring only approximately 17% additional time cost, compared to the previous best-performing method, which required 255% extra time.

6 ETHICS STATEMENT

The primary objective of this project is to empower individuals without specialized expertise to create video art more effectively. Our paradigm, built on a pretrained video diffusion model, assists the model in generating longer videos.
It is important to note that the content generated by our tuning-free paradigm remains rooted in the original model. As a result, regulators only need to oversee the original video generation model to ensure adherence to ethical standards; our algorithm does not introduce any additional ethical concerns.

7 REPRODUCIBILITY STATEMENT

We have described the algorithm and implementation details thoroughly in the paper. A researcher familiar with video diffusion models should be able to reproduce our method. In addition, we have implemented our FreeNoise on three advanced video generation models.

- VideoCrafter (Chen et al., 2023): https://github.com/AILab-CVC/FreeNoise
- AnimateDiff (Guo et al., 2023): https://github.com/arthur-qiu/FreeNoise-AnimateDiff
- LaVie (Wang et al., 2023d): https://github.com/arthur-qiu/FreeNoise-LaVie

8 ACKNOWLEDGEMENTS

This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2022-01-035T), the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012) and NTU NAP.
zqVvdn0NQM
Using different inputs for the decision tree (DT) model (which uses a three dimensional input consisting of mean (R,G,B) values) and the deep learning (DL) models (which use raw images) does not seem to allow a fair comparison between their explanations.
STOP OVERKILLING SIMPLE TASKS WITH BLACK-BOX MODELS, USE MORE TRANSPARENT MODELS INSTEAD

Anonymous authors
Paper under double-blind review

ABSTRACT

The ability of deep learning-based approaches to extract features autonomously from raw data while outperforming traditional methods has led to several breakthroughs in artificial intelligence. However, it is well known that deep learning models suffer from an intrinsic opacity, making it difficult to explain why they produce specific predictions. This is problematic not only because it hinders debugging but, most importantly, because it negatively affects the perceived trustworthiness of the systems. What is often overlooked is that many relatively simple tasks can be solved efficiently and effectively with data processing strategies paired with traditional models that are inherently more transparent. This work highlights the frequently neglected perspective of using knowledge-based and explainability-driven problem-solving in ML. To support our guidelines, we propose a simple strategy for solving the task of classifying the ripeness of banana crates. This is done by planning explainability and model design together. We showcase how the task can be solved using opaque deep learning models and with more transparent strategies. Notably, there is a minimal loss of accuracy but a significant gain in explainability that is truthful to the model’s inner workings. Additionally, we perform a user study to evaluate the perception of explainability by end users and discuss our findings.

1 INTRODUCTION

Over the last decade, Machine Learning (ML) research has been increasingly focused on developing new deep models based on Artificial Neural Networks (ANNs). Such methods have raised the bar in accuracy for numerous cognitive tasks, leading to new and exciting opportunities as well as serious challenges. Among these challenges, explainability has sparked a vast amount of discourse and debate; briefly, it can be understood as the endogenous process of communicating information about the model and data to foster human understanding of the decision-making process of such models (Rizzo et al., 2022). This is no trivial task, especially for modern deep models, as these are highly complex and rely upon billions of opaque parameters that must be learned during training. DL models’ incredible performance paired with their inner opacity constitutes a concrete problem. This is because explainability is crucial to verify the model’s properties, such as fairness and trustworthiness, especially in high-stakes decision-making environments. With the ever-growing use of AI in many application fields, policymakers are supporting the need for explainability (Selbst & Powles, 2017). In Europe, for example, a significant effort to regulate models’ decisions by endorsing the user’s right to an explanation is being made in the writing of the AI Act (EU, 2021). Unfortunately, it is clear how this clashes with much of the design process of DL models, which is generally guided by researchers’ intuition, relies on trial and error for tuning, and lacks a holistic approach, including the upstream definition of an explanation strategy. Recently, a theoretical framework proposed by Rizzo et al. (2022) has tried to provide common ground for the meaning of keywords used with no real shared meaning in the XAI community. We resort to this framework for our backing definitions of the terms above.
To briefly summarise these notions, an explanation is an answer to a why question derived from interpreting some evidence (i.e., factual information). This splits the concept of explanation into two atomic components, evidence and interpretation: the former speaks of how much of the model we can explain (i.e., how much our evidence is involved in the model’s computation), while the latter tells how the evidence is transformed inside the model. Interpretations are hypotheses; thus, they should be tested for faithfulness (i.e., does the interpretation capture what the model is doing?) and plausibility (i.e., does the interpretation align with the stakeholders’ intuition of how the model works?). Lastly, the information content of an explanation should be presented to the user through an eXplanation User Interface (XUI) (e.g., text, plots, interactive interfaces) to verify the effectiveness of the knowledge transfer (Rizzo et al., 2022).

A broad spectrum of methods is now designed to obtain minor accuracy improvements over predecessors. Unfortunately, the speed at which these are being developed far outpaces the development of strategies capable of explaining them. Research towards explainability methods has nevertheless brought exciting results, with milestone techniques such as SHAP (SHapley Additive exPlanations) (Lundberg & Lee, 2017). Such a method attempts to explain the prediction of any classifier by using an approach grounded in game theory. While still widely used, it has received criticism, as it may provide explanations that are, at the very least, disputable (Kumar et al., 2020; Alvarez-Melis & Jaakkola, 2018). Unfortunately, using one-size-fits-all methods such as SHAP does not work around the need for a better overall understanding of the designed solution. In this paper, we highlight an approach to problem-solving in ML that draws from often-forgotten simple ML models and applies a modern, all-around design and analysis of model explainability. We showcase our strategy with a simple, but hopefully very clear, practical example relevant to the industry.

1.1 Task and Approach

To showcase our design strategy, we analyse a straightforward real-world scenario and how the aforementioned concepts of accuracy and explainability affect it. Our target task is the classification of the ripeness of banana crates on a scale from 1 (least ripe) to 4 (ripest) (see Fig. 1 for an example). In our approach, we design for competitive accuracy and explainability simultaneously. To tackle the classification task, we select a pool of three DL methods: (i) a simple Convolutional Neural Network (CNN) model with three convolutional blocks, (ii) a pre-trained convolutional model based on the MobileNetV2 framework (Sandler et al., 2018), and (iii) a pre-trained Vision Transformer (ViT) (Dosovitskiy et al., 2020). As we will show, the latter achieves almost perfect results and is the best neural model among our proposed methods, but at the cost that no current method can explain its predictions accurately. On the other hand, we show that our approach, based on simple colour features and a fine-tuned Decision Tree (DT), can provide competitive accuracy while exposing the information needed to produce adequate and global explanations.

1.2 Contributions

Our experiments show that all three selected neural models can converge to very high (and, in some cases, close-to-perfect) accuracy in a few training epochs.
This leads us to question the difficulty of the task at hand and the actual need for such powerful yet black-box methods. As the results suggest, the task is somewhat easy, and it is thus legitimate to tackle it with a simpler strategy. The expectation is that, despite its simplicity, the new model will reach competitive accuracy while leaving room to integrate explainability into our design. In summary, our contributions are the following:

- We provide high-level design guidelines to tackle ML problems with explainability in mind;
- We showcase an illustrative classification task, for which we provide an analysis of a selection of DL methods in terms of accuracy and explainability, utilising relevant models that offer a broad overview of the task;
- We show that the same classification task can be solved effectively and efficiently by a much simpler and more transparent model, a DT, with minimal feature-engineering effort;
- We conduct a user study to determine which explanations best suit the stakeholders’ needs;
- We release our code and self-collected dataset\(^1\) for reproducibility and possible extension of our experiments.

2 RELATED WORK

Our work relates to two main lines of research: (i) the advocacy for more focus on explainability in the Artificial Intelligence (AI) community and (ii) the optimisation of fruit ripeness grading. We proceed to briefly introduce previous works on these subjects.

AI explainability. A common problem associated with DL models is their inner opacity. Providing meaningful explanations for a DL model’s prediction is an arduous task. Much research has gone towards the extraction of explanations by using, for instance, information from gradients (Selvaraju et al., 2017), attention scores (Bahdanau et al., 2015a), surrogate models (Ribeiro et al., 2016), and latent prototypes (Chen et al., 2019). Some methods have been proposed with the promise of being model-agnostic, i.e., able to explain the prediction of any classifier. Prominent examples are the LIME and SHAP methods (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, the proposed explanations have been challenged (e.g., Adebayo et al., 2018; Serrano & Smith, 2019; Garreau & Mardaoui, 2021; Nauta et al., 2021; Khakzar et al., 2022) and proved unreliable in multiple scenarios. Nevertheless, SHAP is still considered state-of-the-art for explainability by many. Moreover, it allows the combination of multiple local explanations to produce a “global” (averaged) explanation of the model instead of the local explanation of a single prediction. Similarly, our proposed explanation strategy is global. For these reasons, we selected SHAP as our benchmark strategy to explain the DL models and to compare them against our proposed solution.

Fruit ripeness recognition. Grading the ripeness of fruit is a long-studied problem for which strategies based on statistics (e.g., Mendoza & Aguilera, 2006; Olarewaju et al., 2016), traditional ML (e.g., Ni et al., 2020; Septiarini et al., 2020), and DL (e.g., Saranya et al., 2022; Sa et al., 2016) have been proposed. The top-performing methods are those based on DL, which, aside from reaching astonishing accuracy, do away with the complex and error-prone task of feature engineering. However, the literature lacks extensive comparisons among the three noted strategies. For more information on the fruit ripeness grading problem and its solutions, a recent survey was authored by Rizzo et al. (2023).
On another note, much of the most recent research appears to focus on scraping out a few extra decimals of task accuracy (or other performance indicators), too often without accounting for explainability (Marcuzzo et al., 2022). On this topic, works such as the one by Rudin (2019) discuss the necessity of more carefully gauging the tasks being solved and, whenever possible (or necessary due to high stakes), using more transparent models rather than black boxes. Our thesis is similar: we advocate choosing the simplest and most transparent model that achieves satisfying performance, while also devising a strategy to faithfully explain its behaviour.

3 DESIGNING FOR EXPLAINABILITY

Our proposed guidelines aim to find the problem features that are most intuitive for the stakeholders and to process them as little as possible through the simplest ML method adequate for the task. “Simplicity”, in this case, relates to the number of parameters regulating the model (the fewer, the better) and its reliance on human-understandable processing of the features (the more, the better). In particular, we want to produce a pipeline from raw data to prediction where each step is as transparent as possible. The proposed design process follows these high-level steps:

(i) understand the task to be solved by the ML method, the available data, and the stakeholders of the final product;
(ii) for each stakeholder, discuss which attributes they consider relevant in solving the task and define which features can be considered part of an explanation;
(iii) find an ML model that is powerful enough to process the features but also offers the possibility to extract interesting evidence with reasonable effort. The evidence must suggest an interpretation that is faithful by design to how the model works and possibly aligns with human intuition for plausibility (Rizzo et al., 2022);
(iv) test model performance and the effectiveness of the generated explanations: the model should provide accuracy competitive with the state of the art, while also satisfying the stakeholders’ expectations with the produced explanations. We find that a user study is an effective way to obtain qualitative evidence of the efficacy of the proposed XUI.

Step (ii) is perhaps the most challenging point, especially when very little problem-specific knowledge is available to the stakeholders. In this scenario, a preliminary analysis of the performance of top black-box models can indicate how hard the task is. If the specific task exposes intuitive features that can be leveraged to solve it, a model that tends towards transparency is worth trying. Intuitiveness is critical to optimising the design and reaching a final explanation that is faithful to the model behaviour and plausible to the human stakeholder. On the other hand, we acknowledge that finding meaningful features, or even just an effective data representation, can be challenging for some tasks. In Natural Language Processing, for example, handcrafting general context-sensitive and human-understandable features is often very difficult or impractical, partly due to the inherent complexity of languages. That is why we advocate reasoning about an ML problem and trying a broader explainability-driven approach, especially when the task is simple. For some tasks, simple or explainable solutions may not exist yet. The following sections showcase how we applied these guidelines to our example task.

\(^1\) Will be made available after anonymous review.
3.1 Task definition, stakeholders, and data

From a practical perspective, this work deals with a multiclass image classification task. Our stakeholders are workers at the wholesale fruit market of the city of Treviso, Italy, who are interested in automating the ripeness grading of banana bunches. Currently, bunches are manually labelled by operators with an increasing ripeness value (1 to 4, least to most ripe; see Fig. 1). All the bananas within a crate are assumed to be in the same ripeness stage. The ML classifier resulting from this work would aid operators in labelling large numbers of incoming crates. Moreover, this is the first step in the digitalisation of the fruit-processing pipeline, from inspection and assessment of fruit quality to online sales. Given the impact of the assessment step on fruit pricing, our stakeholders stressed the importance of maintaining transparency in the grading process, to allow human supervision. To develop the ML solution, we collected an ad-hoc dataset comprising 927 images, with a reasonable balance between the four ripeness classes. The dataset was manually labelled by the operators who perform the quality assessment of incoming products. To understand human performance on this classification task, we also asked three operators to re-label a subset of images from the dataset. More technical details on the data are provided in Section 4.1, while human performance is reported in Section 5.

3.2 Feature selection

After consultation with the stakeholders, we determined that colour is the most reliable and intuitive factor in determining the ripeness of banana bunches. Images are encoded using the RGB colour space, a well-known colour model backed by solid theory grounded in human colour perception. Since colour is the most important feature of our dataset, we process the images so as to precisely extract valuable colour information and train our classifier to recognise the ripeness stage from that colour. Section 4.1 details how this information is extracted and used in the proposed solution.

3.3 On the choice of models

We select both state-of-the-art DL-based methods and simpler, more transparent classifiers for this task. Testing DL models gives us an idea of the best performance that can be achieved, as well as of the difficulty of the problem. As stated previously, our objective is to choose the model of lowest complexity that achieves adequate performance, so as to preserve as much transparency as possible. We selected a DT, a Support Vector Machine (SVM) with different kernels, and a multinomial Naive Bayes (NB) classifier as baseline models for comparison, eventually choosing the DT as the best of the three. We point out that the DT learns discriminative rules that partition the feature space into sub-spaces corresponding to each target class (i.e., the ripeness stage). By extracting colour information in the RGB space and limiting the number of extracted features, we can obtain a global explanation mapping each ripeness stage to specific areas of the colour space. We highlight that this explanation is faithful, in the sense that it describes the DT’s “reasoning” process, as well as plausible, meaning that it aligns with the human understanding of the problem. These characteristics make this strategy effective with respect to point (iii) of our guidelines.
3.4 TESTING FOR ACCURACY AND EXPLAINABILITY

We compare the performance of the baseline models (DT, SVM, NB) and select the DT as the best compromise between complexity and the intuitiveness of the explanation that can be derived from it, as discussed in the previous section. The NB classifier achieves lower performance than the DT. Conversely, the SVM with a high-degree polynomial kernel achieves slightly better results (less than 0.5% improvement in accuracy and F1-score). However, given the minimal difference in results, and considering that the decision boundaries of the SVM are harder to understand because of their complexity, the DT appears to be the better choice. The complete results of these tests are reported in the supplementary material. Additionally, we compare the DT with state-of-the-art DL models that would be the obvious off-the-shelf solutions for this task. Results are reported in Section 5, showing that the DT achieves competitive performance, well above human classification performance. Finally, we want to assess the efficacy of the generated explanations for our stakeholders. To do this, we conducted a user study investigating users’ preferences among the generated explanations. More details on the results are provided in Section 5.2, while the complete questionnaire is reported in the supplementary material.

4 METHODS AND EXPLANATIONS

4.1 DATA PROCESSING

Our dataset consists of 927 RGB pictures of crates filled with various bunches of bananas. Images were acquired at a native resolution of 4160 x 3120 pixels using a CZUR Shine Ultra scanner, with an effort to achieve consistent lighting. The dataset is split among classes in a reasonably balanced way; we detail the class distribution in the supplementary material. Each image was resized to 224 x 224 pixels to obtain a reasonable inference time. This size was also chosen because it is standard in many pre-trained models, allowing us to easily use modern transfer-learning approaches. The dataset was augmented with random transformations, including rotations, affine transforms, elastic morphology transforms, random-location crops, Gaussian blur, as well as the erasure of image patches and changes in perspective. The latter transformations account for the different angles and accidental occlusions that are likely when non-expert users take pictures with a smartphone (one potential end use of this classifier). We augmented roughly 50% of the dataset, and the new images were added to the original dataset before training. More details may be found in the supplementary material.

A visual inspection of the dataset reveals that pictures are noisy, in that parts of the crate are captured in the overall image (mostly the boundaries of the crate and, sometimes, its bottom). To circumvent this problem, we perform semantic segmentation of the images to filter out the background of the banana bunches. Having no manually segmented images, an unsupervised approach was the only feasible option. After testing several algorithms, we selected the SLIC algorithm (Achanta et al., 2012) for this task. In our experiments, all methods benefit from including segmentation as a pre-processing step; as such, we only report results on segmented images. Moreover, while the selected DL models can automatically extract discriminative features from raw RGB input images, we devised a minimal feature-engineering process to extract colour features and use them with the DT. Specifically, each image is represented by three features: the R, G and B channel values of its average colour, normalised in the \([0, 1]\) range. This way, all image structure is discarded, leaving only colour information. One notable aspect of RGB is that luminance is embedded within its three channels. This differs, for instance, from colour spaces such as YUV, where luminance is encoded in the physical linear-space brightness (Y) channel. As the DT is based on average colour values, the method benefits from normalising the luminance. This is achieved by converting all images to the YUV colour space, setting the Y channel to a common value, and then converting back to RGB.
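A minimal sketch of this feature-extraction step (luminance equalisation through YUV, then the mean RGB triple) is shown below, assuming OpenCV colour-conversion conventions; the function name, the default `y_level`, and the optional foreground `mask` (standing in for the SLIC segmentation) are illustrative choices, not the authors' exact code:

```python
import cv2
import numpy as np

def mean_rgb_features(image_bgr: np.ndarray, mask: np.ndarray = None,
                      y_level: int = 128) -> np.ndarray:
    """Return the 3-dim (R, G, B) mean-colour feature in [0, 1].

    Luminance is normalised by moving to YUV, fixing the Y channel to a
    common value, and converting back to RGB before averaging. If `mask`
    is given (e.g., from segmentation), only foreground pixels are used.
    """
    yuv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YUV)
    yuv[..., 0] = y_level                        # equalise luminance
    rgb = cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)
    px = rgb.reshape(-1, 3) if mask is None else rgb[mask.astype(bool)]
    return px.mean(axis=0) / 255.0
```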
4.2 Deep Learning Approach

In addressing the task of banana ripeness classification, we run and compare three neural approaches. The first architecture is a simple CNN with three convolutional blocks, each consisting of two two-dimensional convolutions and max pooling interleaved with ReLU activation functions. The convolutional layers extract features that are fed to a three-layer feed-forward ANN, which outputs the final prediction. Before being processed by the CNN, the data is normalised using the dataset mean and standard deviation. The second architecture we consider is the pre-trained MobileNetV2 network (Sandler et al., 2018). Still convolutional in nature, the strategy at the core of this method is based on depth-wise convolutions (Sifre & Mallat, 2014; Chollet, 2017) and inverted residual connections. Its designers aimed to build a powerful, pre-trainable model for low-tier devices. The third architecture we examine is the Vision Transformer (ViT) (Dosovitskiy et al., 2020). Transformers (Vaswani et al., 2017) are neural architectures based on multi-head attention (Bahdanau et al., 2015b), widely studied and employed by the NLP community (Gasparetto et al., 2022a;b). This architecture has recently been applied to CV tasks with various strategies (see Khan et al., 2022 for a survey). Briefly, ViT splits images into fixed-size patches and linearly embeds them. Positional embeddings are then added to retain position information before feeding the resulting sequence of vectors to a standard Transformer encoder. Classification is achieved by adding a learnable “classification token” to the sequence.

Deep Learning Explainability Strategy. As previously mentioned, we use SHAP (Lundberg & Lee, 2017) to explain the predictions of the DL models. When dealing with images, SHAP generates heat maps (which constitute the XUI) to deliver the explanation to the user. These are supposed to describe the importance of each pixel in the image toward the model’s prediction. Intuitively, warm colours indicate the regions of the image that contributed the most to the prediction, while colder colours indicate areas that contributed negatively to the prediction of the same class. Example explanations generated with SHAP are presented in Fig. 2. Previous literature found that SHAP, despite being widely used, produces explanations lacking faithfulness while looking plausible (Rizzo et al., 2022; Alvarez-Melis & Jaakkola, 2018). This is an alarming condition in which the explanations convey to the user “a convincing lie” about how the model behaves. The following sections show how our design addresses faithfulness and plausibility.
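Heat maps of this kind can be produced with the `shap` library; a minimal sketch for a PyTorch image classifier follows, where `model`, `background` (a small reference batch), and `test_images` are placeholders, and output shapes may vary slightly across `shap` versions:

```python
import shap
import torch

# `model` is any trained PyTorch classifier taking (N, C, H, W) tensors;
# `background` is a small reference batch used to approximate expectations.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)   # one array per class

# Render per-pixel attributions as heat maps (NHWC layout for plotting).
shap.image_plot(
    [v.transpose(0, 2, 3, 1) for v in shap_values],
    test_images.permute(0, 2, 3, 1).cpu().numpy(),
)
```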
Figure 3: Explanation generated from the constraints imposed by the DT on the RGB colour gamut. The four grades identify different areas within the gamut.

4.3 Decision Tree

In contrast to the inner complexity of the examined DL methods, we propose tackling the same task with a simple, more transparent model based on a DT classifier. In particular, we adopt the implementation offered by scikit-learn, which is based on the CART algorithm (Breiman, 1984).

Explainability Strategy. One may argue that a DT is an intrinsically explainable model. We argue that there is no such thing as intrinsic explainability: a transparent model still needs to provide an explanation that is understandable to its users and answers their “why” questions. Different end users are likely to have different requirements for explainability. For example, ML experts may be satisfied with understanding the range of feature values mapped to each target class (in our case, the RGB values); non-expert users may need these rules to be further processed and represented more clearly. Serving explainability is intuitively much easier with certain models, such as those regulated by a few parameters, though this has yet to be formalised in the literature. Admittedly, a DT has a very intuitive and faithful interpretation: at every non-leaf node, the DT learns a threshold value for one of its features, thus producing two children (above and below the threshold). In our case, each instance is classified by following a path to a leaf labelled with a specific ripeness value. Conveniently, the set of rules given by the traversed path defines an area within the RGB colour space that becomes part of our explanation. Binding the explanation to the intuitive process of discriminating banana crates based on colour (as our stakeholders do) sets the premises for it to be plausible. Albeit easy to follow for relatively shallow trees, decision paths can grow exponentially for features with complex interactions, and such numerical feature splits within the DT can still appear opaque to the average user. Thus, we take our explanation further by devising an XUI that aims to be human-understandable and is tested accordingly. More specifically, we use the rules extracted from the decision paths as constraints on the RGB gamut to identify portions of that space representing the four ripeness classes. Hence, it is easy to represent each unknown input data point as its average colour in the 3D RGB colour space and determine which region it belongs to. This plot is our proposed explanation of the DT’s behaviour. Fig. 3 is an example visualisation of the whole process (more examples are reported in the supplementary material). It is worth stressing that the area of the colour space extracted from the decision rules learned by the DT is, by definition, a global explanation. As such, our strategy allows us to unequivocally understand which colours are associated with each label class. One of the benefits of such an interpretable explanation is the ability to validate the classifier’s behaviour: unexpected colours would show up in the proposed XUI, pointing out a negative bias in the model.
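The decision-path-to-colour-region mapping can be recovered directly from the fitted scikit-learn tree. The sketch below collects, for each leaf, the axis-aligned RGB box implied by the thresholds along its path; the function name is illustrative, and the initial bounds assume the [0, 1]-normalised features described above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaf_rgb_boxes(tree: DecisionTreeClassifier):
    """Map each leaf to (class, lower/upper bounds on the R, G, B features)."""
    t = tree.tree_
    boxes = []

    def recurse(node, lo, hi):
        if t.children_left[node] == -1:                 # leaf node
            label = int(np.argmax(t.value[node]))
            boxes.append((label, lo.copy(), hi.copy()))
            return
        f, thr = t.feature[node], t.threshold[node]
        l_hi = hi.copy(); l_hi[f] = min(l_hi[f], thr)   # left: feature <= thr
        recurse(t.children_left[node], lo, l_hi)
        r_lo = lo.copy(); r_lo[f] = max(r_lo[f], thr)   # right: feature > thr
        recurse(t.children_right[node], r_lo, hi)

    recurse(0, np.zeros(3), np.ones(3))
    return boxes
```

Each (label, lo, hi) triple is a box in the normalised RGB cube; the union of the boxes per label gives the gamut regions rendered in Fig. 3.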
5 EXPERIMENTS

In this section, we compare the performance achieved by our employed methods. First, we analyse the classification metrics achieved by the three DL-based models and the DT. Then, we study the explanations generated according to the strategies proposed in Section 4.2 and Section 4.3 and compare them through a user study involving the stakeholders of the banana ripeness classification task in a real fruit market.

| Method | Accuracy | Precision | Recall | F1 |
|------------------|----------------|----------------|----------------|-----------------|
| Decision Tree | 0.9716 (± .0104) | 0.9723 (± .0106) | 0.9678 (± .0119) | 0.9697 (± .0110) |
| CNN | 0.9349 (± .0115) | 0.9298 (± .0131) | 0.9308 (± .0123) | 0.9377 (± .0123) |
| MobileNet V2 | 0.9743 (± .0046) | 0.9726 (± .0046) | 0.9717 (± .0054) | 0.9718 (± .0049) |
| ViT | 0.9967 (± .0015) | 0.9960 (± .0020) | 0.9966 (± .0017) | 0.9962 (± .0018) |
| Human Performance| 0.7500 (± .0589) | 0.7588 (± .0453) | 0.7500 (± .0589) | 0.7519 (± .0524) |

Table 1: Macro-averaged performance metrics for the models, averaged over ten random seeds (standard deviation in brackets).

5.1 PERFORMANCE

To measure the ability of our selected models to produce correct predictions, we resort to commonly used classification metrics: accuracy, macro-averaged precision, macro-averaged recall, and macro-averaged F1-score. All methods are tested using 5-fold cross-validation, repeated ten times with different random seeds to strengthen the results. Table 1 shows the results achieved by the deep-learning methods and the DT. We additionally report human performance, i.e., the average of the scores obtained by three stakeholders on the classification of a balanced set of 300 images randomly sampled from the original non-augmented dataset (~20%). It is easy to see that all methods achieve excellent results, with all metrics above 90% and improving on the human baseline. It is worth remembering that these results are achieved on datasets extended with augmented images, which makes the models more robust at the cost of small decreases in performance. As further detailed in the supplementary material, error analysis reveals that mistakes always occur because the classifiers select an adjacent class (e.g., class 2 instead of 1). The ViT model achieves a near-perfect score on all metrics. The DT also obtains outstanding results, though this required comparatively more effort (including the standardisation of luminance and an extensive grid search). Nevertheless, this process allows the DT to reach results comparable to those of MobileNetV2.

5.2 EXPLAINABILITY

We compare the SHAP explanations for the DL models with the handcrafted explanations based on RGB colour designed for the DT. Fig. 2 and 3 compare the two types of explanations for the same input. It is easy to see that the masks produced by SHAP do not highlight meaningful features of the image; indeed, the highlighted regions appear random. Moreover, in our case, SHAP’s visualisation for the CNN always presented the same result for all classes, seemingly valuing features for grade 4 highly (even when the CNN correctly classified other ripeness stages). The situation does not change when we visually examine the explanations generated by the methods over the whole dataset. This does not necessarily mean that the explanations generated by SHAP are not faithful to the model’s inner workings; rather, our intuitive interpretation of the highlighted regions is misaligned with how the model uses those features internally.
As such, we can only conclude that, despite their plausibility, these visualisations are inadequate as meaningful explanations. Conversely, our explanation for the DT is faithful to the model’s inner workings by design. This strategy provides the user with a much more informative explanation that is intuitively understandable, plausible, and faithful to how the model works. An ad-hoc user study confirms these results.

USER STUDY

We designed a user study to investigate users’ preferences among the generated explanations for the model predictions. The users involved in the study are stakeholders in the grading of banana ripeness: 20 people with different backgrounds and varying expertise with artificial intelligence tools. We submitted an online questionnaire to each user; the complete questionnaire is reported in the supplementary material. The questionnaire introduces the task and asks the users to compare two types of explanations for the same input and prediction: (i) the mask generated by SHAP and (ii) the representation of the input colour in the RGB gamut. Explanation (i) pertains to the ViT model (the best-performing one), while explanation (ii) is generated from the DT. The object of the comparison is how well the proposed explanation allows the user to answer why the model made that prediction. When asked about the importance of explaining the model’s behaviour, all participants believed that an associated explanation is somewhat necessary, with most deeming it essential. As for the preferred explanation method, ten out of twenty respondents considered the RGB gamut area produced by the DT the most effective, eight voted for the SHAP heatmap explanation, and three declared that no explanation was helpful to them.\(^2\) This result is certainly interesting: though SHAP’s visualisations do not provide an unambiguous explanation, their visual nature was still enough to make half of the participants deem them trustworthy in conveying why the prediction was made. Finally, 80% of respondents declared that the chosen explanation would improve their trust in the model, and 70% are ready to trade about 5% of the classifier accuracy for a more transparent and human-explainable decision process. Considering that the accuracy loss between the DT and the most accurate model is only around 2.5% for our classification task, with both well above human performance, there appears to be little reason to prefer the latter over the more explainable one. We report the complete results of our study in the supplementary material.

6 FUTURE WORK

Using simple classifiers on a few manually extracted features can be much more problematic for more complex tasks, as it could severely limit model performance. Indeed, we do not claim that more transparent models should always be used: many cognitive tasks would be nearly impossible without the progress obtained through DL. For this specific task, we selected a simple strategy, based on the average colour of the whole image, to provide an intuitive explanation to non-ML-expert users. This can be refined iteratively to incorporate more complex features while accounting for explainability. We plan to explore strategies for serving explanations based on larger numbers of features, for example by considering the pixel colour distribution. Moreover, in line with the explainability-by-design principle, we plan to research the use of regularisation strategies to improve the explainability of complex DL models.
This topic has already been explored (Wu et al., 2018), mostly tackling the problem of robustness, which has indeed been linked to the issue of explainability (Ross & Doshi-Velez, 2018). It would be interesting to explore whether and how adding constraints to the features extracted by NNs could help produce explanations that are more understandable to end users.

7 CONCLUSIONS

This paper discusses the explainability of ML models by providing high-level guidelines to tackle ML problems. As an example, we compare three DL models to a DT for classifying bananas into four ripeness stages. While the DT leads to slightly lower accuracy scores, it produces much more interpretable results. This task showcases how an intuitive explanation strategy can be devised by model design rather than with a post-hoc approach. We argue that working with a more transparent model and stakeholder-understandable features, where possible, can allow for satisfactory explanations with minimal loss in accuracy. To validate our claim, we conducted a pilot user study with 20 users, comparing the explanations produced by SHAP, a popular model-agnostic explainability method for DL models, against those produced by combining colour features and DT rule interpretations. The study results indicate users’ tendency to accept minor accuracy losses, favouring a more understandable model. However, they also show that non-expert users prefer more straightforward explanations, regardless of whether these are well-founded.

---
\(^2\) One participant selected both the RGB explanation and the “neither” option.

REFERENCES

Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 34(11):2274–2282, 2012. doi: 10.1109/TPAMI.2012.120.

Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In *Proceedings of the 32nd International Conference on Neural Information Processing Systems*, NIPS’18, pp. 9525–9536, Red Hook, NY, USA, December 2018. Curran Associates Inc.

David Alvarez-Melis and Tommi S. Jaakkola. On the robustness of interpretability methods, 2018.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015a. URL http://arxiv.org/abs/1409.0473

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015b. URL http://arxiv.org/abs/1409.0473

Leo Breiman. *Classification and Regression Trees*. Routledge, New York, 1984. doi: https://doi.org/10.1201/9781315139470.

Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, and Cynthia Rudin. *This Looks like That: Deep Learning for Interpretable Image Recognition*. Curran Associates Inc., Red Hook, NY, USA, 2019.

Francois Chollet. Xception: Deep learning with depthwise separable convolutions. In *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*. IEEE, July 2017.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. *CoRR*, abs/2010.11929, 2020. URL https://arxiv.org/abs/2010.11929 EU. Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS. 2021. URL https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 Damien Garreau and Dina Mardaoui. What does lime really see in images? In *ICML 2021 - 38th International Conference on Machine Learning*, Virtual Conference, United States, July 2021. URL https://hal.science/hal-03233014 Andrea Gasparetto, Matteo Marcuzzo, Alessandro Zangari, and Andrea Albarelli. A survey on text classification algorithms: From text to predictions. *Information*, 13(2), 2022a. ISSN 2078–2489. doi: 10.3390/info13020083. URL https://www.mdpi.com/2078-2489/13/2/783 Andrea Gasparetto, Alessandro Zangari, Matteo Marcuzzo, and Andrea Albarelli. A survey on text classification: Practical perspectives on the italian language. *PLOS ONE*, 17(7):1–6, 07 2022b. doi: 10.1371/journal.pone.0270904. URL https://doi.org/10.1371/journal.pone.0270904 Ashkan Khakzar, Pedram Khorsandi, Rozhin Nobahari, and Nassir Navab. Do explanations explain? model knows best. In *2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 10234–10243, 2022. doi: 10.1109/CVPR52688.2022.01000.
AJBkfwXh3u
Additionally, the computational intensity of introducing temporal masks, which could be exacerbated by the incorporation of contrastive learning (VGAE is known to be computationally demanding), is not addressed.
Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks

Kesen Zhao
City University of Hong Kong
Hong Kong, China
kesenzhao2-c@my.cityu.edu.hk

Liang Zhang *
Shenzhen Research Institute of Big Data
Guangdong, China
zhangliang@sribd.cn

Abstract

Dynamic Graph Neural Networks (DyGNNs) have gained significant popularity in research on dynamic graphs, but they are limited by low transparency, such that human-understandable insights can hardly be drawn from their predictions. Although a number of existing studies have been devoted to investigating the interpretability of graph neural networks (GNNs), achieving interpretability for DyGNNs is particularly challenging due to the complex spatial-temporal correlations in dynamic graphs. To this end, we propose an innovative causality-inspired generative model based on a structural causal model (SCM), which explores the underlying philosophy of DyGNN predictions by identifying the trivial, static, and dynamic causal relationships. To reach this goal, two critical tasks need to be accomplished: (1) disentangling the complex causal relationships, and (2) fitting the spatial-temporal explanations of DyGNNs into the SCM architecture. To tackle these challenges, the proposed method incorporates a contrastive learning module to disentangle trivial and causal relationships, and a dynamic correlating module to disentangle dynamic and static causal relationships, respectively. A dynamic VGAE-based framework is further developed, which generates causal-and-dynamic masks for spatial interpretability and recognizes dynamic relationships along the time horizon through causal intervention for temporal interpretability. Comprehensive experiments have been conducted on both synthetic and real-world datasets, where our approach yields substantial improvements, thereby demonstrating significant superiority.

1 Introduction

Dynamic graphs play a crucial role across a wide spectrum of real-world applications (Seo et al., 2018; You et al., 2018), including financial networks (Nascimento et al., 2021; Zhang et al., 2021), social networks (Berger-Wolf & Saia, 2006; Greene et al., 2010), and traffic networks (Peng et al., 2021; 2020). Unlike the widely studied static graphs, dynamic graphs can represent the spatial-temporal characteristics of real-world data and are gaining great popularity in practical scenarios despite their high complexity (Pareja et al., 2020). Addressing the challenges posed by this complexity has led to the development of Dynamic Graph Neural Networks (DyGNNs) (Wang et al., 2023; Manessi et al., 2020; Beck et al., 2017; Zaki et al., 2016), which achieve significant advances in predictive tasks by accommodating the intricate interplay of spatial-temporal patterns.

Despite the aforementioned advantages, DyGNNs are usually limited by low transparency, such that human-understandable insights can hardly be drawn from their predictions. Existing works on the explanation of GNNs, such as GNNExplainer (Ying et al., 2019), XGNN (Yuan et al., 2020), and OrphicX (Lin et al., 2022), primarily focus on static networks. Therefore, these methods can hardly be directly adopted for the interpretability of dynamic networks, due to the following two challenges induced by the complex spatial-temporal correlations in dynamic graphs. 1) Spatial interpretability.
The investigation of spatial interpretability critically relies on extracting subgraphs that can represent the characteristics of the complete graph in the spatial dimension and elucidate the outcomes of subsequent tasks. In essence, these subgraphs serve as substitutes for the original graphs, enabling analogous results in downstream tasks. However, the subgraph partition depends on the historical evolution over time as well as the spatial topology of the graph, which cannot be appropriately handled by conventional methods designed for static graphs. 2) Temporal interpretability. Temporal interpretability relies on the importance of representative subgraphs over the time slots. In essence, it is essential to elucidate the significance of each time step with respect to its impact on the outcomes of subsequent tasks. However, the temporal dynamics of a node are also impacted by its topological neighbors, apart from its historical states. This makes it infeasible to directly adopt techniques from time-series research.

While causal inference provides an effective framework for investigating interpretability in graph structures, a critical step in tackling these challenges is to disentangle the complicated types of relationships and accommodate them individually. To reach this goal, we propose an innovative causality-inspired spatial-temporal generative model via constructing a structural causal model (SCM) (Pearl, 2009) for dynamic graphs. In this line of research, previous works for static graphs categorize the relationships within the graph into a trivial (or spurious) relationship and a causal relationship (Lin et al., 2021; 2022), where the trivial relationship captures the dispensable graph information while the causal relationship exploits the key information for tasks. Inspired by these works, we further divide the causal relationships in the SCM into a static relationship and a dynamic relationship to capture the complex spatial-temporal correlations. However, two critical challenges remain in implementing such an SCM. The first lies in how to disentangle the complex causal relationships, as no explicit information is available for identifying trivial, dynamic, and static relationships. The second is how to construct the SCM to fit the task of discovering spatial-temporal interpretability, due to the lack of existing models for dynamic graphs.

To address these problems, we present a novel Dynamic GNN Explainer. Specifically, to disentangle the trivial relationship from the causal relationship, we propose a contrastive learning module that ensures semantic similarity between the causal relationship and the original graph while enlarging the semantic distance between the causal relationship and the trivial relationship. To disentangle the dynamic relationship from the static relationship, we leverage the pre-trained target DyGNN model to guarantee the essential temporal correlation between neighboring subgraphs for the dynamic relationships and assign the remaining independent causal information to the static relationships. We instantiate our DyGNN explainer with a dynamic variational graph auto-encoder (VGAE) framework, which extracts the causal and dynamic relationships and maps them into a causal adjacency mask and a dynamic adjacency mask to accomplish the spatial explanation of the ground-truth label.
We further disentangle dynamic relationships along the time horizon by treating each subgraph as a causal intervention and leveraging the pre-trained target DyGNN model to measure its causal effect as the temporal explanation. To showcase the effectiveness of our approach, we generate synthetic dynamic datasets tailored for dynamic graph interpretability tasks, which fill a blank among dataset benchmarks and should facilitate further research in this domain. Experiments on both synthetic and real-world datasets demonstrate the superior performance of our method in both explanation tasks and real predictions. The code and the dataset benchmarks are available at https://github.com/kesenzhao/DyGNNExplainer.

2 Method

2.1 Problem Statement

We denote a pre-trained target DyGNN model to be explained as $f = f_d \circ f_a$, where $f_a : G_{1:T} \rightarrow R$ is the aggregation function of the DyGNN that captures temporal structures and feature patterns, $G_{1:T}$ is the dynamic graph set, $T$ is the total number of time steps, and $R$ is the aggregated high-dimensional graph representation. $f_d : R \rightarrow Y$ is the downstream task function, which maps the graph representation to the label space, and $Y$ is the final label prediction. Specifically, the input dynamic graph at the $t^{th}$ time step, $G_t = (X_t, A_t), t \in [1, T]$, includes the node attribute matrix $X_t \in \mathbb{R}^{|V| \times D}$ and the corresponding adjacency matrix $A_t \in \mathbb{R}^{|V| \times |V|}$, where $V$ is the set of nodes and $D$ is the dimension of the node attributes.

Explanation methods for DyGNNs aim to meet two critical criteria: fidelity and interpretability. Fidelity requires that a faithful explanation, represented by dynamic subgraphs, should align with how the target DyGNN behaves in the vicinity of the given dynamic graphs of interest (Ribeiro et al., 2016). In other words, when we feed the explanatory dynamic subgraphs to the target DyGNN, the outcomes should closely resemble those obtained from the original dynamic graphs. Interpretability (Pope et al., 2019) demands that generated explanations should highlight the most important parts of the input while disregarding the irrelevant components. Explanations for DyGNNs require both spatial interpretability and temporal interpretability: spatial interpretability highlights the most important parts of the graphs, while temporal interpretability identifies the most important time slices. Moreover, an explainer should be versatile enough to explain any black-box model, adhering to the principle of being ‘model-agnostic’. Hence, our ultimate objective is to define a generative model, denoted as $\mathcal{F}$, to act as an explainer. This model should be capable of picking out which aspects of the input contribute to the DyGNN prediction while meeting the fidelity and interpretability criteria outlined above. In line with prior research (Lin et al., 2021; Yuan et al., 2020; Lin et al., 2022), our focus lies in providing spatial-temporal explanations (dynamic subgraph sets) for dynamic graph structures. We operate under the black-box setting, wherein we have no information about the ground-truth labels of the input graphs and do not require access to the inner workings of the target DyGNN’s output generation process.

2.2 Framework Overview

In this study, we propose a novel causality-inspired spatial-temporal DyGNNExplainer, shown in Figure 1.
#### 2.2 Framework Overview

In this study, we propose a novel causality-inspired spatial-temporal DyGNNExplainer, shown in Figure 1. We first construct a Structural Causal Model (SCM) for dynamic graphs with trivial, dynamic, and static relationships, enabling a comprehensive understanding of dynamic graphs. Then we generate causal and dynamic soft masks, which enable backdoor adjustment, to intervene on the targeted causal and dynamic factors. After that, we employ a contrastive loss to separate trivial and causal relationships. To disentangle static and dynamic relationships, we employ a dynamic loss, which captures the temporal evolution within graphs and maintains independence from static information. Finally, we improve the generated static and dynamic explanations using a prediction loss and a sparsity loss, which enhances both prediction accuracy and spatial interpretability.

#### 2.3 A Causal View on DyGNNs

We first take a causal look at DyGNN modeling and construct a Structural Causal Model (SCM) in Figure 1. It presents the causalities among seven variables: dynamic graph data $G$, trivial relationship $T$, causal relationship $C$, dynamic relationship $D$, static relationship $S$, node representation $R$, and prediction $Y$, where links between variables represent cause-effect relationships. Here are some key explanations regarding the SCM:

- $T \leftarrow G \rightarrow C$: $C$ represents the genuine causal relationship in dynamic graph data, while $T$ signifies trivial relationships, often stemming from data biases or spurious patterns.
- $T \rightarrow R \leftarrow C$: $R$ is a high-dimensional representation of the dynamic graph node data $G$. To generate $R$, the conventional strategy leverages both the trivial relationship $T$ and the causal relationship $C$ as inputs to extract discriminative information.
- $D \rightarrow R \leftarrow S$: In the dynamic graph $G$, the causal relationships $C$ consist of the dynamic relationship $D$ and the static relationship $S$.
- $R \rightarrow Y$: The ultimate aim of dynamic graph representation learning is to predict graph properties, such as node or graph labels.

From this SCM, we identify a backdoor path between \( C \) and \( Y \), i.e., \( C \leftarrow G \rightarrow T \rightarrow R \rightarrow Y \). In this path, the trivial relationship \( T \) acts as a confounder between \( G \) and \( Y \). Even if there is no direct link between \( C \) and \( Y \), the backdoor path can cause a misleading correlation between \( C \) and \( Y \), leading to incorrect predictions. Thus, it is crucial to block the backdoor path to enable dynamic GNNs to effectively utilize causal relationships. Similarly, we have another backdoor path between \( D \) and \( Y \), i.e., \( D \leftarrow C \rightarrow S \rightarrow R \rightarrow Y \), and the static relationship \( S \) acts as a confounder between \( D \) and \( Y \).

#### 2.4 Backdoor Adjustment

Our research has emphasized the significance of safeguarding DyGNNs against confounding factors and distinguishing between dynamic and static relationships. This distinction is crucial for effectively utilizing causal relationships within dynamic graphs. Instead of modeling the confounded relationship denoted as \( P(Y|C) \) in Figure 1, our focus shifts towards graph representation learning that eliminates the backdoor path. Fortunately, we can achieve this by applying do-calculus principles to the variable \( T \). By doing so, we can estimate the probability distribution \( P(Y|do(C)) \) without interference from the confounder \( T \), utilizing standard backdoor adjustment.
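Concretely, the textbook backdoor adjustment over a single confounder $T$ (Pearl, 2009) takes the following form; the paper's exact derivation, which handles both confounders jointly, is deferred to its Appendix A.4, so this is only the standard template:

$$P(Y|do(C)) = \sum_{t} P(Y \mid C, T = t)\, P(T = t).$$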
Similarly, we apply do-calculus to the variable \( D \) and estimate \( P(Y|do(D)) \) to eliminate the backdoor path caused by the confounder \( S \). Note that we cannot directly employ the standard backdoor adjustment method due to the confounder \( T \). To overcome this challenge, we merge the estimation of \( P(Y|do(C)) \) with that of \( P(Y|do(D)) \), resulting in the following equations:

$$P(Y|do(D)) = \sum_{S} P(Y|do(C))\,P(S) = \sum_{S} P(S) \sum_{T} P(Y|G)\,P(T).$$

The first and second equalities are based on the backdoor adjustment for the confounders \( S \) and \( T \), respectively. The detailed derivation is shown in Appendix A.4. However, there remain challenges in implementing the backdoor adjustment: no explicit information is available for the identification of trivial, dynamic, and static relationships. To tackle these challenges, we propose an effective method in the next subsection.

#### 2.5 Disentangling Complex Causal Relationships

Given a dynamic graph set \( G_{1:T} = (X_{1:T}, A_{1:T}) \), we formulate the causal soft mask for the causal relationship at the \( t^{th} \) time step as \( M^C_t \in \mathbb{R}^{V \times V} \). Each element of the mask represents an attention score, typically falling within the range \([0, 1]\). For an arbitrary mask \( M \), we define \( \bar{M} = 1 - M \) as its complementary mask, where 1 is the all-one matrix. Consequently, we partition the entire dynamic graph set \( G_{1:T} \) into two distinct sets: the causal set \( G^C_{1:T} = (X_{1:T}, A_{1:T} \oplus M^C_{1:T}) \) and the trivial set \( G^T_{1:T} = (X_{1:T}, A_{1:T} \oplus \bar{M}^C_{1:T}) \), where \( \oplus \) denotes the element-wise product at each time step. Similarly, we formulate the dynamic soft masks as \( M^D_t \in \mathbb{R}^{V \times V} \) to extract the dynamic relationships and use the complementary mask \( \bar{M}^D_{1:T} \) to extract the static relationships. Then, we have the dynamic causal set \( G^D_{1:T} = (X_{1:T}, A_{1:T} \oplus M^C_{1:T} \oplus M^D_{1:T}) \) and the static causal set \( G^S_{1:T} = (X_{1:T}, A_{1:T} \oplus M^C_{1:T} \oplus \bar{M}^D_{1:T}) \). Because the ground-truth trivial set, dynamic causal set, and static causal set are unavailable in real-world applications, we aim to capture the trivial, dynamic, and static relationships from the full graph by learning the masks.

**Estimating soft mask:** Inspired by the VGAE framework (Kipf & Welling, 2016), we propose a dynamic VGAE-based encoder-decoder to estimate the soft masks of explainable subgraphs. We first consider the estimation of the causal soft mask matrix \( M^C_{1:T} \). At the \( t \)-th time step, the causal soft mask matrix can be calculated as

$$M^C_t = f_v(X_{1:t}, A_{1:t}; \Theta_C) = p(M^C_t | H_t)q(H_t | G_{1:t}), \quad (2)$$

where \( f_v \) is the encoder-decoder architecture with parameters \( \Theta_C \), \( q(\cdot) \) is the encoder module, and \( p(\cdot) \) is the decoder module. The encoder utilizes posterior probabilities to encode the node embeddings into low-dimensional latent vector representations, which can be formulated as

$$q(H_t | G_{1:t}) = \Pi_{i=1}^N q(h_{t,i} | G_{1:t}), \quad q(h_{t,i} | G_{1:t}) = N(h_{t,i} | \mu_{t,i}, \text{diag}(\sigma^2_{t,i})), \quad (3)$$

where \( H_t \) is the latent matrix, \( \mu_t \) and \( \sigma_t \) are the means and variances of the node latent embeddings learned by \( GCN_\mu(G_t) \) and \( GCN_\sigma(G_t) \) with different parameters, and \( h_{t,i}, \mu_{t,i} \) and \( \sigma_{t,i} \) are the \( i^{th} \) columns of \( H_t, \mu_t \) and \( \sigma_t \), respectively.
We use the re-parameterization technique to avoid the problem that the sampling operation cannot be back-propagated through and updated by gradient descent. The decoder then utilizes the latent representations to generate the dynamic explainable subgraph as follows:

$$p(M^C_t | H_t) = \prod_{i=1}^{N} \prod_{j=1}^{N} p(M^C_{t,i,j} | h_{t,i}, h_{t,j}), \quad p(M^C_{t,i,j} = 1 | h_{t,i}, h_{t,j}) = g(h_{t,i}, h_{t,j}), \quad (4)$$

where $M^C_{t,i,j}$ is the entry in the $i$-th row and $j$-th column of $M^C_t$, representing the probability of existence of the edge $(v^i_t, v^j_t)$ at time $t$ in the causal graph set, and $g(\cdot, \cdot)$ calculates this probability. According to the above encoder-decoder framework, the causal soft mask $M^C_{1:T}$ can be estimated based on the input dynamic graph set $G_{1:T}$. The soft causal mask assists in the formulation of the causal graph set $G^C_{1:T}$. According to the causal graph in Fig. 1, the static relationship $S$ can also be treated as the confounder between $D$ and $Y$, just like the trivial relationship $T$ with respect to $C$ and $Y$. Thus, we take the same VGAE-based encoder-decoder framework to generate the dynamic soft mask matrix, but with different parameters $\Theta_D$, yielding the following estimation process:

$$M^D_t = f_v(X_{1:t}, A_{1:t} \oplus M^C_{1:t}; \Theta_D). \quad (5)$$

Now we have the adjacency matrices of the causal set $A^C_{1:T} = A_{1:T} \oplus M^C_{1:T}$, the dynamic causal set $A^D_{1:T} = A_{1:T} \oplus M^C_{1:T} \oplus M^D_{1:T}$, and the static causal set $A^S_{1:T} = A_{1:T} \oplus M^C_{1:T} \oplus \bar{M}^D_{1:T}$. We need to disentangle the trivial, dynamic, and static relationships. We first disentangle the trivial relationship and the causal relationship via a contrastive learning method. Then, we propose a novel dynamic correlating method to disentangle the static relationship and the dynamic relationship.
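Before turning to the disentangling losses, here is a minimal single-snapshot sketch of the VGAE-based mask estimator described above. The paper conditions the encoder on the whole history $G_{1:t}$ and trains two separately parameterized instances ($\Theta_C$, $\Theta_D$); this sketch omits both, and all class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Minimal dense GCN layer with self-loops and mean aggregation."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out, bias=False)

    def forward(self, A, H):
        A_hat = A + torch.eye(A.size(0), device=A.device)  # add self-loops
        deg = A_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(A_hat @ H / deg))

class MaskVGAE(nn.Module):
    """VGAE-style encoder-decoder that outputs a soft edge mask M_t."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.gcn_mu = GCNLayer(d_in, d_hid)      # GCN_mu
        self.gcn_logsig = GCNLayer(d_in, d_hid)  # GCN_sigma (log scale)

    def forward(self, A, X):
        mu, log_sig = self.gcn_mu(A, X), self.gcn_logsig(A, X)
        H = mu + torch.randn_like(mu) * log_sig.exp()  # reparameterization trick
        return torch.sigmoid(H @ H.t())                # g(h_i, h_j): inner-product decoder

# composing the sets, e.g. A_C_t = A_t * M_C_t, then A_D_t = A_C_t * M_D_t
```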
**Disentangling trivial and causal:** According to the criteria in Section 2.1, the explanation should meet fidelity. Since the causal subgraph set is the target and the trivial graph set can be treated as noise, the outcome of the target model on the causal subgraph set should resemble that on the original graph set, while the trivial subgraph set should be treated as a negative sample. To disentangle the information in the trivial subgraph set and the causal subgraph set, we need to extract embeddings from them via some dynamic method. Fortunately, our target is to provide fidelity and interpretability for the pre-trained DyGNN. Thus, we utilize the pre-trained aggregation function to obtain such embeddings, which are then used to assist in disentangling trivial and causal information. At each time step $t$, the aggregation function $f_a(\cdot)$ generates embeddings by extracting the essential information up to time $t$ for the original graph set $G_{1:t}$, the causal subgraph set $G^C_{1:t}$, and the trivial subgraph set $G^T_{1:t}$. More formally, we have

$$e_t = f_a(G_{1:t}), \quad e^C_t = f_a(G^C_{1:t}), \quad e^T_t = f_a(G^T_{1:t}). \quad (6)$$

The above outputs can serve as the node (graph) embeddings for node (graph) classification tasks. We utilize contrastive learning to ensure the semantic similarity between the causal embedding $e^C_t$ and the original embedding $e_t$ while enlarging the semantic distance between the causal embedding $e^C_t$ and the trivial embedding $e^T_t$. Then, we have the following contrastive loss:

$$L_c = -\frac{1}{T} \sum_{t=1}^{T} \log \frac{\exp \left( s(e_t, e^C_t)/\tau \right)}{\exp \left( s(e_t, e^C_t)/\tau \right) + \alpha_1 \exp \left( s(e^T_t, e^C_t)/\tau \right) + \alpha_2 \sum_{k \neq t} \exp \left( s(e^T_t, e^C_k)/\tau \right)}, \quad (7)$$

where $\tau$ is the temperature coefficient and $s(\cdot, \cdot)$ measures similarity; we utilize the dot product here.

**Disentangling static and dynamic:** According to the structural causal model in Fig. 1, the causal relationship can be further divided into the static relationship and the dynamic relationship. To extract the dynamic relationship and the static relationship from the dynamic causal set $G^D_{1:T}$ and the static causal set $G^S_{1:T}$, we utilize GCNs with learnable parameters $\Psi_D$ and $\Psi_S$:

$$H^D_t = GCN(A^D_t, X_t; \Psi_D), \quad H^S_t = GCN(A^S_t, X_t; \Psi_S). \quad (8)$$

Dynamic relationships evolve over time steps, but static relationships are independent across time steps. Specifically, the dynamic relationships at time step $t$ can be inferred from the historical dynamic causal set $G^D_{1:(t-1)}$, while the static information at time step $t$ is independent of the historical static causal graph set $G^S_{1:(t-1)}$. Formally, we have

$$H^D_{1:(t-1)} \rightarrow H^D_t, \quad H^S_{1:(t-1)} \perp H^S_t. \quad (9)$$

Note that using the historical dynamic relationship $H_{1:(t-1)}^D$ to predict the dynamic relationship $H_t^D$ directly is trivial, since the GCN with parameters $\Psi_D$ can map the graph set into the embedding space without any useful information. Fortunately, we can use the pre-trained aggregation function $f_a(\cdot)$ again, which extracts the dynamic relationship from the original graph set. According to the fidelity criterion, the generated dynamic graph set should also guarantee that the pre-trained aggregation function can extract the dynamic relationship from it. Thus, we have the following dynamic loss:

$$L_d = \frac{1}{T-1} \sum_{t=2}^{T} d(f_a(G_{1:(t-1)}^D), H_t^D),$$

where $d(\cdot,\cdot)$ measures the distribution distance. The dynamic loss guarantees that the dynamic relationships extracted from the dynamic causal graph set are highly correlated. The remaining causal information is independent, which identifies the static causal graph set. To make sure that the dynamic and static relationships can be disentangled as separately as possible, we impose sparsity constraints on the dynamic causal graph set, as shown later.
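The two disentangling losses can be sketched as below. The contrastive loss follows Eq. (7) with dot-product similarity as stated in the text; the MSE in `dynamic_loss` is an assumed stand-in for the generic distribution distance $d(\cdot,\cdot)$, and the tensor shapes are illustrative only.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(e, e_c, e_t, tau=0.5, a1=1.0, a2=1.0):
    """L_c (Eq. 7): pull causal embeddings e_c toward the original embeddings e,
    push them away from the trivial embeddings e_t. All inputs: (T, d)."""
    pos = ((e * e_c).sum(-1) / tau).exp()          # exp(s(e_t, e^C_t)/tau)
    neg = ((e_t * e_c).sum(-1) / tau).exp()        # same-step trivial term
    cross = ((e_t @ e_c.t()) / tau).exp()          # exp(s(e^T_t, e^C_k)/tau), (T, T)
    cross_sum = cross.sum(-1) - cross.diag()       # sum over k != t
    return -torch.log(pos / (pos + a1 * neg + a2 * cross_sum)).mean()

def dynamic_loss(pred, H_D):
    """L_d: f_a(G^D_{1:t-1}) should match H^D_t for t = 2..T; MSE stands in
    for the distribution distance d(., .)."""
    return F.mse_loss(pred, H_D)
```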
**Spatial-temporal explanation:** According to the structural causal model, both the dynamic relationship and the static relationship are key to the prediction and should be utilized to predict the ground-truth label. The learned causal soft mask and dynamic soft mask assist in spatial interpretability, which highlights the most important causal parts and dynamic causal parts of dynamic graphs. However, this is not enough for temporal interpretability. Due to the high temporal correlation of dynamic relationships, it is difficult to disentangle the dynamic relationship across time. To address this challenge, we treat the dynamic relationship at time $t$ as an intervention. With the help of the aggregation function $f_a(\cdot)$, we measure the causal effect of this intervention via the change of embedding. Formally, we define the causal effect at time $t$ as follows:

$$\Delta H_t^D = f_a(G_{1:t}^D) - f_a(G_{1:(t-1)}^D).$$

We combine the causal effects of the dynamic relationship and the static relationship at time $t$ as the key causal information for $G_t$ and propose a learnable weight pooling method to aggregate the information across all time slots as follows:

$$H_T = \sum_{t=1}^{T} t_p(\Delta H_t^D \oplus H_t^S)\,(\Delta H_t^D \oplus H_t^S), \quad t_p(H) = \text{Softmax}(\Psi_P H / \| \Psi_P \|),$$

where $\Psi_P$ contains the parameters to learn the temporal importance $t_p(\Delta H_t^D \oplus H_t^S)$, which provides the importance of subgraphs over different time slots and thus assists in temporal interpretability. Based on the pre-trained classifier $f_d(\cdot)$, we use the aggregated embedding to explain the ground-truth label via the prediction loss:

$$L_p = l(f_d(H_T), Y),$$

where $l(\cdot,\cdot)$ is the entropy loss. To ensure human interpretability, the explained causal subgraph set should be sparse. We impose the sparsity requirement on both the causal graph set and the dynamic causal graph set via the sparsity loss:

$$L_s = \sum_{t=1}^{T} \frac{\| A_t^C \|_1 + \| A_t^D \|_1}{\| A_t \|_1}.$$

In summary, we learn the optimal explainable causal subgraphs, dynamic subgraphs, and temporal importance by solving the following optimization problem:

$$\min_{\Theta, \Psi} L(\Theta, \Psi) = \lambda_1 L_c + \lambda_2 L_s + \lambda_3 L_p + \lambda_4 L_d,$$

where $\Theta = \{\Theta_C, \Theta_D\}$, $\Psi = \{\Psi_D, \Psi_S, \Psi_P\}$, and $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ are hyperparameters.
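Closing out the method, here is a minimal sketch of the learnable temporal pooling and the sparsity term. It assumes $Z_t$ is a flat vector combining $\Delta H_t^D$ and $H_t^S$ and stacks the per-step adjacency matrices into one tensor; all names and defaults are hypothetical.

```python
import torch
import torch.nn as nn

class TemporalPooling(nn.Module):
    """Learnable weight pooling: H_T = sum_t t_p(Z_t) * Z_t, with t_p a
    softmax over the T time steps (temporal importance weights)."""
    def __init__(self, d):
        super().__init__()
        self.psi_p = nn.Parameter(torch.randn(d))   # Psi_P

    def forward(self, Z):                           # Z: (T, d)
        w = Z @ self.psi_p / self.psi_p.norm()      # Psi_P H / ||Psi_P||
        t_p = torch.softmax(w, dim=0)               # temporal importance
        return (t_p.unsqueeze(-1) * Z).sum(0), t_p  # H_T and the weights

def sparsity_loss(A_C, A_D, A):
    """L_s: L1 ratio of the masked adjacencies to the original, summed over t.
    A_C, A_D, A: (T, V, V) stacked adjacency matrices."""
    num = A_C.abs().sum(dim=(1, 2)) + A_D.abs().sum(dim=(1, 2))
    return (num / A.abs().sum(dim=(1, 2))).sum()

# overall objective: loss = l1*L_c + l2*L_s + l3*L_p + l4*L_d  (lambdas are hyperparameters)
```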
### 3 Experiments

#### 3.1 Experimental Settings

**Datasets:** We utilize 4 synthetic datasets and 2 real-world datasets for the node classification task and the graph classification task. Table 1 shows the statistics of all datasets. Since our method is the first study on dynamic graph interpretability, there are no directly available datasets suitable for this task, so we dynamically transformed some commonly used static graph interpretability datasets. For node classification, we process three benchmark synthetic datasets, BA-Shapes, Tree-Cycles, and Tree-Grid (Ying et al., 2019), into the dynamic graph datasets DBA-Shapes, DTree-Cycles, and DTree-Grid. Furthermore, we utilize a real-world dynamic graph dataset, Elliptic.\footnote{http://www.kaggle.com/ellipticco/elliptic-data-set} For graph classification, we process the benchmark synthetic dataset BA-2motifs (Luo et al., 2020) into the dynamic graph dataset DBA-2motifs, and we utilize a real-world dataset, MemeTracker (Leskovec et al., 2009). More details of the datasets and the generation process are given in Appendix A.1.

Table 1: Statistics of datasets for both node and graph classification. The first four datasets are used for node classification; the last two for graph classification.

| | DBA-Shapes | DTree-Cycles | DTree-Grid | Elliptic | DBA-2motifs | MemeTracker |
|---------|-----------|--------------|------------|----------|-------------|-------------|
| #nodes | 700 | 871 | 1,231 | 203,769 | 25,000 | 3.3 mil. |
| #edges | 4,110 | 1,950 | 3,410 | 234,355 | 51,392 | 27.6 mil. |
| #labels | 7 | 3 | 3 | 2 | 3 | 2 |

**Baselines:** Since we are the first to explain DyGNNs, there are no existing dynamic graph interpretability benchmarks for comparison. Moreover, graph representation and graph generation models are the target models we aim to explain, so we do not compare against them directly. Consequently, we compare our approach against several powerful static interpretability frameworks for GNNs: GNNExplainer (Ying et al., 2019), PGExplainer (Luo et al., 2020), Gem (Lin et al., 2021), and OrphicX (Lin et al., 2022). For all these static baselines, we treat all nodes and edges as occurring simultaneously. More details about the baselines and hyperparameter settings are given in Appendix A.2.

#### 3.2 Results

**Explanation fidelity:** Explanation fidelity pertains to the accuracy of the explanations provided by various methods. To gauge this, we compare the predicted labels of the explanatory subgraphs with the predicted labels of the original graphs as generated by the target model. For the static baselines, we simplify the target model by excluding temporal evolution. From the results presented in Table 2, our DyGNNExplainer surpasses all other baselines by a significant margin across all datasets, for both node classification and graph classification tasks. This underscores the fidelity of the explanations produced by our method, which, in contrast to the static baselines, adeptly captures the intricate spatial-temporal correlations in dynamic graphs via our causality-inspired spatial-temporal structure. The causality-based methods OrphicX and Gem also outperform the other baselines, affirming the effectiveness of causal inference in explanation tasks. However, these methods only differentiate between trivial and causal factors, disregarding the dynamic factor. Consequently, they fall short in explaining complex spatial-temporal relationships. In contrast, our method addresses this limitation by imposing two constraints.

Table 2: Explanation accuracy of different models (%). The best performances are in bold. '*' indicates statistically significant improvements (two-sided t-test with \( p < 0.05 \)) over the best baseline; 'cls.' is short for classification.

| Task | Dataset | GNNExplainer | PGExplainer | Gem | OrphicX | DyGNNExplainer |
|--------------|---------------|--------------|-------------|------|---------|----------------|
| Node cls. | DBA-Shapes | 92.1 | 92.9 | 93.6 | 94.3 | **97.8** |
| | DTree-Cycles | 92.8 | 93.7 | 94.4 | 96.0 | **98.2** |
| | DTree-Grid | 85.2 | 85.9 | 87.1 | 90.5 | **94.2** |
| | Elliptic | 92.4 | 94.1 | 94.6 | 96.1 | **98.7** |
| Graph cls. | DBA-2motifs | 86.5 | 88.0 | 90.7 | 91.4 | **96.3** |
| | MemeTracker | 88.2 | 89.2 | 91.0 | 91.9 | **97.4** |

**Explanation interpretability analysis:** Interpretability implies that the explainer should emphasize the most crucial components of the input data while disregarding irrelevant elements. In other words, the explanation subgraphs should exhibit a high degree of sparsity.

Figure 2: Interpretability analysis and ablation study. (a) Sparsity analysis on the DBA-Shapes dataset. (b) Sparsity analysis on the DTree-Cycles dataset. (c) Ablation study on DBA-Shapes. $K$ is the number of edges in each explanation subgraph. 'Or' is the OrphicX model, and 'Dy' is our DyGNNExplainer. 'w/o. \(L_d\)', 'w/o. \(L_c\)', and 'w/o. VGAE' are DyGNNExplainer without the dynamic loss, contrastive loss, and VGAE, respectively.

Table 3: Prediction accuracy of different models (%). The best performances are in bold. '*' indicates statistically significant improvements (two-sided t-test with \(p < 0.05\)) over the best baseline.

| Dataset | GNNExplainer | PGExplainer | Gem | OrphicX | Target | DyGNNExplainer |
|---------------|--------------|-------------|------|---------|--------|----------------|
| DBA-Shapes | 35.5 | 36.3 | 38.5 | 38.7 | 40.2 | **44.6** |
| Elliptic | 39.7 | 45.6 | 43.5 | 47.8 | 84.3 | **89.2** |
To quantify this sparsity, we measure the number of subgraph edges (denoted as \(K\)); a smaller number of selected edges implies higher sparsity. We compare the explanation accuracy of DyGNNExplainer with OrphicX on DBA-Shapes and DTree-Cycles while varying the number of edges (\(K\)) in the subgraph. As illustrated in Figure 2 (a) and (b), our method outperforms OrphicX with fewer edges in the subgraphs. This superiority arises from our model's adeptness at capturing spatial-temporal correlations in dynamic graphs, allowing it to encapsulate more critical information while ensuring interpretability.

**Prediction accuracy analysis:** While we aim to construct an explainer that faithfully elucidates the inner workings of the target model, it is equally imperative to ascertain the consistency of the interpreted results with real-world facts. Consequently, we also compare the prediction accuracy of our method with the other baselines and the target model on a synthetic dataset, DBA-Shapes, and a real-world dataset, Elliptic, for the node classification task. As shown in Table 3, DyGNNExplainer outperforms all other baselines by a substantial margin on both datasets. Particularly noteworthy is its 41.4-point advantage over the best baseline, OrphicX, on the real-world Elliptic dataset. This underscores that our method generates explanations that align closely not only with the target model but also with the real-world ground truth. Static baselines, due to their inability to capture spatial-temporal correlations, fall short in accurately predicting the ground truth. Intriguingly, our model even surpasses the target model in terms of prediction accuracy. This is attributed to our model's ability to disentangle trivial, dynamic, and static relationships, thereby better capturing spatial-temporal correlations across graph time steps.

**Ablation study:** We also conduct an ablation study on the DBA-Shapes dataset to dissect the roles of the dynamic and static causal relationships. As depicted in Figure 2 (c), we present several variants for comparison. In 'w/o. VGAE', we replace the encoder with a simple GCN layer. Our findings reveal that DyGNNExplainer consistently outperforms the 'w/o. \(L_d\)' and 'w/o. \(L_c\)' versions. The 'w/o. \(L_d\)' version, lacking the dynamic loss component, fails to effectively differentiate between dynamic and static factors, leading to an inability to capture temporal correlations across the dynamic data. The 'w/o. \(L_c\)' version experiences a significant drop in performance; this decline can be attributed to its inability to distinguish between causal and trivial factors, so that trivial noise hinders the DyGNN explanations. These observations underscore the efficacy of our imposed constraints in disentangling trivial, dynamic, and static factors. Furthermore, DyGNNExplainer also outperforms the 'w/o. VGAE' version, primarily due to the superior capability of the VGAE in harnessing spatial graph information to generate soft masks.

**Case study:** To provide a more vivid demonstration of DyGNNExplainer's interpretability, we present a case study on the DBA-Shapes dataset. As depicted in Figure 3, we visually represent both the original graph and the top six weighted edges of the generated causal subgraph across all time steps using DyGNNExplainer. Additionally, we compare our results with those obtained using OrphicX.
To streamline the visualization, we have omitted edges and nodes that do not belong to the 'house' motif or are not directly linked to the target node. The variable $t_p$ denotes the importance of each time step. Notably, there is a gradual upward trend in the $t_p$ values. This trend can be attributed to the increasing completeness of the 'house' motif in later time steps, rendering them more crucial for the final interpretation. The weight assigned to the first time slice is considerably lower than that of subsequent time steps; this discrepancy is due to the fact that the recognizable pattern of the 'house' motif had not yet fully materialized in the initial time step. Additionally, the weight for the fourth time step does not significantly surpass that of the previous one, as no new edges are added to the motif during this interval. These observations underscore the excellent temporal interpretability of our approach. Furthermore, DyGNNExplainer effectively identifies the 'house' motif from the original graph in the final time step, which explains the target node label. In contrast, OrphicX erroneously attributes an edge outside of the 'house' motif. This discrepancy vividly illustrates the superior spatial interpretability of our method.

### 4 Related Work

A host of recent methods have emerged to provide explanations for Graph Neural Networks (GNNs) (Sui et al., 2022), focusing on identifying the most influential features (e.g., nodes, edges, or subgraphs) in input graphs to explain model predictions. These methods predominantly aim to generate input-dependent explanations. GNNExplainer (Ying et al., 2019) seeks soft masks for edges and node features through mask optimization to explain predictions, and Shokri et al. extend explanation methods designed for Convolutional Neural Networks (CNNs) to GNNs. However, these methods typically explain each instance individually and lack the ability to generalize across graphs, limiting their global interpretability of the target model. Recognizing the issue of hindsight bias and the compromise of faithfulness when optimizing for each instance separately, PGExplainer (Luo et al., 2020) proposes learning a mask predictor for edge masks to provide explanations. XGNN (Yuan et al., 2020) focuses on investigating graph patterns leading to specific classes. In contrast to these approaches, our work leverages causality to achieve faithful explanations, distinguishing it from existing methods. More related work on causal inference is given in Appendix A.3.

### 5 Conclusion

In conclusion, our work has addressed the critical challenges associated with interpretability in Dynamic Graph Neural Networks (DyGNNs). Our research has pioneered the development of DyGNN explanation, a novel approach tailored to the unique characteristics of dynamic graphs. Our experimental results, encompassing synthetic and real-world datasets, have demonstrated the superior performance of DyGNNExplainer in both explanation tasks and real predictions. Furthermore, we contribute to the field by generating synthetic dynamic datasets tailored for dynamic graph interpretability tasks, which lays the foundation for future developments in the field of dynamic graph analysis and interpretation.

### Acknowledgement

This work is supported by the National Key R&D Program of China under Grant 2022YFA1003900 and the Internal Project of Shenzhen Research Institute of Big Data under Grant J00220230001.

### References

Fabian Beck, Michael Burch, Stephan Diehl, and Daniel Weiskopf. A taxonomy and survey of dynamic graph visualization.
In *Computer Graphics Forum*, volume 36, pp. 133–159. Wiley Online Library, 2017.

Tanya Y. Berger-Wolf and Jared Saia. A framework for analysis of dynamic social networks. In *Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 523–528, 2006.

Clive W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. *Econometrica: Journal of the Econometric Society*, pp. 424–438, 1969.

Derek Greene, Donal Doyle, and Padraig Cunningham. Tracking the evolution of communities in dynamic social networks. In *2010 International Conference on Advances in Social Networks Analysis and Mining*, pp. 176–183. IEEE, 2010.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Thomas N. Kipf and Max Welling. Variational graph auto-encoders. *arXiv preprint arXiv:1611.07308*, 2016.

Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. In *Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 497–506, 2009.

Wanyu Lin, Hao Lan, and Baochun Li. Generative causal explanations for graph neural networks. In *International Conference on Machine Learning*, pp. 6666–6679. PMLR, 2021.

Wanyu Lin, Hao Lan, Hao Wang, and Baochun Li. OrphicX: A causality-inspired latent variable model for interpreting graph neural networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13729–13738, 2022.

Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. *Advances in Neural Information Processing Systems*, 33:19620–19631, 2020.

Franco Manessi, Alessandro Rozza, and Mario Manzo. Dynamic graph convolutional networks. *Pattern Recognition*, 97:107000, 2020.

Diego C. Nascimento, Bruno A. Pimentel, Renata M. C. R. Souza, Lilia Costa, Sandro Gonçalves, and Francisco Louzada. Dynamic graph in a symbolic data framework: An account of the causal relation using COVID-19 reports and some reflections on the financial world. *Chaos, Solitons & Fractals*, 153:111440, 2021.

Matthew O'Shaughnessy, Gregory Canal, Marissa Connor, Christopher Rozell, and Mark Davenport. Generative causal explanations of black-box classifiers. *Advances in Neural Information Processing Systems*, 33:5453–5467, 2020.

Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao Schardl, and Charles Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 5363–5370, 2020.

Judea Pearl. *Causality*. Cambridge University Press, 2009.
uHVIxJGwr4
Could the authors clarify what constitutes a 'transition' in this context? Does a transition include (s, a, s') even when FSB is not employed in VHB (which uses FSB only 5% of the time)? Do you discard any transitions? How is it ensured that you explore a wide array of instances before the 100K transitions are collected?
Learning to Branch with Offline Reinforcement Learning

Anonymous authors
Paper under double-blind review

### Abstract

Mixed Integer Linear Program (MILP) solvers are mostly built upon a branch-and-bound (B&B) algorithm, where the efficiency of traditional solvers heavily depends on hand-crafted heuristics for branching. Such a dependency significantly limits the success of those solvers because such heuristics are often difficult to obtain and not easy to generalize across domains/problems. Recent deep learning approaches aim to automatically learn the branching strategies in a data-driven manner, which removes the dependency on hand-crafted heuristics but introduces a dependency on the availability of high-quality training data. Obtaining training data that demonstrates near-optimal branching strategies can be a difficult task itself, especially for large problems where accurate solvers have a hard time scaling and producing near-optimal demonstrations. This paper overcomes this obstacle by proposing a new offline reinforcement learning (RL) approach, namely the Ranking-Constrained Actor-Critic algorithm, which can efficiently learn good branching strategies from sub-optimal or inadequate training signals. Our experiments show its advanced performance in both prediction accuracy and computational efficiency over previous methods for different types of MILP problems on multiple evaluation benchmarks.

### 1 Introduction

Combinatorial optimization (CO) has been a fundamental challenge in computer science for decades, with a wide range of real-world applications, including supply chain management, logistics optimization (Chopra & Meindl, 2001), workforce scheduling (Ernst et al., 2004), financial portfolio management (Rubinstein, 2002; Lobo et al., 2007), compiler optimization (Trofin et al., 2021; Zheng et al., 2022), and more. Many of these CO problems can be formulated within the generic framework of Mixed Integer Linear Programs (MILPs), which is a central focus in algorithm development. Traditional MILP solvers recursively apply a divide-and-conquer strategy to decompose a MILP into sub-problems with additional bounds on the variables in a tree-based search, namely Branch-and-Bound (B&B) (Land & Doig, 1960), until an optimal solution is found. Off-the-shelf solvers of this kind include SCIP (Achterberg, 2009), CPLEX (Cplex, 2009), and Gurobi (Gurobi Optimization, 2021). In each iteration, the system solves the relaxed linear program (LP) on a selected node (sub-problem) of the search tree and uses the LP solution (if it contains any fractional variable) to further divide the current problem into two sub-problems. Such traditional solvers heavily rely on hand-crafted, domain-specific heuristics for branching, which limits their true success and their capability to generalize across domains.

Recent machine learning research has offered new ways to solve MILPs by replacing the need for hand-crafted heuristics with heuristics automatically learned from training data (Gasse et al., 2019; Nair et al., 2020b; Scavuzzo et al., 2022; Parsonson et al., 2023). As a representative example, Gasse et al. (2019) formulated the MILP as a bipartite graph with variable nodes on the left and constraint nodes on the right, and trained a Graph Neural Network (GNN) to predict the promising variables for branching in B&B. Follow-up works include improvements of the GNN models for scaling up (Nair et al., 2018; Gupta et al., 2020) and enhanced solutions (Zarpellon et al., 2021; Qu et al., 2022; Huang et al., 2023b).
All of these models are trained via imitation learning (IL), and they thus share one limitation: their effectiveness relies on the availability of high-quality training data that demonstrates near-optimal branching strategies, such as the full strong branching strategy (Achterberg et al., 2005b). Obtaining such training data can be difficult or highly costly in practice, especially for very large graphs where state-of-the-art (SOTA) MILP solvers cannot scale up to produce high-quality demonstrations. Addressing this obstacle, some reinforcement learning (RL) methods have been proposed (Sun et al., 2020; Scavuzzo et al., 2022; Parsonson et al., 2023), which support learning from scratch without any demonstrations. Nonetheless, those RL-based methods rely on time-consuming online interactions with the solver, can only be trained on easy MILPs solved in minutes, and show poor transfer performance (i.e., when trained on small graphs and tested on large graphs) on evaluation benchmarks (Scavuzzo et al., 2022).

This paper introduces a novel offline RL approach, namely Ranking-Constrained Actor-Critic (RCAC), to address the aforementioned limitations in learning to branch. Different from standard RL models, which rely on online interactions with the environment for collecting training signals, offline RL is trained directly on a static dataset pre-collected from the environment with a certain behavior policy. Nonetheless, similar to online RL, offline RL harnesses reward information to train the model rather than merely duplicating the training-set behavior, which is the case with imitation learning (IL). Consequently, offline RL can inherit the exploration capability from RL-based MILP solvers on the one hand, and can significantly reduce the computational cost of training data generation on the other hand. As far as we know, RCAC is the first attempt to apply offline RL algorithms to MILP solving.

Our empirical results demonstrate the applicability of RCAC to various types of MILPs in the settings of both exact solving (without time constraints) and time-constrained solving. RCAC consistently outperforms the representative baseline methods across 6 benchmark datasets in terms of both branching quality and training efficiency, including those with hand-crafted heuristics, the IL-based methods, and previous RL-based methods. We present evidence that RCAC behaves better when trained on either sub-optimal datasets containing sparse good demonstrations or small near-optimal datasets collected within a short time. In short, our findings suggest that RCAC holds promise as a potent neural MILP solver for practical applications.

### 2 Background

#### 2.1 The B&B Algorithm

Each Mixed Integer Linear Program (MILP) is defined by a linear objective, linear constraints, and integrality constraints, which can be formally expressed as

$$\min c^\top x, \text{ s.t. } Ax \leq b, \quad x \in \mathbb{Z}^p \times \mathbb{R}^{n-p},$$

where $c \in \mathbb{R}^n$ represents the objective coefficient vector, $A \in \mathbb{R}^{m \times n}$ the constraint coefficient matrix, $b \in \mathbb{R}^m$ the constraint right-hand side, and $p \leq n$ the number of integer variables. When the integrality constraints are disregarded, we obtain a linear program (LP), which can be solved efficiently with algorithms like the Simplex algorithm.
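To make the formulation concrete, here is a toy instance sketched with PySCIPOpt, the Python interface to the SCIP solver used later in the paper; the specific numbers and the model name are illustrative only.

```python
from pyscipopt import Model  # assumes `pip install pyscipopt`

m = Model("toy-milp")
x = m.addVar(vtype="I", name="x")          # integer variable (so p = 1)
y = m.addVar(vtype="C", name="y")          # continuous variable
m.setObjective(3 * x + 2 * y, "minimize")  # the linear objective c^T x
m.addCons(-x - 2 * y <= -4)                # one row of Ax <= b (i.e., x + 2y >= 4)
m.addCons(x - y <= 3)                      # another row of Ax <= b
m.optimize()                               # SCIP runs branch-and-bound internally
print(m.getVal(x), m.getVal(y), m.getObjVal())
```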
Disregarding the integrality constraints in this way is known as linear programming relaxation; it gives a lower bound for the original problem, since the LP is solved over a larger feasible region. If the LP-relaxed solution $x^{LP}$ happens to be integral, then $x^{LP}$ is also guaranteed to be the optimal solution for the original MILP and the solving is done. Otherwise, there must be a set of variables $C$ such that $x^{LP}[i]$ is fractional for $i \in C$. The B&B algorithm then selects a variable from $C$ to partition the problem into two child problems, with the additional constraint

$$x[i] \leq \lfloor x^{LP}[i] \rfloor \quad \text{or} \quad x[i] \geq \lceil x^{LP}[i] \rceil.$$

This partition process is known as variable selection or branching. With multiple subproblems at hand, the B&B algorithm selects a subproblem to explore at each step. B&B tracks two pivotal values throughout the solving process, the global primal bound (the lowest objective value over all feasible solutions found) and the dual bound (the highest objective value over all relaxed solutions), and it continues iterating through the aforementioned steps until the primal bound converges to the dual bound.

The quality of the branching policy has a high impact on the computational cost of B&B. The branching policy needs to balance the size of the search tree and the computational cost of obtaining the branching decision. Among the current heuristics, full strong branching (FSB) computes the actual change in the dual bound by solving the resultant subproblem for each fractional variable, which usually achieves a smaller search tree than competing methods (Achterberg et al., 2005a). However, the computational cost of obtaining the actual bound change is itself expensive. Instead, pseudocost branching (PB) (Achterberg et al., 2005a) conducts a fast estimation of the change in bound by averaging the previous changes observed after branching on each variable, which is faster to compute at the cost of a larger search tree (Achterberg et al., 2005a). In modern solvers, a hybrid branching strategy known as reliability pseudocost branching (RPB) is adopted, which uses FSB at the start of B&B and switches to PB for the remaining steps (Achterberg et al., 2005a).
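The overall loop can be summarized by the schematic sketch below (not SCIP's actual implementation); `select_node`, `select_branch_var`, `solve_lp`, `node.int_vars`, and `node.with_bound` are hypothetical stand-ins for the node-selection rule, the branching rule, the LP solver, and the subproblem data structure.

```python
import math

def branch_and_bound(milp, select_node, select_branch_var, solve_lp):
    """Schematic B&B for a minimization MILP; returns the best solution found."""
    best_obj, best_sol = math.inf, None          # primal bound / incumbent
    open_nodes = [milp]                          # start from the root problem
    while open_nodes:
        node = select_node(open_nodes)           # node selection heuristic
        open_nodes.remove(node)
        lp_obj, lp_sol = solve_lp(node)          # LP relaxation -> local dual bound
        if lp_sol is None or lp_obj >= best_obj:
            continue                             # prune: infeasible or dominated
        # fractional integer variables (ignoring numerical tolerances)
        frac = [i for i in node.int_vars if not float(lp_sol[i]).is_integer()]
        if not frac:                             # integral LP solution: new incumbent
            best_obj, best_sol = lp_obj, lp_sol
            continue
        i = select_branch_var(node, frac, lp_sol)    # <-- the (learned) branching policy
        lo, hi = math.floor(lp_sol[i]), math.ceil(lp_sol[i])
        open_nodes += [node.with_bound(i, upper=lo), node.with_bound(i, lower=hi)]
    return best_obj, best_sol
```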
#### 2.2 Reinforcement Learning Formulation for B&B

In standard reinforcement learning (RL), an agent continually interacts with the environment, typically modeled as a Markov Decision Process (MDP). An MDP is defined by a tuple \((S, A, p, r, \rho_0, \gamma)\), where \(S\) and \(A\) represent the state and action spaces, \(p(s'|s, a) : S \times A \times S \rightarrow [0, 1]\) and \(r(s, a) : S \times A \rightarrow \mathbb{R}\) represent the state transition and reward functions, \(\rho_0(s)\) denotes the initial state distribution, and \(\gamma \in [0, 1)\) is the discount factor. RL aims to find a policy \(\pi(a|s) : S \rightarrow A\) that maximizes the expected cumulative discounted reward, also known as the expected return, denoted as \(J(\pi) = \mathbb{E}_{s_0 \sim \rho_0(\cdot), a_t \sim \pi(\cdot|s_t), s_{t+1} \sim p(\cdot|s_t, a_t)}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]\). Each policy \(\pi\) has a corresponding value function \(Q^\pi(s, a)\), which quantifies the expected return when following the policy \(\pi\) after taking action \(a\) at state \(s\):

$$Q^\pi(s, a) = \mathbb{E}_{a_t \sim \pi(\cdot|s_t), s_{t+1} \sim p(\cdot|s_t, a_t)}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s, a_0 = a\right].$$

Assume the reward is bounded, i.e., \(|r(s, a)| \leq R_{max}\); the value function \(Q^\pi\) can then be computed by iteratively applying the Bellman operator \(T^\pi Q(s, a) = r(s, a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot|s, a), a' \sim \pi(\cdot|s')}[Q(s', a')]\). When \(\gamma \in [0, 1)\), the Bellman operator is a contraction (Bertsekas & Tsitsiklis, 1996) with the unique fixed point \(Q^\pi(s, a)\). For standard Actor-Critic algorithms with a parameterized policy \(\pi_\phi\) (actor) and Q-network \(Q_\theta\) (critic), the update alternates between policy evaluation (Equation 4) and policy improvement (Equation 5):

$$\theta \leftarrow \arg \min_\theta \mathbb{E}_{(s, a, s')}[(r(s, a) + \gamma \mathbb{E}_{a' \sim \pi_\phi(\cdot|s')}[Q_{\theta'}(s', a')] - Q_\theta(s, a))^2], \quad (4)$$

$$\phi \leftarrow \arg \max_\phi \mathbb{E}_s \mathbb{E}_{a \sim \pi_\phi(\cdot|s)}[Q_\theta(s, a)], \quad (5)$$

where \(Q_{\theta'}\) is a slowly updated target Q-function used for a stable estimation of the target Q-value.

The branching inside B&B can also be formulated as an MDP, with the brancher being the agent and the solver being the environment. Starting from the root node \(s_0\), at each step the brancher receives the current B&B search tree as the state \(s\) and selects a variable \(a\) from the set of all fractional variables \(A(s)\) for branching. It then receives a manually defined reward \(r(s, a)\), and the solver partitions the problem accordingly, updating the search tree to the next state \(s'\). By choosing a reasonable reward function, we can apply RL algorithms to automatically learn a branching policy that maximizes the expected return.
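A minimal sketch of one such alternation for a discrete action space is given below; the network interfaces and batch format are hypothetical (the actor is assumed to return per-action log-probabilities, the critics per-action Q-values).

```python
import torch
import torch.nn.functional as F

def actor_critic_step(batch, actor, critic, target_critic, gamma=0.99):
    """One alternation of policy evaluation (Eq. 4) and improvement (Eq. 5)."""
    s, a, r, s_next = batch
    with torch.no_grad():
        probs_next = actor(s_next).exp()                        # pi_phi(.|s')
        q_next = (probs_next * target_critic(s_next)).sum(-1)   # E_{a'}[Q_theta'(s', a')]
        target = r + gamma * q_next
    q_sa = critic(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    critic_loss = F.mse_loss(q_sa, target)                      # policy evaluation
    probs = actor(s).exp()
    actor_loss = -(probs * critic(s).detach()).sum(-1).mean()   # policy improvement
    return critic_loss, actor_loss
```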
#### 2.3 Offline Reinforcement Learning

In contrast to standard RL, which operates in an online setting, offline RL dispenses with real-time interaction. Instead, it trains a policy using a pre-collected dataset \(D = \{(s, a, s', r(s, a))\}\). The policy responsible for generating this dataset is referred to as the behavior policy \(\pi_\beta(a|s)\). Behavior cloning (BC), one type of imitation learning (IL) method, simply estimates the conditional action distribution from the samples in \(D\) via supervised learning. The performance of BC is highly dependent on the quality of the behavior policy, and it typically assumes that the behavior policy is close enough to the optimal policy \(\arg \max_\pi J(\pi)\). In B&B, FSB is usually chosen as the behavior policy for training data generation. Although FSB generally achieves high-quality branching, it can still become sub-optimal when the linear programming relaxation is uninformative or when there exists dual degeneracy (Gamrath et al., 2020). Moreover, it is time-consuming to obtain demonstrations from FSB when it comes to large and hard MILP instances. In comparison, offline RL can make use of the reward information to evaluate the action value as standard RL does, making it a better choice when \(\pi_\beta(a|s)\) is sub-optimal or \(D\) is noisy (Kumar et al., 2022).

However, applying online RL algorithms directly to offline RL is challenging due to the distributional shift between $\pi_\phi$ and $\pi_\beta$ (Kumar et al., 2019; Wu et al., 2019; Jaques et al., 2019; Levine et al., 2020). The Bellman operator in Equation 4 relies on the actions $a'$ sampled from $\pi_\phi(\cdot|s')$ to estimate target Q-values. When $a'$ falls outside the distribution of actions in the dataset $D$, its Q-value estimate can be arbitrarily wrong. Consequently, $\pi_\phi$ may be biased towards those out-of-distribution (OOD) actions with erroneously high values when it is optimized to maximize the expected Q-values in Equation 5. Such errors could be corrected by further interaction in the online setting, but in offline RL they can only be avoided by constraining the policy $\pi_\phi$ from querying OOD actions. Offline RL algorithms differ mainly in how they implement this constraint, and we introduce the details of our solution in the next section.

### 3 Method

#### 3.1 Reward Function for Branching

There are multiple ways to measure the quality of a branching strategy, such as the solving time, the size of the B&B search tree, the number of iterations spent in solving LPs, and dual integrals. We choose the improvement of the dual bound, $|c^\top x^{LP}_{t+1} - c^\top x^{LP}_t|$, as the reward function for the following reasons. First, different from metrics involving time measurements, such as per-step solving time and dual integrals, its value does not depend on the time cost of obtaining the branching decision and is invariant to operating systems. Second, the dual-bound improvement can serve as a direct indicator of the quality of the branching decision that led to it, whereas metrics such as the number of LP iterations and the change in the search tree's size do not provide this insight. Finally, the cumulative discounted value of the dual-bound improvement remains informative when an MILP instance is not solved exactly but stopped at a given time limit, in contrast to the cumulative discounted number of LP iterations or the increase in the size of the search tree. The discount factor $\gamma$ also favors an early improvement of the dual bound in the expected return, as the dual integral does.

#### 3.2 Ranking-Constrained Actor-Critic Algorithm

Since the Q-value estimated at a rarely explored action is imprecise in offline RL, one intuitive approach is to restrict the policy $\pi_\phi$ from selecting actions that have a low probability density in the dataset (Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019). Nevertheless, when the behavior policy is sub-optimal, high-quality actions almost surely have a low probability density and will unavoidably be excluded by such a strict constraint. In fact, a good action does no harm to policy optimization even if it is an OOD action. Therefore, our idea is to balance the quality and the probability density of actions when filtering out toxic OOD actions. Normally, there is no way to tell whether an action is good in offline RL until we have evaluated its Q-value. In B&B, however, we can use the dual-bound improvement an action brings, which is also our reward function, to coarsely evaluate its branching quality, as FSB does. We therefore first train a scoring function $G_\omega(a|s)$ which uses different weights to maximize the log-likelihood of an action in the dataset given the reward it obtains. The training objective can be expressed as

$$\arg \min_\omega \mathbb{E}_{(s,a,r(s,a)) \sim D}[-(\lambda 1_{r(s,a) > \zeta} + 1) \log G_\omega(a|s)], \quad (6)$$

where $\lambda \geq 0$ is a factor promoting the actions leading to a dual-bound improvement greater than $\zeta$. In most cases, $\zeta$ can simply be set to zero due to the sparse-reward nature of the environment.
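The weighted log-likelihood objective in Equation 6 can be sketched directly; the batch layout below is an assumption for illustration.

```python
import torch

def scoring_loss(log_probs, rewards, lam=1.0, zeta=0.0):
    """Weighted log-likelihood for G_omega (Eq. 6): actions whose dual-bound
    improvement exceeds zeta receive an extra weight of lam.
    log_probs: log G_omega(a|s) of the demonstrated actions, shape (B,)."""
    weights = lam * (rewards > zeta).float() + 1.0
    return -(weights * log_probs).mean()
```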
We then constrain the policy $\pi_\phi$ by depressing the Q-values of actions outside the top $k$ candidates at state $s$ ranked by $G_\omega(a|s)$, using a large negative value $-\delta$, i.e.,

$$\tilde{Q}(s, a) = \begin{cases} Q(s, a), & \text{if } a \in \text{top-}k(G_\omega(a|s)), \\ -\delta, & \text{otherwise}. \end{cases}$$

We then refine the policy evaluation and policy improvement of the Actor-Critic algorithm as

$$\theta \leftarrow \arg \min_\theta \mathbb{E}_{(s,a,s') \sim D}[(r(s, a) + \gamma \mathbb{E}_{a' \sim \pi_\phi(\cdot|s')}[\tilde{Q}_{\theta'}(s', a')] - Q_\theta(s, a))^2],$$

$$\phi \leftarrow \arg \max_\phi \mathbb{E}_{s \sim D, a \sim \pi_\phi(\cdot|s)}[\tilde{Q}_\theta(s, a)].$$

We refer to our method as the Ranking-Constrained Actor-Critic (RCAC) algorithm. The ranking constraint could alternatively be realized with a relative rank, i.e., the top $k\%$ candidates, but using an absolute rank is more friendly to the batch operations in neural network training. During inference, the action $\arg \max_a \pi_\phi(a|s)$ is taken at each step. Our algorithm can be summarized as follows.

**Algorithm 1 Ranking-Constrained Actor-Critic**

1: **Input:** Dataset $D = \{(s, a, s', r(s, a))\}$
2: Randomly initialize the ranking model $G_\omega$, policy network $\pi_\phi$, and Q-network $Q_\theta$
3: Pretrain $G_\omega$ with the loss defined in Equation 6
4: **for** iteration $i = \{1, \cdots, I\}$ **do**
5: Sample a batch of transitions $B$ from $D$
6: $\theta \leftarrow \arg \min_\theta \mathbb{E}_{(s,a,s',r(s,a)) \sim B}[(r(s, a) + \gamma \mathbb{E}_{a' \sim \pi_\phi(\cdot|s')} [\tilde{Q}_{\theta'}(s', a')] - Q_\theta(s, a))^2]$
7: $\phi \leftarrow \arg \max_\phi \mathbb{E}_{s \sim B, a \sim \pi_\phi(\cdot|s)} [\tilde{Q}_\theta(s, a)]$
8: $\theta' \leftarrow \tau \theta' + (1 - \tau) \theta$
9: **end for**
10: **return** $\pi_\phi$

#### 3.3 Modeling the B&B Tree

We use a bipartite graph representation for each B&B node: $G = (V, C, E)$, with variable node features $V \in \mathbb{R}^{n \times d_1}$, constraint node features $C \in \mathbb{R}^{m \times d_3}$, and edge features $E \in \mathbb{R}^{n \times m \times d_2}$. We use the same features and GNN architecture as Gasse et al. (2019), and the model architecture is kept the same for $G_\omega$, $\pi_\phi$, and $Q_\theta$. We normalize both node and edge features in the dataset. For example, given the $i$-th feature of a node $j$, we normalize it as $V[j,i] \leftarrow (V[j,i] - \mu_i^v)/\sigma_i^v$, where $\mu_i^v$ and $\sigma_i^v$ are the estimated mean and standard deviation of the $i$-th dimension of the node features. The constraint features and edge features are processed similarly.
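The ranking constraint above reduces to a simple masking operation over the per-state candidate actions; a minimal sketch follows, with the batch layout assumed for illustration.

```python
import torch

def rank_constrained_q(q_values, scores, k, delta=1e6):
    """Keep Q(s, a) for the top-k actions ranked by G_omega(a|s); replace
    all other entries with the large negative constant -delta.
    q_values, scores: shape (B, A) over the candidate branching variables."""
    topk = scores.topk(k, dim=-1).indices
    mask = torch.zeros_like(q_values, dtype=torch.bool).scatter_(-1, topk, True)
    return torch.where(mask, q_values, torch.full_like(q_values, -delta))
```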
### 4 Experiments

#### 4.1 Experimental Setup

**Baselines.** We compare our method to two classical branching heuristics, full strong branching (FSB) and reliability pseudocost branching (RPB), and two neural methods: the online RL method tree MDP (tMDP) (Scavuzzo et al., 2022) and the IL method GGCN (Gasse et al., 2019). Besides, we also design a vanilla hybrid branching (VHB) heuristic, which adopts FSB with probability 0.05 at each decision step and uses pseudocost branching otherwise. VHB serves as one type of behavior policy for our demonstration collection.

**Metrics.** We use two different types of metrics from previous evaluations of B&B methods (Gasse et al., 2019; Nair et al., 2018; Gasse et al., 2022). The first type of metric evaluates the model's efficiency for exact solving without any time constraint, including the total solving time and the size of the B&B search tree (measured by its number of nodes). The former is a universal metric to compare both neural methods and hand-crafted heuristics, while the latter is more straightforward for comparison when the decision time is basically the same, as is the case for the neural methods. The second type of metric evaluates the quality of the dual bound when solving is constrained by a given time limit $T$; here we use the dual-integral metric, $Tc^\top x^* - \int_{t=0}^{T} z_t^* \, dt$, where $c^\top x^*$ is the optimal objective value and $z_t^*$ is the best dual bound at time $t$. $T$ is set to 15 minutes in our experiments.

**Benchmarks.** We evaluate our method on six commonly used MILP benchmarks, including four synthetic easy problems and two hard problems from real-world applications, as listed in Table 1. They are categorized as easy and hard problems according to the time needed for exact solving: MILP instances from the easy problems can all be solved by SCIP (version 7.0.3) within 10 minutes, while MILP instances from the hard problems take SCIP more than 1 hour on average. The four synthetic easy problems are Set Covering (SC), Maximum Independent Set (MIS), Combinatorial Auction (CA), and Capacitated Facility Location (CFL). We follow the same instance generation process as Gasse et al. (2019) to generate 10,000 MILP instances for training, 2,000 instances for validation, and 20 instances for testing on each problem. We use the solving time and search tree size as the evaluation metrics, since these instances can all be solved in a short time. The two hard problems, Workload Apportionment (WA) and Anonymous Problem (AP), are from the ML4CO competition (Gasse et al., 2022). We use their existing training, validation, and testing splits. In light of the difficulty of solving their instances exactly, we use the dual integral as the evaluation metric for these two problems. Since obtaining the optimal solution \( x^* \) is hard in practice and it does not affect the comparison among methods, we directly report the score from the ML4CO evaluation script, which is a negated, unshifted version of the dual integral intended to be maximized. Additional details about the instances for each benchmark are available in the Appendix.

To generate the demonstrations for training, we consider two different scenarios. In the first scenario, we assume we only have access to a sub-optimal heuristic on a certain problem. We simulate this heuristic with VHB and use it to generate a dataset with 100,000 transitions. In the second scenario, we still have access to the near-optimal heuristic, but due to its expensive cost, we can only generate a small dataset for training. We use FSB, which has been empirically shown to be near-optimal on the benchmarks we consider, to generate 5,000 transitions on each problem, whose size is only 5% of the standard dataset size used for training previous IL methods (Gasse et al., 2019; Gupta et al., 2020). We compare the generation time for both our datasets and the standard datasets in Table 1.
| Dataset Prefix | Problem | Time for Our VHB Dataset | Time for Our FSB Dataset | Time for Standard FSB Dataset |
|---------------|--------------------------|--------------------------|--------------------------|------------------------------|
| SC | Set Covering | 0.3 h | 0.2 h | 1.0 h |
| MIS | Maximum Independent Set | 1.1 h | 0.4 h | 4.5 h |
| CA | Combinatorial Auction | 0.2 h | 0.1 h | 0.8 h |
| CFL | Capacitated Facility Location | 2.2 h | 1.0 h | 7.2 h |
| WA | Workload Apportionment | 21.2 h | 13.3 h | 266.4 h |
| AP | Anonymous Problem | 1.1 h | 1.1 h | 6.4 h |

Table 1: Dataset collection statistics. We employ 20 parallel SCIP solvers to collect the demonstrations for each dataset; collection time is in hours. The results show that collecting demonstrations for the standard FSB dataset is much more expensive than for our VHB dataset and our small FSB dataset.

#### 4.2 Efficiency for Exact Solving

We first evaluate RCAC on its efficiency for the exact solving of MILPs. We train RCAC and GGCN on both the sub-optimal dataset collected by VHB (denoted with 'H') and the small near-optimal dataset collected by FSB (denoted with 'S'). Five random seeds are used during training and testing for each method. We compare the solving time and the size of the search tree in Figure 1 and report the means and standard deviations in Tables 2 and 3.

Figure 1: Comparison among all methods in the solving time (left) and the size of the search tree (right) on SC, MIS, CA, and CFL. The y-axis is in log scale.

Compared with the non-neural baselines, both RCAC and GGCN show clear advantages in solving MILPs exactly with less time, though trained on a sub-optimal dataset or a smaller near-optimal dataset. Besides, it can be clearly observed that RCAC is better than GGCN across all benchmarks and both types of training datasets in both solving time and the number of nodes, especially on MIS and CA.

| Model | SC Time (s) ↓ | MIS Time (s) ↓ | CA Time (s) ↓ | CFL Time (s) ↓ |
|---------|-------------|-------------|-------------|-------------|
| FSB | 11.61 ± 0.28 | 134.38 ± 5.50 | 140.26 ± 2.55 | 98.65 ± 9.87 |
| RPB | 2.31 ± 0.04 | 7.10 ± 0.17 | 5.51 ± 0.04 | 25.23 ± 0.98 |
| VHB | 2.76 ± 0.14 | 44.37 ± 6.64 | 31.45 ± 1.56 | 26.85 ± 1.91 |
| tMDP | 12.24 ± 0.05 | 6.92 ± 2.94 | 3.56 ± 0.10 | 24.06 ± 0.41 |
| GGCN (H) | 1.78 ± 0.05 | 5.86 ± 0.31 | 5.06 ± 0.23 | 20.71 ± 1.66 |
| RCAC (H) | 1.78 ± 0.04 | 4.64 ± 0.19 | 3.22 ± 0.09 | 19.94 ± 0.50 |
| GGCN (S) | 1.76 ± 0.07 | 4.29 ± 0.13 | 4.05 ± 0.11 | 22.63 ± 0.96 |
| RCAC (S) | 1.73 ± 0.04 | 4.13 ± 0.14 | 3.15 ± 0.06 | 22.47 ± 1.31 |

Table 2: Comparative results in time for exact solving on SC, MIS, CA, and CFL. We bold the best results and color the second-best results in green on each dataset.
| Model | SC # Nodes ↓ | MIS # Nodes ↓ | CA # Nodes ↓ | CFL # Nodes ↓ |
|---|---|---|---|---|
| FSB | 46.0 ± 0.1 | 73.0 ± 4.5 | 559.7 ± 7.51 | 150.7 ± 4.7 |
| RPB | 28.0 ± 2.4 | 96.7 ± 14.3 | 840.2 ± 49.6 | 86.1 ± 12.2 |
| VHB | 99.1 ± 6.8 | 677.0 ± 141.4 | 2330.3 ± 115.9 | 228.4 ± 12.1 |
| tMDP | 254.9 ± 20.9 | 1163.1 ± 1295.9 | 1136.2 ± 55.1 | 316.1 ± 30.4 |
| GGCN (H) | 80.8 ± 7.3 | 454.5 ± 77.8 | 1471.6 ± 94.5 | 235.2 ± 14.9 |
| RCAC (H) | 80.3 ± 13.7 | 185.2 ± 34.8 | 774.9 ± 27.8 | 211.9 ± 14.2 |
| GGCN (S) | 69.3 ± 8.2 | 127.7 ± 29.9 | 1040.0 ± 36.0 | 246.5 ± 22.1 |
| RCAC (S) | 65.9 ± 6.1 | 96.4 ± 16.8 | 718.8 ± 24.5 | 242.3 ± 15.7 |

Table 3: Comparative results in the size of the search tree for exact solving on SC, MIS, CA, and CFL. Human heuristics and neural methods are above and below the line, respectively. We bold the best results and color the second-best results for neural methods in green on each dataset.

Different from GGCN, which simply learns the conditional action distribution from the dataset, RCAC can utilize the reward information to evaluate the quality of actions. We want to highlight that this capability can also explain the success of RCAC on smaller near-optimal datasets. Typically, a high-quality branching decision in the first few steps of B&B has a more profound impact on the size of the search tree, as the spirit of RPB suggests. Since the dual-bound improvement is also larger at this early stage, Equation 5 encourages RCAC to place more emphasis on learning good actions in the first few steps of B&B, owing to the large Q-values at this time. GGCN, in contrast, imitates branching decisions equally at all stages of B&B, resulting in inferior performance compared to RCAC when data is scarce.

Finally, although tMDP can sometimes achieve good performance, such as on CA, its overall performance is still worse than that of GGCN and RCAC trained on a sub-optimal or a small near-optimal dataset, not to mention its overwhelming training time, which can amount to six days. In comparison, the data collection and training of RCAC take only a few hours, and it achieves much better branching performance. Therefore, although it shares the same motivation of overcoming the limitations of collecting datasets with FSB, training RCAC on sub-optimal or small near-optimal datasets is clearly better than training an RL agent from scratch. All these findings combine to justify RCAC's advantage over both IL and RL methods in learning to branch for exact solving.

### 4.3 Dual Integral for Time-constrained Solving

We then evaluate RCAC on the two hard problems, WA and AP, using the dual-integral score. We exclude tMDP on these two datasets due to its long training time and poor performance on the easy problems. We evaluate each model on 20 testing instances from the official split and report the best results for each model. We compare the model performance in Figure 2 and Table 4.

Overall, the neural methods do not show a very strong advantage over the non-neural methods, possibly due to the hardness of the problems themselves. Still, RCAC shows promising signals. When trained on a small near-optimal dataset, RCAC leads all methods on WA and is the second best on AP. Moreover, RCAC outperforms GGCN when trained on both types of datasets, especially on AP, where dense rewards exist in the environment. This evidence indicates RCAC's potential to improve training efficiency on hard problems like WA, where the data collection time could amount to days or weeks.
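For concreteness, the dual-integral quantity defined above can be computed from a solver log of dual-bound improvements. The sketch below assumes the dual bound is piecewise constant between improvements; the function and variable names are illustrative, and this is not the ML4CO evaluation script (which reports a negated variant of the same quantity).

```python
def dual_integral(times, dual_bounds, time_limit):
    """Integrate the best dual bound z_t over [0, T], assuming it is
    piecewise constant: dual_bounds[i] holds from times[i] until times[i+1]."""
    total = 0.0
    for i, z in enumerate(dual_bounds):
        start = times[i]
        end = times[i + 1] if i + 1 < len(times) else time_limit
        total += z * (min(end, time_limit) - max(start, 0.0))
    return total

def dual_integral_metric(times, dual_bounds, time_limit, optimal_value):
    """The metric from the text: T * c^T x* - integral_0^T z_t dt.
    Smaller is better, since a dual bound closer to c^T x* shrinks the gap."""
    return time_limit * optimal_value - dual_integral(times, dual_bounds, time_limit)
```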
| Model | WA Score ↑ | AP Score ↑ |
|---|---|---|
| FSB | 633653 | 25411832 |
| RPB | 634846 | **27368259** |
| VHB | 633837 | 25411230 |
| tMDP | - | - |
| GGCN (H) | 635072 | 25308238 |
| RCAC (H) | **635099** | 25311504 |
| GGCN (S) | 635074 | 25430097 |
| RCAC (S) | **635103** | **25564703** |

Table 4: Comparative results in the score (the negated, unshifted version of the dual integral, to be maximized) on WA and AP. We bold the best results and color the second-best results in green on each dataset.

### 4.4 Ablation Study

Although our ranking model relies on dual-bound information to roughly evaluate the quality of a branching decision, similar to FSB, RCAC is still different from imitating FSB since it maximizes not the instantaneous dual-bound change but the long-term cumulative reward. To further understand the source of RCAC's improvement, we use the models trained on the hybrid branching dataset as an example for ablation. We compare the testing performance of the pretrained $G_\omega$ used for the final training of RCAC in Table 5. Undeniably, the strong performance of $G_\omega$ contributes substantially to the improvement of RCAC, as $G_\omega$ already shows a comprehensive advantage over GGCN trained on the same dataset. However, we also observe that in most cases RCAC further improves upon $G_\omega$, most prominently on CA. To further understand the exploration ability of RCAC, we visualize the effect of $k$ on RCAC's performance on the CA dataset in Figure 3. As $k$ increases, the number of nodes keeps decreasing. This suggests that RCAC is not simply performing knowledge distillation (Gupta et al., 2020) from $G_\omega$, but is learning to evaluate the Q-values of the top candidates ranked by $G_\omega$ and to maximize the expected return.

### 5 Related Work

#### 5.1 Neural MILP Solvers

Traditional MILP solvers rely on many hand-crafted heuristics during their execution. Neural solvers thus aim to improve these heuristics with deep learning methods (Bengio et al., 2021). Existing work has successfully improved solver performance by learning the heuristics for variable selection (branching) (Gasse et al., 2019; Gupta et al., 2020; Nair et al., 2020b; Zarpellon et al., 2021; Scavuzzo et al., 2022; Huang et al., 2023b), node selection (He et al., 2014; Song et al., 2018), cutting plane selection (Tang et al., 2020; Paulus et al., 2022; Turner et al., 2023), large neighborhood search (Sun et al., 2020; Wu et al., 2021; Sonnerat et al., 2021; Huang et al., 2023a), diving (Nair et al., 2020b; Yoon, 2022; Han et al., 2023; Paulus & Krause, 2023), and primal heuristics selection (Khalil et al., 2017; Hendel et al., 2019; Chmiela et al., 2021). Our work studies the variable selection heuristic, which has received the most attention among neural solvers.

Khalil et al. (2016), Alvarez et al. (2017), and Hansknecht et al. (2018) are the earliest works to use statistical learning for the branching heuristic. They use an imitation learning method that first collects an offline dataset with full strong branching and then treats learning as either a ranking (Khalil et al., 2016; Hansknecht et al., 2018) or a regression problem (Alvarez et al., 2017). With the advent of GNNs, Gasse et al. (2019) transform each MILP instance into a bipartite graph consisting of variable nodes and constraint nodes and train a GNN classifier to imitate the choice of strong branching.
This work lays out the basic model architecture for neural solvers on variable selection. To extend this GNN-based neural solver to larger instances, Nair et al. (2020b) adopt a more efficient batch linear programming solver based on the alternating direction method of multipliers. Furthermore, Gupta et al. (2020) improve on the low efficiency of GNNs by using a hybrid model: they extract the structural information of each MILP with a GNN once at the root node and then use a fast multilayer perceptron for classification at each node, combining the extracted structural information with the current node features. More recently, Scavuzzo et al. (2022) proposed a reinforcement learning approach for learning to branch that formulates B&B as a tree-structured MDP, while Parsonson et al. (2023) use RL to learn efficiently from retrospective trajectories. Huang et al. (2023b) and Qu et al. (2022) are the two methods most similar to our work. Although both are claimed to be offline RL methods, they differ from offline RL algorithms proper, whose defining feature is handling OOD actions: they still assume cheap access to a near-optimal expert heuristic and do not consider sub-optimal datasets. Our method is therefore, de facto, the first to apply offline RL to learning to branch.

#### 5.2 Offline Reinforcement Learning

Offline RL has wide applications in robotic manipulation (Kalashnikov et al., 2018; Mandlekar et al., 2019; Singh et al., 2021; Kalashnikov et al., 2021), text generation (Jaques et al., 2020; Snell et al., 2023), and healthcare (Shortreed et al., 2010; Wang et al., 2018), but it is known to suffer from the distributional shift problem (Kumar et al., 2019; Wu et al., 2019; Jaques et al., 2019; Levine et al., 2020). Existing methods generally tackle this challenge by restricting the policy from generating OOD actions via an explicit density model (Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Ghasemipour et al., 2020), an implicit divergence constraint (Peng et al., 2019; Nair et al., 2020a; Wang et al., 2020; Kostrikov et al., 2022; Li et al., 2023), conservative estimation of state-action values (Kumar et al., 2020; Kostrikov et al., 2021; Lyu et al., 2022), or by adding a behavior cloning term to the policy improvement objective (Nair et al., 2018; Fujimoto & Gu, 2021). Our model is most closely related to the offline RL methods in the first category, but we further tackle the challenge of a dynamic action space and incorporate information unique to the B&B algorithm. Compared with imitation learning methods such as behavior cloning, offline RL can be more robust to noisy or suboptimal demonstrations (Kumar et al., 2022). Therefore, our proposed offline RL method no longer relies on a near-optimal expert policy, as previous neural solvers do, and is more flexible in the data collection process.

### 6 Conclusion

In this paper, we propose RCAC, a novel offline RL approach for neural branching in mixed integer linear programming. RCAC tackles the limitations of previous neural branching algorithms, namely their dependence on near-optimal human heuristics and the high cost of data collection. It outperforms previous IL-based and RL-based neural branching methods in both branching quality and training efficiency, for both exact and time-constrained solving. RCAC thus exhibits strong potential for generalizing neural MILP solvers to more challenging problems.
Future extensions of this work include (1) combining online and offline RL training, (2) accounting for the multitasking nature of MILP solving, and (3) generalizing RCAC to other heuristics in MILP solving, such as diving and large neighborhood search.

REFERENCES

Tobias Achterberg. SCIP: solving constraint integer programs. *Mathematical Programming Computation*, 1:1–41, 2009.

Tobias Achterberg, Thorsten Koch, and Alexander Martin. Branching rules revisited. *Operations Research Letters*, 33(1):42–54, 2005a.

Tobias Achterberg, Thorsten Koch, and Alexander Martin. Branching rules revisited. *Operations Research Letters*, 33(1):42–54, 2005b.

Alejandro Marcos Alvarez, Quentin Louveaux, and Louis Wehenkel. A machine learning-based approximation of strong branching. *INFORMS Journal on Computing*, 29(1):185–195, 2017.

Egon Balas and Andrew C. Ho. Set covering algorithms using cutting planes, heuristics, and subgradient optimization: A computational study. 1980.

Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: A methodological tour d'horizon. *European Journal of Operational Research*, 290(2):405–421, 2021.

David Bergman, Andre A. Cire, Willem-Jan van Hoeve, and John Hooker. *Decision Diagrams for Optimization*. Springer, 1st edition, 2016.

Dimitri Bertsekas and John Tsitsiklis. *Neuro-Dynamic Programming*. Athena Scientific, 1996.

Antonia Chmiela, Elias Boutros Khalil, Ambros M. Gleixner, Andrea Lodi, and Sebastian Pokutta. Learning to schedule heuristics in branch-and-bound. In *Neural Information Processing Systems*, 2021.

Sunil Chopra and Peter Meindl. Strategy, planning, and operation. *Supply Chain Management*, 15(5):71–85, 2001.

G. Cornuejols, R. Sridharan, and J.M. Thizy. A comparison of heuristics and relaxations for the capacitated plant location problem. *European Journal of Operational Research*, 50(3):280–297, 1991.

IBM ILOG CPLEX. V12.1: User's manual for CPLEX. *International Business Machines Corporation*, 46(53):157, 2009.

Paul Erdős and Alfréd Rényi. On the evolution of random graphs. *Transactions of the American Mathematical Society*, 286:257–257, 1984.

Andreas T. Ernst, Houyuan Jiang, Mohan Krishnamoorthy, and David Sier. Staff scheduling and rostering: A review of applications, methods and models. *European Journal of Operational Research*, 153(1):3–27, 2004.

Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. In *Thirty-Fifth Conference on Neural Information Processing Systems*, 2021.

Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *International Conference on Machine Learning*, pp.
2052–2062, 2019.

Gerald Gamrath, Timo Berthold, and Domenico Salvagnin. An exploratory computational analysis of dual degeneracy in mixed-integer programming. *EURO Journal on Computational Optimization*, 8:241–261, 2020.
ZA9XUTseA9
Similarly, in the experiment, why is the perturbed 1-norm close to 0 at convergence? It seems the authors are performing early stopping, but that precisely means that implicit regularization is not happening and that the model overfits.
On the Implicit Bias of Adam

Anonymous authors
Paper under double-blind review

Abstract

In previous literature, backward error analysis was used to find ordinary differential equations (ODEs) approximating the gradient descent trajectory. It was found that finite step sizes implicitly regularize solutions because terms appearing in the ODEs penalize the two-norm of the loss gradients. We prove that the existence of similar implicit regularization in RMSProp and Adam depends on their hyperparameters and the training stage, but with a different "norm" involved: the corresponding ODE terms either penalize the (perturbed) one-norm of the loss gradients or, on the contrary, hinder its decrease (the latter case being typical). We also conduct numerical experiments and discuss how the proven facts can influence generalization.

1 Introduction

Gradient descent (GD) can be seen as a numerical method solving the ordinary differential equation (ODE) $\dot{\theta} = -\nabla E(\theta)$, where $E(\cdot)$ is the loss function and $\nabla E(\theta)$ is its gradient. Starting at $\theta^{(0)}$, it creates a sequence of guesses $\theta^{(1)}, \theta^{(2)}, \ldots$, which lie close to the solution trajectory $\theta(t)$ governed by the aforementioned ODE. Since the step size $h$ is finite, one could search for a modified differential equation $\dot{\tilde{\theta}} = -\nabla \tilde{E}(\tilde{\theta})$ such that $\theta^{(n)} - \tilde{\theta}(nh)$ is exactly zero, or at least closer to zero than $\theta^{(n)} - \theta(nh)$; that is, all the guesses of the descent lie exactly on the new solution curve, or closer to it than to the original curve. This approach to analysing the properties of a numerical method is called backward error analysis in the numerical integration literature (see Chapter IX in Hairer et al. (2006)).

Barrett & Dherin (2021) first used this idea for full-batch GD and found that the modified loss function $\tilde{E}(\theta) = E(\theta) + (h/4)\|\nabla E(\theta)\|^2$ makes the trajectory of the solution to $\dot{\tilde{\theta}} = -\nabla \tilde{E}(\tilde{\theta})$ approximate the sequence $\{\theta^{(n)}\}_{n=0}^{\infty}$ one order of $h$ better than the original ODE, where $\|\cdot\|$ is the Euclidean norm. In related work, Miyagawa (2022) obtained the correction term for full-batch GD up to any chosen order, also studying the global error (uniform in the iteration number) as opposed to the local (one-step) error.

The analysis was later extended to mini-batch GD in Smith et al. (2021). Assume that the training set is split into batches of size $B$ and there are $m$ batches per epoch (so the training set size is $mB$); the cost function is rewritten as $E(\theta) = (1/m)\sum_{k=0}^{m-1} \hat{E}_k(\theta)$ with mini-batch costs denoted $\hat{E}_k(\theta) = (1/B)\sum_{j=kB+1}^{kB+B} E_j(\theta)$. It was obtained in that work that after one epoch, the mean iterate of the algorithm, averaged over all possible shuffles of the batch indices, is close to the solution to $\dot{\theta} = -\nabla \tilde{E}_{SGD}(\theta)$, where the modified loss is given by $\tilde{E}_{SGD}(\theta) = E(\theta) + h/(4m) \cdot \sum_{k=0}^{m-1} \|\nabla \hat{E}_k(\theta)\|^2$.

More recently, Ghosh et al. (2023) studied GD with heavy-ball momentum $\theta^{(n+1)} = \theta^{(n)} - h\nabla E(\theta^{(n)}) + \beta(\theta^{(n)} - \theta^{(n-1)})$, where $\beta$ is the momentum parameter.
In the full-batch setting, they proved that for $n$ large enough it is close to the continuous trajectory solving
$$\dot{\theta} = -(1 - \beta)^{-1}\nabla E(\theta) - h(1 + \beta)(1 - \beta)^{-3}\nabla \|\nabla E(\theta)\|^2/4.$$
Their main theorem also provides the analysis for the general mini-batch case. In another recent work, Zhao et al. (2022) introduce a regularization term $\lambda \cdot \|\nabla E(\theta)\|$ to the loss function as a way to ensure finding flatter minima, improving generalization. The only difference between their term and the first-order correction coming from backward error analysis (up to a coefficient) is that the norm is not squared and regularization is applied on a per-batch basis.

Using backward error analysis to approximate the discrete dynamics with a modified ODE for adaptive algorithms such as RMSProp (Tieleman et al., 2012) and Adam (Kingma & Ba, 2015) is currently missing in the literature. Barrett & Dherin (2021) note that "it would be interesting to use backward error analysis to calculate the modified loss and implicit regularization for other widely used optimizers such as momentum, Adam and RMSprop". Smith et al. (2021) reiterate that they "anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam". Ghosh et al. (2023) agree that "RMSProp ... and Adam ..., albeit being powerful alternatives to SGD with faster convergence rates, are far from well-understood in the aspect of implicit regularization". In a similar context, in Appendix G to Miyagawa (2022), it is mentioned that "its [Adam's] counter term and discretization error are open questions". This work fills the gap by conducting backward error analysis for (mini-batch, and full-batch as a special case) Adam and RMSProp. Our main contributions are listed below.

- In Theorem 3.1 we provide a global second-order in $h$ continuous ODE approximation to Adam in the general mini-batch setting. (A similar result for RMSProp is moved to the supplemental appendix.) For the full-batch special case, it was shown in prior work Ma et al. (2022) that the continuous-time limit of both these algorithms is a (perturbed by the numerical stability parameter $\varepsilon$) signGD flow $\dot{\theta} = -\nabla E(\theta)/(|\nabla E(\theta)| + \varepsilon)$ component-wise; we make this more precise by finding a linear in $h$ "bias" term on the right.
- We analyze the full-batch case in more detail: see the summary in Section 2. We find that the bias term does something different from penalizing the two-norm of the loss gradient as in the case of GD: it either penalizes the perturbed one-norm of the loss gradient, defined as $\|v\|_{1,\varepsilon} = \sum_{i=1}^{p} \sqrt{v_i^2 + \varepsilon}$, or, on the contrary, hinders its decrease (depending on the hyperparameters and the training stage). Example 2.1 provides a backward error analysis result for heavy-ball momentum GD (Ghosh et al., 2023) as a special case.
- We provide numerical evidence consistent with our results. In particular, we observe that penalizing the perturbed one-norm often appears to improve generalization, and hindering the norm's decrease does the opposite. The bias we identify typically acts as anti-regularization, which is a previously unidentified possible explanation for the often-reported poorer generalization of adaptive gradient algorithms compared to other methods.

**Related work**

**Backward error analysis of first-order methods.**
We provide the history of finding ODEs approximating different algorithms in the introduction above. Recently, there have been other applications of backward error analysis related to machine learning. Kunin et al. (2020) show that the approximating continuous-time trajectories satisfy conservation laws that are broken in discrete time. França et al. (2021) use backward error analysis while studying how to discretize continuous-time dynamical systems preserving stability and convergence rates. Rosca et al. (2021) find continuous-time approximations of discrete two-player differential games.

**Approximating gradient methods by differential equation trajectories.** Ma et al. (2022) prove that the trajectories of Adam and RMSProp are close to signGD dynamics and investigate different training regimes of these algorithms empirically. SGD is approximated by stochastic differential equations, and novel adaptive parameter adjustment policies are devised, in Li et al. (2017).

**Implicit bias of first-order methods.** Soudry et al. (2018) prove that GD trained to classify linearly separable data with logistic loss converges to the direction of the max-margin vector (the solution to the hard margin SVM). This result has been extended to different loss functions in Nacson et al. (2019b), to SGD in Nacson et al. (2019c), to more generic optimization methods in Gunasekar et al. (2018a), and to the nonseparable case in Ji & Telgarsky (2018b; 2019). This line of research has been generalized to studying the implicit biases of linear networks (Ji & Telgarsky, 2018a; Gunasekar et al., 2018b) and homogeneous neural networks (Ji & Telgarsky, 2020; Nacson et al., 2019a; Lyu & Li, 2019). Woodworth et al. (2020) study the gradient flow of a diagonal linear network with squared loss and show that large initializations lead to minimum two-norm solutions while small initializations lead to minimum one-norm solutions. Even et al. (2023) extend this work to the case of non-zero step sizes and mini-batch training. Wang et al. (2021) prove that Adam and RMSProp maximize the margin of homogeneous neural networks.

**Generalization of adaptive methods.** Cohen et al. (2022) investigate the edge-of-stability regime of adaptive gradient algorithms and the effect of sharpness (the largest eigenvalue of the hessian) on generalization; Granziol (2020) and Chen et al. (2021) observe that adaptive methods find sharper minima than SGD, and Zhou et al. (2020) and Xie et al. (2022) argue theoretically that this is the case. Jiang et al. (2022) introduce a statistic that measures the uniformity of the hessian diagonal and argue that adaptive gradient algorithms are biased towards making this statistic smaller. Keskar & Socher (2017) propose to improve the generalization of adaptive methods by switching to SGD in the middle of training.

**Notation** We denote the loss of the $k$th minibatch, as a function of the network parameters $\theta \in \mathbb{R}^p$, by $E_k(\theta)$; in the full-batch setting we omit the index and write $E(\theta)$. $\nabla E$ means the gradient of $E$, and $\nabla$ with indices denotes partial derivatives, e.g. $\nabla_{ijs}E$ is a shortcut for $\frac{\partial^3 E}{\partial \theta_i \partial \theta_j \partial \theta_s}$. The norm without indices $\|\cdot\|$ is the two-norm of a vector, $\|\cdot\|_1$ is the one-norm, and $\|\cdot\|_{1,\varepsilon}$ is the perturbed one-norm defined as $\|v\|_{1,\varepsilon} = \sum_{i=1}^{p} \sqrt{v_i^2 + \varepsilon}$.
(Of course, if $\varepsilon > 0$ the perturbed one-norm is not a norm, but $\varepsilon = 0$ makes it the one-norm.) To provide the names and notations for the hyperparameters, we define the algorithm below.

**Definition 1.1.** The Adam algorithm is an optimization algorithm with numerical stability hyperparameter $\varepsilon > 0$, squared gradient momentum hyperparameter $\rho \in (0,1)$, gradient momentum hyperparameter $\beta \in (0,1)$, initialization $\theta^{(0)} \in \mathbb{R}^p$, $\nu^{(0)} = 0 \in \mathbb{R}^p$, $m^{(0)} = 0 \in \mathbb{R}^p$, and the following update rule: for each $n \geq 0$, $j \in \{1, \ldots, p\}$
$$
\nu_j^{(n+1)} = \rho \nu_j^{(n)} + (1 - \rho) (\nabla_j E_n(\theta^{(n)}))^2, \quad m_j^{(n+1)} = \beta m_j^{(n)} + (1 - \beta) \nabla_j E_n(\theta^{(n)}), \\
\theta_j^{(n+1)} = \theta_j^{(n)} - h \left[ \nu_j^{(n+1)} / (1 - \rho^{n+1}) + \varepsilon \right]^{-1/2} \left[ m_j^{(n+1)} / (1 - \beta^{n+1}) \right].
$$

**Remark 1.2.** Note that the numerical stability hyperparameter $\varepsilon > 0$, which is introduced in these algorithms to avoid division by zero, is inside the square root in our definition. This way we avoid division by zero in the derivative too: the first derivative of $x \mapsto (\sqrt{x + \varepsilon})^{-1}$ is bounded for $x \geq 0$. This is useful for our analysis. In Theorems SA-2.4 and SA-4.4 in the appendix, the original versions of RMSProp and Adam are also tackled, though with an additional assumption requiring that no component of the gradient come very close to zero in the region of interest. This is true only for the initial period of learning (whereas Theorem 3.1 tackles the whole period). Practitioners do not seem to make a distinction between the version with $\varepsilon$ inside vs. outside the square root: tutorials with both versions abound on machine learning websites. Moreover, the popular Tensorflow variant of RMSProp has $\varepsilon$ inside the square root even though the documentation cites Kingma & Ba (2015), where $\varepsilon$ is outside. Empirically, we also observed that moving $\varepsilon$ inside or outside the square root does not change the behavior of Adam or RMSProp qualitatively.

https://github.com/keras-team/keras/blob/f9336cc5114b4a9429a242deb264b707379646b7/keras/optimizers/rmsprop.py#L190
https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/RMSprop

2 IMPLICIT BIAS OF FULL-BATCH ADAM: AN INFORMAL SUMMARY

We are ready to informally describe our theoretical result (in the full-batch special case). Assume $E(\theta)$ is the loss, whose partial derivatives up to the fourth order are bounded. Let $\{\theta^{(n)}\}$ be iterations of Adam as defined in Definition 1.1. We find an ODE whose solution trajectory $\tilde{\theta}(t)$ is $h^2$-close to $\{\theta^{(n)}\}$, meaning that for any time horizon $T > 0$ there is a constant $C$ such that for any step size $h \in (0, T)$ we have $\|\tilde{\theta}(nh) - \theta^{(n)}\| \leq Ch^2$ (for $n$ between 0 and $\lfloor T/h \rfloor$).
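Before stating the ODE, a minimal NumPy sketch of the update rule from Definition 1.1 (the variant with $\varepsilon$ inside the square root) may be helpful; `grad_fn` is a user-supplied gradient oracle, and the names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def adam_inside_sqrt(grad_fn, theta0, h, beta, rho, eps, n_steps):
    """Adam as in Definition 1.1: eps sits inside the square root."""
    theta = theta0.astype(float).copy()
    m = np.zeros_like(theta)   # first-moment estimate (gradient momentum)
    v = np.zeros_like(theta)   # second-moment estimate (squared-gradient momentum)
    for n in range(n_steps):
        g = grad_fn(theta)     # full-batch gradient at the current iterate
        v = rho * v + (1 - rho) * g**2
        m = beta * m + (1 - beta) * g
        v_hat = v / (1 - rho**(n + 1))    # bias correction, as in the definition
        m_hat = m / (1 - beta**(n + 1))
        theta -= h * m_hat / np.sqrt(v_hat + eps)
    return theta
```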
The ODE is written the following way (up to terms that rapidly go to zero as $n$ grows): for the component number $j \in \{1, \ldots, p\}$
$$\dot{\tilde{\theta}}_j(t) = -\left(|\nabla_j E(\tilde{\theta}(t))|^2 + \varepsilon\right)^{-1/2}\left(\nabla_j E(\tilde{\theta}(t)) + \text{bias}\right) \quad (3)$$
with initial conditions $\tilde{\theta}_j(0) = \theta_j^{(0)}$ for all $j$, where the bias term is
$$\text{bias} := \frac{h}{2}\left\{\frac{1 + \beta}{1 - \beta} - \frac{1 + \rho}{1 - \rho} + \frac{1 + \rho}{1 - \rho}\cdot\frac{\varepsilon}{|\nabla_j E(\tilde{\theta}(t))|^2 + \varepsilon}\right\}\nabla_j\|\nabla E(\tilde{\theta}(t))\|_{1,\varepsilon}. \quad (4)$$

Depending on the hyperparameters and the training stage, the bias term can take two extreme forms listed below; during most of the training, the reality is usually in between.

• If $\sqrt{\varepsilon}$ is small compared to all components of $\nabla E(\tilde{\theta}(t))$, i.e. $\min_j |\nabla_j E(\tilde{\theta}(t))| \gg \sqrt{\varepsilon}$, which is the case during the initial learning stage, then
$$\text{bias} = \frac{h}{2}\left\{\frac{1 + \beta}{1 - \beta} - \frac{1 + \rho}{1 - \rho}\right\}\nabla_j\|\nabla E(\tilde{\theta}(t))\|_{1,\varepsilon}. \quad (5)$$
For small $\varepsilon$, the perturbed one-norm is indistinguishable from the usual one-norm, and for $\beta > \rho$ it is penalized (in much the same way as the squared two-norm is implicitly penalized in the case of GD), but for $\rho > \beta$ its decrease is actually hindered by this term (so the bias is opposite to penalization). The ODE in (3) approximately becomes
$$\dot{\tilde{\theta}}_j(t) = -\frac{\nabla_j \tilde{E}(\tilde{\theta}(t))}{|\nabla_j E(\tilde{\theta}(t))|}, \quad \tilde{E}(\theta) = E(\theta) + \frac{h}{2}\left\{\frac{1 + \beta}{1 - \beta} - \frac{1 + \rho}{1 - \rho}\right\}\|\nabla E(\theta)\|_{1,\varepsilon}. \quad (6)$$

• If $\sqrt{\varepsilon}$ is large compared to all gradient components, i.e. $\max_j |\nabla_j E(\tilde{\theta}(t))| \ll \sqrt{\varepsilon}$, which may happen during the later learning stage, the fraction in (4) with $\varepsilon$ in the numerator approaches one, the dependence on $\rho$ cancels out, and
$$\|\nabla E(\tilde{\theta}(t))\|_{1,\varepsilon} \approx \sum_{i=1}^p \sqrt{\varepsilon}\left(1 + \frac{|\nabla_i E(\tilde{\theta}(t))|^2}{2\varepsilon}\right) = p\sqrt{\varepsilon} + \frac{1}{2\sqrt{\varepsilon}}\|\nabla E(\tilde{\theta}(t))\|^2. \quad (7)$$
In other words, $\|\cdot\|_{1,\varepsilon}$ becomes $\|\cdot\|^2/(2\sqrt{\varepsilon})$ up to an additive constant, giving
$$\text{bias} = \frac{h}{4\sqrt{\varepsilon}}\cdot\frac{1 + \beta}{1 - \beta}\,\nabla_j\|\nabla E(\tilde{\theta}(t))\|^2.$$
The form of the ODE in this case is
$$\dot{\tilde{\theta}}_j(t) = -\nabla_j \tilde{E}(\tilde{\theta}(t)), \quad \tilde{E}(\theta) = \frac{1}{\sqrt{\varepsilon}}\left(E(\theta) + \frac{h}{4\sqrt{\varepsilon}}\cdot\frac{1 + \beta}{1 - \beta}\|\nabla E(\theta)\|^2\right). \quad (8)$$

These two extreme cases are summarized in Table 1. In Figure 1 we use the one-dimensional ($p = 1$) case to illustrate what kind of term is being implicitly penalized. Since in practice $\varepsilon$ is usually small, during most of the training Adam is better described by the first extreme case. It is clear from (6) that, if $\rho > \beta$, this bias term does not provide the same kind of implicit regularization as the correction term in (1) does. In fact, it provides the opposite of regularization.
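As a quick sanity check on the sign of this effect, the snippet below evaluates the coefficient $(1+\beta)/(1-\beta) - (1+\rho)/(1-\rho)$ appearing in (6) at common hyperparameter choices; this is plain arithmetic on the formula above, with illustrative values.

```python
def bias_coefficient(beta, rho):
    """Coefficient multiplying the one-norm penalty in (6):
    positive -> the norm is penalized; negative -> its decrease is hindered."""
    return (1 + beta) / (1 - beta) - (1 + rho) / (1 - rho)

# Common Adam hyperparameters: beta = 0.9, rho = 0.999.
print(bias_coefficient(0.9, 0.999))   # 19.0 - 1999.0 = -1980.0  (anti-regularization)
# Reversing the ordering (beta > rho) flips the sign:
print(bias_coefficient(0.999, 0.9))   # 1999.0 - 19.0 = 1980.0   (penalization)
```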
This phenomenon may partially explain why adaptive gradient methods have been reported to generalize worse than non-adaptive ones (Cohen et al., 2022 and references therein), and it may be a previously unknown perspective on why they are biased towards higher-curvature regions and find "sharper" minima. Moreover, (6) suggests that decreasing $\rho$ and increasing $\beta$ moves the trajectory towards regions with lower "norm", which may improve the test error.

| | $\varepsilon$ "small" | $\varepsilon$ "large" |
|---|---|---|
| $\beta \geq \rho$ | $\|\nabla E(\theta)\|_1$ penalized | $\|\nabla E(\theta)\|^2$ penalized |
| $\rho > \beta$ | $-\|\nabla E(\theta)\|_1$ "penalized" | $\|\nabla E(\theta)\|^2$ penalized |

Table 1: Implicit bias of Adam: special cases. "Small" and "large" are in relation to the squared gradient components (Adam in the latter case is close to GD with momentum).

Figure 1: The graphs of $x \mapsto \int_0^x \left\{ \frac{1+\beta}{1-\beta} - \frac{1+\rho}{1-\rho} + \frac{1+\rho}{1-\rho} \cdot \frac{\varepsilon}{y^2 + \varepsilon} \right\} d\sqrt{\varepsilon + y^2}$ with $\beta = 0.95$.

We stress that the link between (anti-)penalizing the one-norm and the sharpness of minima is speculative, and even the connection between sharpness and generalization is not clear-cut (Andriushchenko et al., 2023). This overview also applies to RMSProp by setting $\beta = 0$. See Theorem SA-3.4 in the appendix for the formal result.

**Example 2.1** (Backward error analysis for GD with heavy-ball momentum). Assume $\varepsilon$ is large compared to all squared gradient components during the whole training process, so that the form of the ODE is approximated by (8). Since Adam with a large $\varepsilon$ and after a certain number of iterations approximates SGD with heavy-ball momentum with step size $h(1 - \beta)/\sqrt{\varepsilon}$, a linear step size change (and the corresponding time change) gives exactly the equations in Theorem 4.1 of Ghosh et al. (2023). Taking $\beta = 0$ (no momentum), we recover the implicit regularization of GD from Barrett & Dherin (2021).

3 Main result: ODE approximating mini-batch Adam

We only make one assumption, which is standard in the literature: the loss $E_k$ for each mini-batch is four times continuously differentiable, and the partial derivatives of $E_k$ up to order 4 are bounded, i.e. there is a positive constant $M$ such that for $\theta$ in the region of interest
$$\sup_k \left\{ \sup_i |\nabla_i E_k(\theta)| \lor \sup_{i,j} |\nabla_{ij} E_k(\theta)| \lor \sup_{i,j,s} |\nabla_{ijs} E_k(\theta)| \lor \sup_{i,j,s,r} |\nabla_{ijsr} E_k(\theta)| \right\} \leq M. \quad (9)$$

**Theorem 3.1.** Assume (9) holds. Let $\{\theta^{(n)}\}$ be iterations of Adam as defined in Definition 1.1.
Let $\tilde{\theta}(t)$ be the continuous solution to the piecewise ODE
$$\dot{\tilde{\theta}}_j(t) = -\frac{M_j^{(n)}(\tilde{\theta}(t))}{R_j^{(n)}(\tilde{\theta}(t))} + h \left( \frac{M_j^{(n)}(\tilde{\theta}(t))\bigl(2P_j^{(n)}(\tilde{\theta}(t)) + \bar{P}_j^{(n)}(\tilde{\theta}(t))\bigr)}{2R_j^{(n)}(\tilde{\theta}(t))^3} - \frac{2L_j^{(n)}(\tilde{\theta}(t)) + \bar{L}_j^{(n)}(\tilde{\theta}(t))}{2R_j^{(n)}(\tilde{\theta}(t))} \right) \quad (10)$$
for $t \in [nh, (n+1)h]$ with the initial condition $\tilde{\theta}(0) = \theta^{(0)}$, where
\[
R_j^{(n)}(\theta) := \left( (1 - \rho^{n+1})^{-1} \sum_{k=0}^n \rho^{n-k} (1 - \rho) (\nabla_j E_k(\theta))^2 + \varepsilon \right)^{1/2},
\]
\[
M_j^{(n)}(\theta) := (1 - \beta^{n+1})^{-1} \sum_{k=0}^n \beta^{n-k} (1 - \beta) \nabla_j E_k(\theta),
\]
\[
L_j^{(n)}(\theta) := (1 - \beta^{n+1})^{-1} \sum_{k=0}^n \beta^{n-k} (1 - \beta) \sum_{i=1}^p \nabla_{ji} E_k(\theta) \sum_{l=k}^{n-1} M_i^{(l)}(\theta)/R_i^{(l)}(\theta),
\]
\[
\bar{L}_j^{(n)}(\theta) := (1 - \beta^{n+1})^{-1} \sum_{k=0}^n \beta^{n-k} (1 - \beta) \sum_{i=1}^p \nabla_{ji} E_k(\theta) M_i^{(n)}(\theta)/R_i^{(n)}(\theta),
\]
\[
P_j^{(n)}(\theta) := (1 - \rho^{n+1})^{-1} \sum_{k=0}^n \rho^{n-k} (1 - \rho) \nabla_j E_k(\theta) \sum_{i=1}^p \nabla_{ji} E_k(\theta) \sum_{l=k}^{n-1} M_i^{(l)}(\theta)/R_i^{(l)}(\theta),
\]
\[
\bar{P}_j^{(n)}(\theta) := (1 - \rho^{n+1})^{-1} \sum_{k=0}^n \rho^{n-k} (1 - \rho) \nabla_j E_k(\theta) \sum_{i=1}^p \nabla_{ji} E_k(\theta) M_i^{(n)}(\theta)/R_i^{(n)}(\theta).
\]
Then, for any fixed positive time horizon $T > 0$ there exists a constant $C$ such that for any step size $h \in (0, T)$ we have $\| \tilde{\theta}(nh) - \theta^{(n)} \| \leq Ch^2$ for $n \in \{0, \ldots, \lfloor T/h \rfloor \}$.

The proof is in the appendix (this is Theorem SA-5.4; see SA-1 for the overview of the contents). To help the reader understand the argument, apart from the full proof, we include an informal derivation in Section SA-9 of the appendix, and we provide an even briefer sketch of this derivation here. Our goal is to find a sequence $\tilde{\theta}(t_n)$, where $t_n := nh$, such that $\tilde{\theta}(t_{n+1}) = \tilde{\theta}(t_n) - hT_\beta T_\rho^{-1/2} + O(h^3)$, denoting $T_\beta := (1 - \beta^{n+1})^{-1} \sum_{k=0}^n \beta^{n-k} (1 - \beta) \nabla_j E_k(\tilde{\theta}(t_k))$ and $T_\rho := (1 - \rho^{n+1})^{-1} \sum_{k=0}^n \rho^{n-k} (1 - \rho) (\nabla_j E_k(\tilde{\theta}(t_k)))^2 + \varepsilon$. Ignoring the terms of order higher than one, we can take a first-order approximation for granted: $\tilde{\theta}(t_{n+1}) = \tilde{\theta}(t_n) - hA(\tilde{\theta}(t_n)) + O(h^2)$ with $A(\theta) := M_j^{(n)}(\theta)/R_j^{(n)}(\theta)$. The challenge is to make this more precise by finding an equality of the form $\tilde{\theta}(t_{n+1}) = \tilde{\theta}(t_n) - hA(\tilde{\theta}(t_n)) + h^2 B(\tilde{\theta}(t_n)) + O(h^3)$, where $B(\cdot)$ is a known function, because this is a numerical iteration to which standard backward error analysis (Chapter IX in Hairer et al. (2006)) can be applied.
Using the Taylor series, we can write
\[
\nabla_j E_k(\tilde{\theta}(t_{n-1})) = \nabla_j E_k(\tilde{\theta}(t_n)) + \sum_{i=1}^p \nabla_{ji} E_k(\tilde{\theta}(t_n)) \{ \tilde{\theta}_i(t_{n-1}) - \tilde{\theta}_i(t_n) \} + O(h^2)
\]
\[
= \nabla_j E_k(\tilde{\theta}(t_n)) + h \sum_{i=1}^p \nabla_{ji} E_k(\tilde{\theta}(t_n)) M_i^{(n-1)}(\tilde{\theta}(t_n))/R_i^{(n-1)}(\tilde{\theta}(t_n)) + O(h^2),
\]
where in the last equality we just replaced $t_{n-1}$ with $t_n$ in the $h$-term since it only affects higher-order terms. Doing this again for steps $n-1, n-2$ and so on, and adding the resulting equations, gives for $k < n$
\[
\nabla_j E_k(\tilde{\theta}(t_k)) = \nabla_j E_k(\tilde{\theta}(t_n)) + h \sum_{i=1}^p \nabla_{ji} E_k(\tilde{\theta}(t_n)) \sum_{l=k}^{n-1} M_i^{(l)}(\tilde{\theta}(t_n))/R_i^{(l)}(\tilde{\theta}(t_n)) + O(h^2),
\]
where we can ignore that $n-k$ is not bounded because of the exponential averaging. Taking the square of this formal power series (in $h$), summing over $k$, and using the expression for the inverse square root of a formal power series $\sum_{r=0}^\infty a_r h^r$, gives us an expansion of $T_\rho^{-1/2}$, and a similar process provides an expansion for $T_\beta$. Combining them leads to an expression for $B(\cdot)$.

**Remark 3.2.** In the full-batch setting $E_k \equiv E$, the terms in Theorem 3.1 simplify to
\[
R_j^{(n)}(\theta) = (|\nabla_j E(\theta)|^2 + \varepsilon)^{1/2}, \quad M_j^{(n)}(\theta) = \nabla_j E(\theta),
\]
\[
L_j^{(n)}(\theta) = \left[ \frac{\beta}{1 - \beta} - \frac{(n+1)\beta^{n+1}}{1 - \beta^{n+1}} \right] \bar{L}_j^{(n)}(\theta), \quad \bar{L}_j^{(n)}(\theta) = \nabla_j \|\nabla E(\theta)\|_{1,\varepsilon},
\]
\[
P_j^{(n)}(\theta) = \left[ \frac{\rho}{1 - \rho} - \frac{(n+1)\rho^{n+1}}{1 - \rho^{n+1}} \right] \bar{P}_j^{(n)}(\theta), \quad \bar{P}_j^{(n)}(\theta) = \nabla_j E(\theta) \nabla_j \|\nabla E(\theta)\|_{1,\varepsilon}.
\]
If the iteration number $n$ is large, (10) rapidly becomes as described in (3) and (4).

4 ILLUSTRATION: SIMPLE BILINEAR MODEL

We now analyze the effect of the first-order term for Adam in the same model that Barrett & Dherin (2021) and Ghosh et al. (2023) studied. Namely, assume the parameter $\theta = (\theta_1, \theta_2)$ is 2-dimensional, and the loss is given by $E(\theta) := \frac{1}{2}(3/2 - 2\theta_1\theta_2)^2$. The loss is minimized on the hyperbola $\theta_1\theta_2 = 3/4$. We graph the trajectories of Adam in this case: Figure 2 shows that increasing $\beta$ forces the trajectory to the region with smaller $\|\nabla E(\theta)\|_1$, and increasing $\rho$ does the opposite. Figure 3 shows that increasing the learning rate moves Adam towards the region with smaller $\|\nabla E(\theta)\|_1$ if $\beta > \rho$ (just like in the case of GD, except the norm is different if $\varepsilon$ is small compared to the gradient components), and does the opposite if $\rho > \beta$. All these observations are exactly what Theorem 3.1 predicts.

Figure 2: Increasing $\beta$ moves the trajectory of Adam towards the regions with smaller one-norm of the gradient (if $\varepsilon$ is sufficiently small); increasing $\rho$ does the opposite. The cross denotes the limit point of gradient one-norm minimizers on the level sets $4\theta_1\theta_2 - 3 = c$. All Adam trajectories start at (2.8, 3.5).

Figure 3: The setting is the same as in Figure 2.
Increasing the learning rate moves the Adam trajectory towards the regions with smaller one-norm of the gradient if $\beta$ is significantly larger than $\rho$, and does the opposite if $\rho$ is larger than $\beta$.

5 Numerical experiments

We offer some preliminary empirical evidence of how the bias term shows up in deep neural networks. Ma et al. (2022) divide the training regimes of Adam into three categories: the spike regime when $\rho$ is much larger than $\beta$, in which the training loss curve contains very large spikes and training is obviously unstable; the (stable) oscillation regime when $\rho$ is sufficiently close to $\beta$, in which the loss curve contains fast and small oscillations; and the divergence regime when $\beta$ is much larger than $\rho$, in which Adam diverges. We exclude the last regime. In the spike regime, the loss spikes to large values at irregular intervals. This has been observed in the context of large transformers, and mitigation strategies have been proposed in Chowdhery et al. (2022) and Molybog et al. (2023). Since it is unlikely that an unstable Adam trajectory can be meaningfully approximated by a smooth ODE solution, we exclude the spike regime as well and only consider the oscillation regime, which Ma et al. (2022) recommend using in practice. We do this by making $\beta$ and $\rho$ not too far apart, because for clean experiments we do not use any explicit regularization, learning rate decay, or stochastic batching, and decreasing $h$ increases training time and weakens the bias we identify.

We train Resnet-50 on the CIFAR-10 dataset with full-batch Adam. Figure 4 shows that in the stable oscillation regime, increasing $\rho$ seems to increase the perturbed one-norm (consistent with our analysis: the smaller $\rho$, the more this "norm" is penalized) and decrease the test accuracy. The opposite of the latter was noticed in Cohen et al. (2022), which we believe happens in the spike regime (see above). Figure 5 shows that increasing $\beta$ seems to decrease the perturbed one-norm (consistent with our analysis: the larger $\beta$, the more this norm is penalized) and increase the test accuracy. This picture confirms the finding in Ghosh et al. (2023) (for the momentum parameter in momentum GD).

Figure 4: Resnet-50 on CIFAR-10 trained with full-batch Adam, $\varepsilon = 10^{-8}$, $\beta = 0.99$. As $\rho$ increases, the norm seems to rise and the test accuracy seems to fall (in the stable regime of training). The test accuracies plotted here are maximal after more than 3600 epochs. The perturbed norms are also maximal after excluding the initial training period (i.e., the plotted "norms" are at the peaks of the "hills" described in Section 5). Additional evidence and more details are provided in Section SA-8 of the Appendix.

We obtain a more detailed picture of the perturbed norm's behavior by training Resnet-101 on CIFAR-10 and CIFAR-100 with full-batch Adam. Figure 6 shows the graphs of $\|\nabla E\|_{1,\varepsilon}$ as functions of the epoch number. The "norm" decreases, then rises again, and then decreases further until it flatlines. Throughout most of the training, the larger $\beta$, the smaller the "norm". The "hills" of the "norm" curves are higher with smaller $\beta$ and larger $\rho$. This is consistent with our analysis, because the larger $\rho$ is compared to $\beta$, the more $\|\nabla E\|_{1,\varepsilon}$ is prevented from falling by the bias term.
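The perturbed one-norm tracked in these experiments can be computed directly from full-batch gradients. Below is a minimal PyTorch-style sketch; the helper name and the per-epoch usage are illustrative assumptions, not the exact instrumentation behind the figures.

```python
import torch

def perturbed_one_norm(model, loss, eps=1e-8):
    """Compute ||grad E||_{1,eps} = sum_i sqrt(g_i^2 + eps) over all parameters.

    `loss` must be the full-batch loss with an intact autograd graph.
    The result is bounded below by (number of parameters) * sqrt(eps),
    so it cannot be near zero even at an interpolating solution.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    total = 0.0
    for g in grads:
        total += torch.sqrt(g.pow(2) + eps).sum()
    return float(total)
```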
Note that the perturbed one-norm cannot be near zero at the end of training because it is bounded from below by $p\sqrt{\varepsilon}$.

Figure 5: Resnet-50 on CIFAR-10 trained with full-batch Adam, $\rho = 0.999$, $\varepsilon = 10^{-8}$. The perturbed one-norm seems to fall as $\beta$ increases, and the test accuracy seems to rise. Both metrics are calculated as in Figure 4.

Figure 6: Plots of $\|\nabla E\|_{1,\varepsilon}$ after each epoch for full-batch Adam, $h = 10^{-4}$, $\varepsilon = 10^{-8}$. Left: Resnet-101 on CIFAR-10, $\rho = 0.999$. Right: Resnet-101 on CIFAR-100, $\beta = 0.97$.

6 FUTURE DIRECTIONS

As far as we know, an assumption similar to (9) is explicitly or implicitly present in all previous work on backward error analysis of gradient-based machine learning algorithms. There is evidence that large-batch algorithms often operate at the edge of stability (Cohen et al., 2021; 2022), in which the largest eigenvalue of the hessian can be large, making it unclear whether the higher-order partial derivatives can safely be assumed bounded near optimality. However, as Smith et al. (2021) point out, backward error analysis can be more accurate in the mini-batch setting. We leave a qualitative analysis of the behavior of the first-order terms in Theorem 3.1 in the mini-batch case as a future direction. Relatedly, Adam is known to not always generalize worse than SGD: for transformers, Adam often outperforms it (Zhang et al., 2020; Kumar et al., 2022). Moreover, for NLP tasks we may spend a long time training close to an interpolating solution. Though our analysis suggests that in the latter regime the anti-regularization effect disappears, more work is needed to connect the implicit bias to the training dynamics of transformers.

Also, the constant $C$ in Theorem 3.1 goes to infinity as $\varepsilon$ goes to zero. Theoretically, our proof does not exclude the case where, for very small $\varepsilon$, the trajectory of the piecewise ODE is only close to the Adam trajectory for small, suboptimal learning rates, at least at later stages of learning. (For the initial learning period, this is not a problem.) The same appears to be true of Proposition 1 in Ma et al. (2022) (the zeroth-order approximation by signGD). This is especially noticeable in the large-spike regime of training (see Section 5), which, despite being obviously unstable, can still lead to acceptable test errors. It would be interesting to investigate this regime in detail.

REPRODUCIBILITY STATEMENT

Detailed proofs of the theoretical claims in the paper are available in the appendix and are referenced in the main text. All hyperparameters used for the experiments are given either in the figure captions or in the figures illustrating our empirical results, and details about our model architectures and training approaches are available in the appendix (Section SA-8).

REFERENCES

Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, and Nicolas Flammarion. A modern look at the relationship between sharpness and generalization. In *Proceedings of the 40th International Conference on Machine Learning*, ICML'23. JMLR.org, 2023.

David Barrett and Benoit Dherin. Implicit gradient regularization. In *International Conference on Learning Representations*, 2021.

Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pre-training or strong data augmentations. *arXiv preprint arXiv:2106.01548*, 2021.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022.

Jeremy Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. In *International Conference on Learning Representations*, 2021.

Jeremy M. Cohen, Behrooz Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, David Cardoze, Zachary Nado, George E. Dahl, et al. Adaptive gradient methods at the edge of stability. *arXiv preprint arXiv:2207.14484*, 2022.

Mathieu Even, Scott Pesme, Suriya Gunasekar, and Nicolas Flammarion. (S)GD over diagonal linear networks: Implicit regularisation, large stepsizes and edge of stability. *arXiv preprint arXiv:2302.08982*, 2023.

Guilherme França, Michael I. Jordan, and René Vidal. On dissipative symplectic integration with applications to gradient-based optimization. *Journal of Statistical Mechanics: Theory and Experiment*, 2021(4):043402, 2021.

Avrajit Ghosh, He Lyu, Xitong Zhang, and Rongrong Wang. Implicit regularization in heavy-ball momentum accelerated stochastic gradient descent. In *The Eleventh International Conference on Learning Representations*, 2023.

Diego Granziol. Flatness is a false friend. *arXiv preprint arXiv:2006.09091*, 2020.

Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in terms of optimization geometry. In *International Conference on Machine Learning*, pp. 1832–1841. PMLR, 2018a.

Suriya Gunasekar, Jason D. Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. *Advances in Neural Information Processing Systems*, 31, 2018b.

Ernst Hairer, Christian Lubich, and Gerhard Wanner. *Geometric Numerical Integration*. Springer-Verlag, Berlin, 2nd edition, 2006.
x1ptaXpOYa
Could you provide a more detailed breakdown of the types and sources of documents included in the ADoPD dataset? Understanding the diversity in terms of document genres, geographical origins, and linguistic variations would offer more insight into its applicability and robustness.
ADOPD: A LARGE-SCALE DOCUMENT PAGE DECOMPOSITION DATASET

Jiuxiang Gu¹* Xiangxi Shi² Jason Kuen¹ Lu Qi³ Ruiyi Zhang¹ Anqi Liu⁴ Ani Nenkova¹ Tong Sun¹
¹Adobe Research ²Oregon State University ³UC Merced ⁴Johns Hopkins University

Figure 1: Overview of the ADOPD dataset showcasing densely annotated images of various document types and layouts. Each column presents the original image alongside visual entity masks and annotations of text bounding boxes, organized from top to bottom.

ABSTRACT

Research in document image understanding is hindered by limited high-quality document data. To address this, we introduce ADOPD, a comprehensive dataset for document page decomposition. ADOPD stands out with its data-driven approach for document taxonomy discovery during data collection, complemented by dense annotations. Our approach integrates large-scale pretrained models with a human-in-the-loop process to guarantee diversity and balance in the resulting data collection. Leveraging our data-driven document taxonomy, we collect and densely annotate document images, addressing four document image understanding tasks: Doc2Mask, Doc2Box, Doc2Tag, and Doc2Seq. Specifically, for each image, the annotations include human-labeled entity masks, text bounding boxes, as well as automatically generated tags and captions that have been manually cleaned. We conduct comprehensive experimental analyses to validate our data and assess the four tasks using various models. We envision ADOPD as a foundational dataset with the potential to drive future research in document understanding.

*Correspondence to: jigu@adobe.com

1 INTRODUCTION

Document understanding has been invigorated by the introduction of large-scale document datasets (Zhong et al., 2019; Mondal et al., 2020; Cheng et al., 2023), supporting a variety of document-related tasks (Mathew et al., 2021; Mathur et al., 2023). However, document datasets still fall short compared to data resources in more established fields (Gu et al., 2018), in which advances have been so great that models and solutions can be incorporated into real-world applications. A case in point is the field of image decomposition, where progress was fueled by datasets like MSCOCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2010). Building a document page decomposition dataset of comparable quality is essential to advance document understanding research.

Project page: https://adopd2024.github.io

We construct ADOPD by addressing two important questions: (1) How do we gather document data, and what types of documents should be included in the dataset? Table 1 compares ADOPD with earlier datasets for document layout analysis (Mondal et al., 2020; Smock et al., 2022; Landeghem et al., 2023; Saad et al., 2016). Most datasets are sourced from PDFs and cover limited document types. Models trained on such homogeneous data are unlikely to perform well on different types of documents, so a top priority when collecting ADOPD is to maximize the diversity of document types in it. (2) What elements should be annotated in document images for page decomposition? Documents, with their varied forms, can be interpreted differently based on an individual's background. Document understanding encompasses intricacies such as visuals, text, and layout.
For instance, a poster containing a form may visually seem like a form, yet its text could classify it as a science or education book. The complex nature of document data poses challenges for structuring it hierarchically, a critical aspect of successful vision datasets like ImageNet and MSCOCO. Meanwhile, accurately describing the content of documents is highly valuable, but it is also more challenging than natural image captioning.

We explore the fundamental question: How can we obtain a reasonable taxonomy of document types? Pre-defining a fixed taxonomy solely based on human knowledge is not practical. Instead, we assume an open taxonomy and make use of a data-driven taxonomy discovery method, gradually assembling the taxonomy through large-scale data exploration. Relying solely on manual annotation of document types, which requires reading and understanding the document content, is also not practical. Therefore, we leverage the powerful zero-shot capabilities of large pretrained models such as CLIP (Radford et al., 2021) and Large Language Models (LLMs) (Floridi & Chiriatti, 2020) to assist in data selection and analysis. We couple the language model with methods for out-of-distribution (OOD) detection (Gu et al., 2023) for outlier data selection, complemented by a human-in-the-loop (HITL) approach to achieve data diversity. Each ingredient in our proposed approach (LLM, OOD, and HITL) is imperfect, but together they support the selection and annotation of diverse data at scale, within a reasonable budget.

Fig. 1 illustrates the diverse document types in ADOPD, comprising both visually and textually rich documents; this diversity is an advantage, but it also poses an annotation challenge. For visually rich documents like posters and diagrams, entity masks capture relationships between visual elements effectively. Conversely, for text-rich documents such as letters and articles, text bounding boxes are more suitable for marking key textual elements. To accommodate both types, we segment each document into entity masks and text regions, and provide two types of descriptive labels for each document image. Fig. 2 showcases the four document page tasks: entity segmentation (Doc2Mask), text detection (Doc2Box), tagging (Doc2Tag), and captioning (Doc2Seq). In sum, ADOPD is a large-scale, diverse document page decomposition and understanding dataset, designed to support future research in the document domain. In this paper, we:

- present ADOPD, a comprehensive dataset for document page decomposition, encompassing four distinct tasks: Doc2Mask, Doc2Box, Doc2Seq, and Doc2Tag.
- propose a data-driven approach for constructing document taxonomies during data collection, and safeguard ADOPD through outlier detection and human-in-the-loop review.
- conduct extensive experiments and analysis on ADOPD, demonstrating its effectiveness and generalization capabilities for document understanding research.

| Dataset | Year | Size | Anno (Type) | Category |
|---|---|---|---|---|
| PubLayNet | 2019 | 360K | Bbox | (1) |
| DocBank | 2020 | 500K | Bbox | (1) |
| IIIT-AR-13K | 2020 | 13K | Bbox | (1) |
| DocLayNet | 2022 | 80.9K | Bbox | (6) |
| M²Doc | 2023 | 9.1K | Bbox | (7) |
| ADOPD (Ours) | 2024 | 120K | Polygon, Text Bbox, Caption, Tag | (>1000) |

Table 1: Comparison of document datasets. Symbols in the table indicate whether annotations are automatic, human-labeled, or LLM-assisted, and whether each dataset's source documents are digital PDFs or document images.
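As a toy illustration of the zero-shot labeling step used for data selection, the sketch below scores a document image against a small candidate label set with an off-the-shelf CLIP model via the Hugging Face `transformers` API; the checkpoint, prompts, and label set are illustrative assumptions, not the exact setup used to build ADOPD.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A tiny stand-in for the discovered document taxonomy (hypothetical labels).
labels = ["poster", "form", "letter", "scientific article", "menu", "diagram"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("document.png")  # a document image to be categorized
inputs = processor(text=[f"a photo of a {label}" for label in labels],
                   images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    # logits_per_image has shape (1, num_labels); softmax gives label scores.
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in sorted(zip(labels, probs.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {p:.3f}")
```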
2 RELATED WORK

Document Datasets. As shown in Table 1, several recent document image datasets have been introduced. PubLayNet (Zhong et al., 2019) comprises images and annotations generated through the automated alignment of PDFs with XML formats. DocBank (Li et al., 2020b) is created using LaTeX-generated PDF files and employs an efficient weakly supervised approach for annotation. DocLayNet (Pfitzmann et al., 2022) relies on human annotation rather than automated methods; it covers six distinct document types with a total of 11 annotation categories. M⁶Doc (Cheng et al., 2023) is a recently introduced dataset featuring approximately 9k modern document images, divided into seven subsets, with detailed annotations spanning multiple distinct categories. IIIT-AR-13K (Mondal et al., 2020) is tailored for object detection in business documents like annual reports, containing annotated pages with standard layout elements such as text, headings, lists, graphics, and tables. In summary, existing large-scale document image datasets mainly focus on PDFs, unlike the varied scanned or photographed images encountered in real-world scenarios. This limited data distribution can bias trained models. Additionally, publicly available datasets often cover only a narrow range of document layouts and categories.

Document Models. The document domain has witnessed the emergence of foundational models (Li et al., 2020a; Prasad et al., 2020), driven by advancements in deep learning. Despite rapid progress in document understanding models, the scarcity of powerful models trained on high-quality, large-scale document data remains a significant challenge. Earlier document layout analysis methods (Ouwayed & Belaid, 2012; Lee et al., 2019) relied heavily on rule-based and heuristic algorithms; their applicability was limited to simple document types, resulting in poor generalization. In addition to task-driven models, researchers have proposed a range of document pretraining models (Huang et al., 2022; Li et al., 2021; Gu et al., 2021; Tang et al., 2023; Kim et al., 2022). These models are typically pretrained on the IIT-CDIP (Lewis et al., 2006) dataset and evaluated on various document benchmarks. Despite the remarkable performance of these models on benchmark datasets, it is critical to acknowledge that most current image-based document datasets are predominantly composed of a narrow range of document types, failing to capture the heterogeneity of real-world documents. Moreover, the restricted data diversity in these benchmarks constrains the development and evaluation of document models.

3 ADOPD DATASET

Figure 3: Model-assisted data collection and annotation pipeline for ADOPD.

ADOPD stands out among document datasets as it is constructed from diverse document images found on the web. Sec. 3.1 introduces the document page decomposition tasks. Sec. 3.2 presents a data-driven approach to discovering a document taxonomy for data collection and analysis. Sec. 3.3 employs models to assist with human annotation, addressing challenges posed by diverse data.

3.1 TASK DEFINITION

Fig. 2 illustrates the document page decomposition task defined in this paper, which encompasses four subtasks: Doc2Mask, Doc2Box, Doc2Seq, and Doc2Tag.

• The Doc2Mask task entails segmenting visual entities in document images in a class-agnostic manner. The "entity" in this context denotes a thing (instance) mask or a stuff mask. For example, in Fig.
1, an entity represents a meaningful and coherent region (e.g., banner, figure, logo, etc.).
• The Doc2Box task calls for identifying text regions-of-interest (RoIs) within a document image, regardless of their specific types. The term "box" refers to a text RoI (e.g., paragraphs, titles, etc.).
• The Doc2Seq task involves generating captions for document images, requiring the model to analyze visual elements and structured text. Given the complexity of document images, the model must effectively comprehend visual, textual, and layout information to produce detailed captions.
• The Doc2Tag task is akin to image tagging, specifically multi-label image recognition, where the objective is to assign multiple semantic labels to an image. In Doc2Tag, two levels of tagging are utilized: one based on the overall image content and another on specific local regions.

3.2 DATA-DRIVEN DOCUMENT TAXONOMY DISCOVERY

In a standard classification scenario, we deal with a given dataset denoted as \( D_{\text{full}} \), where \( X \) represents the input space and \( Y = \{1, \ldots, K\} \) is the label space. The classification model, denoted as \( f := g \circ h \), consists of a feature extractor \( h : X \rightarrow \mathbb{R}^d \) and a classifier \( g : \mathbb{R}^d \rightarrow \mathbb{R}^K \), which maps the input's feature embedding to \( K \) real-valued numbers called logits. In practice, establishing a guiding taxonomy associated with \( K \) is crucial for effective data collection, enabling us to manage and assess the diversity of the collected data. However, determining an appropriate value for \( K \) is challenging due to the diversity of documents. We draw inspiration from pretrained models such as CLIP and GPT-4, which have been trained on large-scale datasets and can serve as knowledgeable "experts" for data selection. Despite the benefits of pretrained models, their predictions are not always reliable; e.g., LLMs tend to suffer from hallucination problems (Bang et al., 2023). Hence, incorporating safeguards into data collection is essential. Fig. 3 provides an overview of our data collection process, which is detailed in the subsequent sections.

Can Large-Scale Pretrained Models Facilitate Data Collection? Given a document image \( x \sim D_{\text{full}} \), we can extract document information using pre-existing models as follows:
\[
\{z, S_{\text{OCR}}, S_{\text{Caption}}, S_{\text{Attribute}}, S_{\text{Label}}\} = \{h(x), f_{\text{OCR}}(x), f_{\text{IT}}(x), f_{\text{Tag}}(x), f_{\text{CLIP}}(x|Y)\} \tag{1}
\]
\[
\{S_{\text{Caption}}^*, S_{\text{Tag}}^*\} = \text{LLM}(S_{\text{OCR}}, S_{\text{Caption}}, S_{\text{Attribute}}, S_{\text{Label}} \mid \text{Prompt}) \tag{2}
\]
where \( z \in \mathbb{R}^D \) is obtained through an image feature extractor \( h(\cdot) \). The sequence \( S_{\text{OCR}} \) consists of words and their coordinates, extracted by the OCR tool \( f_{\text{OCR}}(\cdot) \). The caption \( S_{\text{Caption}} \) is generated by the captioning model \( f_{\text{IT}}(\cdot) \). Tags \( S_{\text{Attribute}} \) are produced by the image tagging model \( f_{\text{Tag}}(\cdot) \). Labels \( S_{\text{Label}} \) are generated by the CLIP model \( f_{\text{CLIP}}(\cdot|Y) \), constrained by \( Y \). Integrating multimodal information, as expressed in Eq. 1, for document reasoning poses a significant challenge.
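To make the data flow of Eqs. 1 and 2 concrete, here is a minimal sketch of how the per-expert signals could be gathered and then fused by an LLM. Every function in it is a hypothetical stand-in for the corresponding pretrained component (\( f_{\text{OCR}} \), \( f_{\text{IT}} \), \( f_{\text{Tag}} \), \( f_{\text{CLIP}} \), and the LLM); it illustrates the structure of the pipeline, not ADOPD's actual implementation.

```python
"""Illustrative sketch of Eqs. 1-2. All functions are hypothetical
stand-ins for the pretrained "experts" named in the text."""

def run_ocr(image):                 # stands in for f_OCR
    return [("invoice", (10, 10, 80, 24))]          # (word, box) pairs

def run_captioner(image):           # stands in for f_IT
    return "a scanned invoice with a table of line items"

def run_tagger(image):              # stands in for f_Tag
    return ["table", "logo"]

def clip_zero_shot(image, taxonomy):  # stands in for f_CLIP(. | Y)
    return [taxonomy[0]]

def fuse_with_llm(ocr, caption, attributes, labels):
    # A real system would build one of the four prompts (visual / textual /
    # layout / multimodal) and query GPT-4; here we simply merge the signals.
    tags = sorted(set(attributes) | set(labels))
    return caption, tags            # (S*_Caption, S*_Tag)

def extract_and_refine(image, taxonomy):
    """Eq. 1 gathers independent signals; Eq. 2 lets an LLM reconcile them."""
    ocr = run_ocr(image)
    caption = run_captioner(image)
    attributes = run_tagger(image)
    labels = clip_zero_shot(image, taxonomy)
    return fuse_with_llm(ocr, caption, attributes, labels)

caption_star, tags_star = extract_and_refine(image=None, taxonomy=["invoice", "poster"])
print(caption_star, tags_star)
```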
As demonstrated in Eq. 2, we harness the power of LLMs and formulate prompts to predict tags (\( S_{\text{Tag}}^* \)) and captions (\( S_{\text{Caption}}^* \)) for document images. The ablation study of these prompts is explored in the Appendix.

How to Safeguard Data Collection? Despite the impressive zero-shot capabilities of LLMs for sequence reasoning, prediction errors and uncertainties may still arise. Some failure cases can be addressed with stricter prompts; even so, fully relying on LLMs for data selection poses heavy risks. Fig. 4 illustrates our data selection diagram, strengthened by outlier detection. We treat each batch of sampled web images (\( D_{\text{selected}} \)) as a mix of in-distribution (ID) data (\( D_{\text{pseudo-in}} \)) and OOD data (\( D_{\text{pseudo-out}} \)). In \( D_{\text{pseudo-in}} \), all samples belong to taxonomy classes we have already explored, while \( D_{\text{pseudo-out}} \) comprises samples from document types we have not explored yet.

Alg. 1 outlines the process by which we integrate outlier detection into data collection and taxonomy discovery. Given the dataset pool denoted as \( D_{\text{full}}^t \), where \( t \) indicates the time step, we initially select a batch of data, denoted \( D_{\text{select-in}}^t \), from \( D_{\text{full}} \). Based on the current taxonomy \( Y^t \), we first partition \( Y^t \) into 100 clusters using the \( K \)-means algorithm (Jiang et al., 2024). Afterwards, we sample \( D_{\text{pseudo-in}}^t \) from \( D_{\text{select-in}}^t \) based on the \( K \) clusters, corresponding to step ① in Fig. 4. Specifically, we randomly select a sub-category \( y_k^t \) from cluster \( k \) as the representative category. We then use CLIP to sample \( n_t \) documents classified under \( y_k^t \). The pseudo ID data \( D_{\text{pseudo-in}}^t \) comprises a total of \( 100 \cdot n_t \) document images. This selection process ensures balanced sampling within the current taxonomy \( Y^t \).³

³Without specific indication, LLM in this paper refers to GPT-4.

Sampling only from ID data can lead to biased distributions, because models trained on such data may silently fail when faced with OOD inputs. Enhancing the diversity of ADOPD by incorporating hard negative examples can therefore improve overall diversity. To this end, we explicitly sample an OOD subset \( D_{\text{pseudo-out}}^t \) from the remaining candidate pool \( D_{\text{select-in}}^t \setminus D_{\text{pseudo-in}}^t \), corresponding to step ③ in Fig. 4. To obtain \( D_{\text{pseudo-out}}^t \), we employ \( K \)-means to segregate outliers from \( D_{\text{select-in}}^t \). Specifically, we extract image features \( \mathbf{Z}^t \) for \( D_{\text{selected}}^{0:t-1} \cup D_{\text{pseudo-in}}^t \), where \( D_{\text{selected}}^{0:t-1} = D_{\text{pseudo-in}}^{0:t-1} \cup D_{\text{pseudo-out}}^{0:t-1} \). In this context, \( \{\mathbf{z}_k^t\} \), with \( k \in [0, 100) \), denotes the set of \( K \)-means centroids estimated from \( D_{\text{selected}}^{0:t-1} \cup D_{\text{pseudo-in}}^t \). The outlier score of a candidate is the Euclidean distance between its feature and the nearest centroid:
\[
s^t(z) = \min_{k \in [0, 100)} \|z - \mathbf{z}_k^t\|_2, \quad z \in (D_{\text{select-in}}^t \setminus D_{\text{pseudo-in}}^t) \tag{3}
\]
where \( D_{\text{pseudo-out}}^t \) contains the data points whose outlier scores rank in the top \( n_t \) across the \( K \) clusters.
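A minimal sketch of this outlier-scoring step (Eq. 3), assuming image features have already been extracted: \( K = 100 \) follows the text, while the feature dimension, batch sizes, and \( n_t \) below are placeholder values.

```python
# Fit K-means centroids on previously selected data plus the current
# pseudo-ID batch, then score each remaining candidate by its distance
# to the nearest centroid (Eq. 3). Sizes here are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
K, d = 100, 64

Z_seen = rng.normal(size=(2000, d))        # D_selected^{0:t-1} u D_pseudo-in^t
Z_candidates = rng.normal(size=(500, d))   # D_select-in^t \ D_pseudo-in^t

km = KMeans(n_clusters=K, n_init=4, random_state=0).fit(Z_seen)
scores = km.transform(Z_candidates).min(axis=1)   # s^t(z) = min_k ||z - z_k||_2

n_t = 20                                          # top-n_t scores -> D_pseudo-out^t
pseudo_out_idx = np.argsort(scores)[-n_t:]
```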
Given the selected ID and OOD data, we have \( D_{\text{selected}}^t = D_{\text{pseudo-in}}^t \cup D_{\text{pseudo-out}}^t \), which is ready for annotation (step ④ in Fig. 4). Before annotation, we update \( Y^{t-1} \) using the newly selected data \( D_{\text{selected}}^t \). Here, we employ the approach outlined in Eq. 2, leveraging the LLM to predict the presence of new labels and obtain the updated taxonomy \( Y^t \). We use prompt-based methods to predict document tags by considering four aspects: visual (\( P_{\text{visual}} \)), textual (\( P_{\text{textual}} \)), layout (\( P_{\text{layout}} \)), and multimodal (\( P_{\text{multimodal}} \)). Each aspect is addressed through unique input combinations in the prompt. Additional details about the prompts can be found in the Appendix. After obtaining outputs, we implement two safeguards to filter out failures. Firstly, we design a prompt-based summarizer (\( P_{\text{summary}} \)) using the LLM to obtain 10 tags by summarizing the tags predicted through the four prompt strategies. Secondly, after label generation by the LLM, human annotators review and eliminate labels that are confusing or irrelevant to the document.

3.3 MODEL-ASSISTED DATA ANNOTATION

Data Collection. The images in ADOPD are sourced from Laion-HR (Laion High Resolution, https://laion.ai/), which comprises high-resolution web images, including multilingual document images. Laion-HR provides a foundation for our multi-lingual, multi-modal ADOPD. We leverage pretrained models with humans in the loop to collect and filter data. The process includes the following steps:

- **Model-Assisted Data Selection:** We first select images based on Laion-HR's metadata by applying criteria such as pwatermark < 0.8 and punsafe < 0.5. Then, we construct a document discovery dataset using natural image datasets (e.g., ImageNet, etc.) and document datasets (e.g., DocLayNet, etc.), and finetune a DiT-based binary image classifier (Li et al., 2022a) to identify potential documents (probability > 0.8). Subsequently, we apply an OCR tool (Du et al., 2021) and retain images with a word count exceeding 10. Although the metadata provides watermark predictions, we additionally train a watermark detection model to filter watermarked images more accurately. We compute MD5 hashes and Hamming distances between images to exclude duplicates, even when document images in Laion-HR have different URLs (a simplified sketch of this filtering chain follows the list). Fig. 10 in the Appendix shows the percentage of data selected at each step.
- **Human Selection and Sensitive Verification:** Based on our taxonomy obtained by Alg. 1, we adopt a pretrained CLIP model for zero-shot tagging. Human annotators then select safe and valid images for all categories. We do not rigidly require images to be print-format documents; instead, we suggest that annotators choose images that resemble documents. Annotators are tasked with filtering the dataset for potentially sensitive information.
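The model-assisted selection step above reduces, in essence, to a short chain of threshold checks plus hashing. The sketch below mirrors the thresholds quoted in the text (pwatermark < 0.8, punsafe < 0.5, classifier probability > 0.8, word count > 10); the watermark-model cutoff is an assumed placeholder, and the near-duplicate Hamming-distance check is omitted.

```python
import hashlib

def is_candidate(meta, image_bytes, doc_prob, ocr_words, watermark_prob, seen_hashes):
    """Return True if an image survives every filter described above."""
    if meta["pwatermark"] >= 0.8 or meta["punsafe"] >= 0.5:
        return False                      # Laion-HR metadata filters
    if doc_prob <= 0.8:
        return False                      # DiT-based document classifier
    if len(ocr_words) <= 10:
        return False                      # OCR word-count filter
    if watermark_prob > 0.5:              # assumed cutoff for the extra watermark model
        return False
    digest = hashlib.md5(image_bytes).hexdigest()
    if digest in seen_hashes:             # exact-duplicate removal via MD5
        return False                      # (Hamming-distance near-dup check omitted)
    seen_hashes.add(digest)
    return True

seen = set()
meta = {"pwatermark": 0.1, "punsafe": 0.0}
print(is_candidate(meta, b"raw-image-bytes", 0.95, ["word"] * 12, 0.1, seen))  # True
print(is_candidate(meta, b"raw-image-bytes", 0.95, ["word"] * 12, 0.1, seen))  # False (duplicate)
```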
---

**Algorithm 1: Data-Driven Taxonomy Discovery**

**Input:** \( Y^0, D_{\text{init}}, \epsilon \)
**Output:** Expanded taxonomy \( Y \)

```
while True do
    ① Collect D_select-in^t from D_init;
    ② Select D_pseudo-in^t from D_select-in^t based on Y^{t-1};
    ③ Generate image embeddings Z for D_select-in^{0:t-1} ∪ D_pseudo-in^t;
    ④ Calculate s^t(z), ∀z ∈ (D_select-in^t \ D_pseudo-in^t);
    ⑤ Select outlier data D_pseudo-out^t;
    foreach z ~ D_pseudo-in^t ∪ D_pseudo-out^t do
        ⑥ Predict new labels using the four prompts;
        ⑦ Update Y^t with the newly predicted labels;
        ⑧ Refine Y^t with human annotators;
    if |Y^t| > ε then Stop;
    else t ← t + 1;
```

---

Note that the taxonomy \( Y \) gradually changes as data collection grows.

Data Annotation. The annotation process of ADOPD prioritizes the core principle of understanding the document's structure and layout. We avoid imposing overly rigid constraints on annotation.

- **Model-Assisted Manual Annotation**: In the early stage, we utilize a pretrained CropFormer (Qi et al., 2023) to generate pseudo entity masks. Annotators follow guidelines to adjust the masks by adding, modifying, or deleting as needed. After a sufficient amount of data has been annotated, CropFormer is retrained with the new annotations and serves as the seed model for data preprocessing in the subsequent stage. Through this iterative process, our model progressively reduces annotation costs while increasing annotation efficiency. Fig. 12 in the Appendix illustrates the effectiveness of model-assisted annotation. During annotation, we provide document captions (\( S_{\text{Caption}}^* \)) and tags (\( S_{\text{Tag}}^* \)) to aid annotators in understanding the document.
- **Multi-Task and Multi-Lingual Annotation**: ADOPD stands out from other document datasets for its multi-task and multi-lingual characteristics. Our primary focus is on English and CJK (Chinese, Japanese, Korean) documents, with 60k document images in English and the remainder in the other languages. We reserve a private test set for the competition. Each subset covers the four tasks introduced in Sec. 3.1. Specifically, for Doc2Mask annotation, we refrain from imposing semantic constraints on labeling entities, encouraging annotators to come up with open-ended names or descriptions that are accurate (e.g., "doc-in-doc", "banner", "infographic", "natural image", etc.). As our task focuses on document entity segmentation, we do not incorporate label information in segmentation evaluations. For Doc2Box, we have stricter rules, requiring annotators to comprehend words and group them according to their semantic meaning. The annotation files follow the MSCOCO annotation format.

4 EXPERIMENTS

4.1 IMPLEMENTATION DETAILS

**Baseline Models.** We experiment on a subset of ADOPD, with training and validation sets comprising 50k and 10k images, respectively. (1) Doc2Mask: we evaluate two frameworks, Mask2Former (Cheng et al., 2021) and CropFormer (Qi et al., 2023), to identify which is best suited for the document page decomposition task. We perform ablation studies on these frameworks using different backbones, such as Swin Transformer (Swin) (Liu et al., 2021), HorNet (Rao et al., 2022), and ViT (Parmar et al., 2018).
(2) Doc2Box: we similarly benchmark three models: Faster R-CNN (Ren et al., 2015), Deformable-DETR (Zhu et al., 2021), and Cascade Mask R-CNN (MR-CNN) (Cai & Vasconcelos, 2019). We also enhance Cascade MR-CNN by incorporating pretrained ViT backbones, specifically DINOv1 (Caron et al., 2021) and DINOv2 (Oquab et al., 2023) with ViT-Adapter (Chen et al., 2022). (3) Doc2Seq: we build an encoder-decoder model using pretrained ViT and GPT-2 (Radford et al., 2019), fine-tuned on 80k image-caption pairs for training and 20k for validation. The captions are generated using the prompts specified in Eq. 2. Acknowledging the gap between LLM-generated and human annotations, we collect an extra 5k human-annotated validation set for further comparison. (4) Doc2Tag: we validate our taxonomy discovery using the CLIP ViT-G/14 model and report the OOD performance on RVL-CDIP (Harley et al., 2015).

We build Doc2Mask using Detectron2 (Wu et al., 2019) and Doc2Box with MMDetection (Chen et al., 2019). All experiments are run on NVIDIA A100-80GB GPUs. Following standard practice (Ghiasi et al., 2021), we employ an input resolution of 1024×1024, achieved by re-scaling and padding the shorter side of the image. Doc2Mask models (CropFormer and Mask2Former) and Doc2Box models (Faster R-CNN, Cascade MR-CNN) are trained for 15 epochs with a batch size of 32 on 8 GPUs to achieve full convergence. We train Deformable-DETR for 30 epochs due to its slower convergence. We build the other models (Doc2Seq and Doc2Tag) with the Hugging Face Transformers framework (Wolf et al., 2020). For Doc2Seq, we train for 50 epochs on 8 GPUs with a total batch size of 800. Finetuning CLIP ViT-G/14 on Doc2Seq data takes 100 epochs on 8×8 GPUs.⁵

**Evaluation Metrics.** We evaluate Doc2Mask and Doc2Box with the mean average recall (mAR) and mean average precision (mAP) metrics. This assessment considers ten overlap thresholds ranging from 0.5 to 0.95 in increments of 0.05 (mAP@0.5–0.95). For OOD evaluation, we use metrics including the Area Under the Receiver Operating Characteristic (AUROC), False Positive Rate at 95% Recall (FPR95), maximum concept matching (MCM) score (Ming et al., 2022), and accuracy (ACC). For Doc2Seq, we use BLEU@n (B@n) (Papineni et al., 2002), CIDEr (C) (Vedantam et al., 2015), METEOR (M) (Denkowski & Lavie, 2014), and ROUGE (R) (Lin, 2004) for evaluation.

⁵This research's data collection and annotation were completed in October 2023.

4.2 DOCUMENT PAGE DECOMPOSITION TASKS ANALYSIS

**Comparing the Model Architectures.** Table 2 compares Mask2Former and CropFormer on Doc2Mask. CropFormer outperforms Mask2Former with similar backbones and pretraining datasets. CropFormer's superiority stems from its integration of image crops alongside the full image input, enhancing mask prediction with detailed information. This highlights the model's ability to handle multi-view and local image information, especially in the context of document images. We compare various object detection models in Table 3, including Faster R-CNN, Deformable-DETR, and Cascade MR-CNN. While Deformable-DETR improves, it does not significantly outperform anchor-based detectors like Faster R-CNN and Cascade MR-CNN. Despite achieving a higher mAR, its limited mAP improvement may be due to the distinct data distribution of text boxes, which differs from that of general objects in natural images with clear classification boundaries. Meanwhile, Cascade MR-CNN, which combines Mask R-CNN and Cascade R-CNN, achieves the highest mAP.
It enhances instance segmentation performance and aids text detection, especially for words requiring pixel-level feature representation.

**Comparing Backbones and Pretraining.** Table 2 also investigates the impact of vision backbones pretrained on various datasets; references to EntitySeg, ImageNet, and SA-1B indicate pretraining on the respective datasets. SAM (Kirillov et al., 2023), pretrained on SA-1B, outperforms the Swin/HorNet models trained on ImageNet or EntitySeg. This can be attributed to two factors: firstly, SA-1B is sufficiently large (around 1 billion masks); secondly, while Swin/HorNet architectures are well-suited for segmentation, SAM is trained with pixel-level supervision, enabling it to acquire the improved pixel-level representations crucial for document image understanding. Table 3 compares different backbones on Doc2Box. DINOv2-p14 + ViT-Adapter excels with higher mAP at slightly lower mAR, demonstrating the advantage of self-supervised backbones over supervised ImageNet-pretrained alternatives. This is crucial for document analysis, given the absence of high-quality ImageNet-like pretraining data for documents. Comparing DINOv1-p8 and DINOv1-p16 suggests that fine-grained patches enhance document image features. Fig. 5b illustrates the results of Doc2Box using DINOv2-p14 + ViT-Adapter.

**Evaluating Generalization Ability.** In Table 4(a), we compare the model trained on ADOPD with those fine-tuned on EntitySeg. Combined with Fig. 5a, it is evident that models fine-tuned on ADOPD better focus on fine-grained document elements and make more reasonable predictions for document entity masks. Conversely, models pretrained on EntitySeg can predict some masks but tend to excessively detect elements common in natural images (e.g., people, objects) while neglecting the document's inherent layout. Table 4(b) validates the cross-dataset pretraining advantage of ADOPD, focusing on the evaluation set of DocLayNet. For a fair comparison, we consider only text detection without categorizing the boxes. Directly applying the model fine-tuned on ADOPD to DocLayNet yields zero-shot results with high recall. Furthermore, fine-tuning on DocLayNet with ADOPD-pretrained backbones outperforms fine-tuning with ImageNet backbones. Note that DocLayNet's evaluation covers only its own narrow set of document types and therefore cannot reveal the generalization capability of ADOPD to other taxonomy types.
Table 2: Doc2Mask results for Mask2Former (top block) and CropFormer (bottom block) with different backbones and pretraining data.

| Method | Backbone | Pretrain | mAP | AP50 | AP75 | mAR |
|-------------|----------|-----------|-------|-------|-------|------|
| Mask2Former | SwinT | EntitySeg | 31.80 | 37.16 | 32.33 | 34.0 |
| Mask2Former | SwinT | ImageNet | 28.95 | 34.36 | 29.51 | 31.0 |
| Mask2Former | SwinL | EntitySeg | 32.81 | 38.14 | 33.17 | 35.3 |
| Mask2Former | SwinL | ImageNet | 30.21 | 36.30 | 31.18 | 32.5 |
| Mask2Former | HorNetL | EntitySeg | 34.39 | 40.09 | 34.95 | 36.9 |
| Mask2Former | HorNetL | ImageNet | 32.96 | 38.22 | 33.30 | 35.2 |
| Mask2Former | ViTB | SA-1B | 35.59 | 41.05 | 36.35 | 37.6 |
| Mask2Former | ViTL | SA-1B | 35.81 | 40.27 | 36.53 | 37.8 |
| CropFormer | SwinT | EntitySeg | 35.46 | 41.58 | 35.60 | 38.5 |
| CropFormer | SwinT | ImageNet | 34.73 | 41.50 | 35.20 | 41.0 |
| CropFormer | SwinL | EntitySeg | 36.03 | 42.30 | 36.73 | 39.2 |
| CropFormer | SwinL | ImageNet | 37.73 | 44.62 | 38.49 | 40.7 |
| CropFormer | HorNetL | EntitySeg | 35.05 | 40.00 | 35.75 | 37.6 |
| CropFormer | HorNetL | ImageNet | 36.06 | 41.84 | 36.69 | 38.7 |
| CropFormer | ViTB | SA-1B | 35.87 | 41.92 | 36.73 | 38.4 |
| CropFormer | ViTL | SA-1B | 39.56 | 45.72 | 40.33 | 42.4 |

Table 3: Doc2Box text detection results (box quality reported as mAP, AP50, AP75, and mAR).

| Method | Backbone | mAP | AP50 | AP75 | mAR |
|-----------------|--------------------------|------|------|------|------|
| Faster R-CNN | ResNet50 | 61.1 | 78.9 | 67.0 | 74.9 |
| Faster R-CNN | ResNet101 | 61.4 | 78.6 | 67.3 | 74.3 |
| Deformable-DETR | ResNet50 | 65.0 | 82.2 | 72.1 | 81.6 |
| Deformable-DETR | ResNet101 | 65.5 | 82.8 | 72.7 | 81.6 |
| Cascade MR-CNN | ResNet50 | 64.7 | 80.9 | 71.0 | 79.4 |
| Cascade MR-CNN | ResNet101 | 65.3 | 71.7 | 68.7 | 79.1 |
| Cascade MR-CNN | DINOv1-p16 + ViT-Adapter | 63.6 | 80.4 | 69.6 | 76.3 |
| Cascade MR-CNN | DINOv1-p8 + ViT-Adapter | 63.2 | 80.3 | 69.5 | 76.2 |
| Cascade MR-CNN | DINOv2-p14 + ViT-Adapter | 67.0 | 82.7 | 73.2 | 77.8 |

Table 4: Ablation studies: (a) comparing models trained with ADOPD against models fine-tuned only on EntitySeg, where values in parentheses are the zero-shot results of the EntitySeg models; (b) cross-dataset evaluation assessing the generalizability of ADOPD on DocLayNet (cells report mAP / mAR).

(a) Results with and without ADOPD.

| Backbone | mAP | AP<sub>50</sub> | AP<sub>75</sub> | mAR |
|----------|---------------|-----------------|-----------------|-------------|
| SwinT | 32.81 (16.27) | 38.14 (22.52) | 33.17 (15.88) | 35.3 (29.3) |
| HorNetL | 34.39 (15.83) | 40.09 (21.74) | 34.95 (15.38) | 36.9 (29.0) |

(b) Cross-dataset evaluation on DocLayNet.

| Method | Backbone | Zero-Shot (ADOPD) | Finetune (ImageNet) | Finetune (ADOPD) |
|-----------------|------------------------|-------------------|---------------------|------------------|
| Faster R-CNN | ResNet<sub>50</sub> | 0.9 / 58.5 | 43.0 / 55.5 | 44.5 / 60.4 |
| Faster R-CNN | ResNet<sub>101</sub> | 1.0 / 56.6 | 46.0 / 58.5 | 47.0 / 60.7 |
| Deformable-DETR | ResNet<sub>50</sub> | 2.2 / 80.4 | 74.7 / 87.2 | 75.4 / 88.9 |
| Deformable-DETR | ResNet<sub>101</sub> | 2.6 / 79.0 | 75.4 / 85.9 | 77.2 / 88.1 |

(a) Mask Prediction Comparison: From top to bottom, we showcase the original image and predictions from the best models trained on ADOPD, EntityV2, and SAM (SA-1B), respectively. (b) Document text detection visualization results, each image paired with its caption and tags.

Figure 5: Visualization of ADOPD images and results for Doc2Mask and Doc2Box.

**Prompt-Guided Context-Aware Captioning Benefits Vision-Language Modeling.** Table 5 evaluates caption quality. We collect 5K test examples to evaluate the effectiveness of Doc2Seq. GPT-4 serves as a reference captioner; BLIP<sub>Large</sub> (Li et al., 2022b) and BLIP2-OPT-2.7b (Li et al., 2023) are obtained from the Hugging Face model hub, while the ViT<sub>Base-P32-384</sub>+GPT2 and ViT<sub>Base-P16-384</sub>+GPT2 models are fine-tuned on Doc2Seq. While GPT-4 captions achieve a commendable CIDEr score, indicating consensus, a noticeable disparity persists between them and human annotations. Models fine-tuned on Doc2Seq can attain performance similar to GPT-4 on B@n, but show significantly lower CIDEr scores. In Fig.
6(left), human-written captions are notably longer than machine-generated ones, impacting reference-based evaluation. These findings highlight the challenge of document captioning: diverse interpretations and varying caption lengths complicate evaluation. To verify the benefits of prompt-guided captions, we fine-tune CLIP with Doc2Seq data and conduct two experiments: zero-shot evaluation on the RVL-CDIP test set, and supervised training on RVL-CDIP based on the fine-tuned CLIP vision backbone. While the fine-tuned CLIP improves zero-shot capability for specific classes (e.g., Budget and Presentation), the overall enhancement is comparable to the raw pretrained CLIP. We observe from Fig. 6(right) and Table 6 that fine-tuning the CLIP ViT backbone, initially trained on Doc2Seq, with a classifier layer for separate training on RVL-CDIP results in a noticeable improvement. This underscores the importance of caption rewriting for handling noisy data.

Figure 6: Ablation study on captions.

Table 5: Ablation experiments on Doc2Seq.

| Method | B@1 | B@2 | B@3 | B@4 | M | R | C |
|---------------------------------------|------|------|------|-----|------|------|------|
| BLIP<sub>Large</sub> | 27.0 | 16.7 | 11.7 | 8.5 | 12.8 | 22.4 | 84.7 |
| BLIP2-OPT-2.7b | 12.4 | 8.5 | 6.4 | 5.0 | 9.3 | 22.6 | 18.3 |
| ViT<sub>Base-P32-384</sub>+GPT2 | 4.3 | 3.6 | 3.0 | 2.6 | 10.7 | 25.1 | 16.5 |
| ViT<sub>Base-P16-384</sub>+GPT2 | 12.3 | 7.6 | 5.3 | 3.9 | 9.0 | 21.8 | 18.0 |
| ViT<sub>Base-P32-384</sub>+GPT2 | 22.5 | 9.2 | 4.4 | 2.5 | 7.7 | 16.8 | 9.8 |
| ViT<sub>Base-P16-384</sub>+GPT2 | 16.7 | 5.8 | 2.3 | 1.0 | 5.8 | 13.9 | 4.4 |
| ViT<sub>Base-P32-384</sub>+GPT2 | 23.4 | 9.7 | 4.7 | 2.7 | 8.0 | 17.2 | 11.0 |
| ViT<sub>Base-P16-384</sub>+GPT2 | 17.3 | 6.0 | 2.4 | 1.1 | 6.0 | 14.1 | 5.3 |

Table 6: Performance of Models for Per-Class Classification¹

| Model | Type | Letter | Form | Email | Hw | Ad | SR | SP | Spec | FF | NA | Bgt | Inv | Prsn | Qnr | Rsm | Memo | Avg |
|----------------|------------|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| DiT<sub>base</sub> | Supervised | 98.92 | 98.76 | 99.43 | 98.99 | 99.64 | 97.86 | 99.84 | 99.31 | 99.52 | 99.18 | 99.24 | 98.83 | 99.76 | 99.15 | 99.28 | 99.16 | 99.18 |
| ViT-G/14 | Zero-Shot | 98.52 | 98.76 | 99.78 | 91.89 | 93.46 | 76.51 | 87.64 | 84.82 | 32.81 | 99.49 | 33.07 | 99.35 | 54.81 | 93.75 | 90.46 | 98.23 | 99.58 |
| ViT-G/14+ADOPD | Zero-Shot | 94.32 | 87.55 | 75.33 | 85.79 | 27.46 | 80.33 | 96.21 | 63.46 | 38.72 | 98.79 | 63.81 | 93.93 | 72.26 | 84.25 | 84.41 | 95.02 | 77.73 |
| DiT<sub>base</sub> | Supervised | 92.41 | 86.83 | 98.97 | 96.13 | 94.63 | 87.11 | 95.22 | 94.90 | 96.68 | 92.77 | 92.73 | 94.07 | 87.38 | 90.84 | 97.67 | 95.14 | 93.36 |
| ViT-G/14 | Supervised | 86.20 | 76.70 | 96.28 | 93.48 | 91.81 | 71.94 | 91.10 | 89.72 | 94.97 | 83.68 | 81.24 | 87.44 | 78.26 | 84.37 | 92.94 | 85.87 | 86.57 |
| ViT-G/14+ADOPD | Supervised | 90.87 | 84.48 | 96.98 | 95.34 | 93.76 | 82.39 | 93.51 | 93.00 | 95.61 | 89.81 | 89.62 | 92.85 | 84.29 | 90.23 | 96.45 | 92.62 | 91.38 |

¹The abbreviations are: Handwritten (Hw), Advertisement (Ad), Scientific Report (SR), Scientific Publication (SP), Specification (Spec), File Folder (FF), News Article (NA), Budget (Bgt), Invoice (Inv), Presentation (Prsn), Questionnaire (Qnr), and Resume (Rsm).

**Data-Driven Document Taxonomy Analysis.** To verify Alg. 1, we collect the ID dataset from both RVL-CDIP and Laion-HR based on the 16 classes provided in RVL-CDIP. We sample OOD categories such as "Magazine (M)", "Comic (C)", "Guidebook (G)", "Yearbook (Y)", "Worksheet (W)", and "Open Book (OB)" from \( Y \), and collect the OOD data from Laion-HR.
In the Appendix, Table 7 shows OOD detection results for two variants, predicting 16 and 50 centroids separately. The K-means method with 50 centroids excels at detecting outliers across all categories. Fig. 7 (center) displays taxonomy expansion with HITL taxonomy cleaning. We start with an initial ID set of 10 classes selected from RVL-CDIP. At every step, we sample 10 detected outlier data points. As the data grows, our outlier detection method successfully retrieves outliers for the majority of novel categories. Fig. 7 (left, right) illustrates the distribution of "Comic" and ID data, where "Comic" is detected as an outlier in the first step; red indicates the detected outlier samples.

**Responsible AI Analysis.** During data cleaning, we conduct a comprehensive Responsible AI analysis, tackling biases in sensitive areas such as nudity, sexuality, and violence. We meticulously filter sensitive data with input from 15 diverse evaluators; Fig. 8 displays their geographic distribution. If any evaluator deems an image inappropriate, we label it as sensitive. After review, we remove 9.29% of potentially sensitive images, ensuring that the majority of the 120K images remain non-sensitive. This rigorous process guarantees a safer and less biased dataset, promoting fairness and inclusivity in our models.

5 CONCLUSION

This paper introduces ADOPD, a large-scale dataset for document page decomposition, and outlines a systematic process including data collection, taxonomy analysis, model-assisted data annotation, and HITL processes. We conduct comprehensive analyses and detailed experimental comparisons across four tasks, demonstrating the value of ADOPD. It opens up numerous opportunities for future exploration and the development of foundational models for document understanding, aiming to catalyze advancements in document analysis.

REFERENCES

Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. In IJCNLP-AACL, 2023.

Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: High quality object detection and instance segmentation. TPAMI, 2019.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021.

Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. MMDetection: Open MMLab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.

Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In ICLR, 2022.

Bowen Cheng, Alexander G. Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. In NeurIPS, 2021.

Hiuyi Cheng, Peirong Zhang, Sihang Wu, Jiaxin Zhang, Qiyuan Zhu, Zecheng Xie, Jing Li, Kai Ding, and Lianwen Jin. M⁶Doc: A large-scale multi-format, multi-type, multi-layout, multi-language, multi-annotation category dataset for modern document layout analysis. In CVPR, 2023.

CLIP ViT-G/14. https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K, 2023.

Michael Denkowski and Alon Lavie.
Meteor Universal: Language specific translation evaluation for any target language. In WMT, 2014.

Yuning Du, Chenxia Li, Ruoyu Guo, Cheng Cui, Weiwei Liu, Jun Zhou, Bin Lu, Yehua Yang, Qiwen Liu, Xiaoguang Hu, et al. PP-OCRv2: Bag of tricks for ultra lightweight OCR system. arXiv preprint arXiv:2109.03144, 2021.

Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 2010.

Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681–694, 2020.

Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D. Cubuk, Quoc V. Le, and Barret Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In CVPR, 2021.

Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Gang Wang, Jianfei Cai, et al. Recent advances in convolutional neural networks. Pattern Recognition, 2018.

Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, and Tong Sun. Unified pretraining framework for document understanding. In NeurIPS, 2021.

Jiuxiang Gu, Yifei Ming, Yi Zhou, Jason Kuen, Vlad I Morariu, Handong Zhao, Ruiyi Zhang, Nikolaos Barmpalios, Anqi Liu, Yixuan Li, et al. A critical analysis of out-of-distribution detection for document understanding. In EMNLP, 2023.

Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR, 2015.

Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. LayoutLMv3: Pre-training for document AI with unified text and image masking. In ACM MM, 2022.
X1lDOv09hG
The paper only considers the linear score estimator and derives the optimal closed-form solution. How do you know the ground truth score function is linear? For a general score function, can we still have high variance parameters?
HIGH VARIANCE SCORE FUNCTION ESTIMATES HELP DIFFUSION MODELS GENERALIZE Anonymous authors Paper under double-blind review ABSTRACT How do diffusion-based generative models generalize beyond their training set? In particular, do they perform something similar to kernel density estimation? If so, what is the kernel, and which aspects of training and sampling determine its form? We argue that a key contributor to generalization is the fact that the denoising score matching objective usually used to train diffusion models tends to obtain high variance score function estimates at early times. We investigate this claim by mathematically studying score estimation for (unconditional) diffusion models using estimators that are linear in a set of feature maps. We show that, using standard choices (e.g., for the time sampling distribution), the effect of this high variance is mathematically equivalent to adding a noise term to the probability flow ODE. Moreover, in the special case that the score is learned independently for different times, reverse diffusion is on average equivalent to convolving the training distribution with a data-dependent kernel function. 1 INTRODUCTION Despite their empirical successes, it is unclear how diffusion-based generative models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020) are able to generalize. For example, how are image models able to strike a balance between generating images which are novel, and generating images which are like those from their training set? Generating samples involves two steps: training a model, usually using a denoising score matching objective (Vincent, 2011; Song & Ermon, 2019); and sampling from that model, which can be viewed as numerically integrating an ordinary or stochastic differential equation (ODE/SDE) (Song et al., 2021). It is at least somewhat clear where generalization probably does not come from. Although noise in the sampling process can increase sample quality—as measured by, e.g., Fréchet inception distance (FID) scores—diffusion models can achieve high sample quality without this (Karras et al., 2022). Inaccuracies in numerical integration also do not appear to be responsible for generalization, since more precise integration (e.g., by using smaller time steps, or a more sophisticated numerical integration scheme) generally improves sample quality (Liu et al., 2022). Generalization also does not appear to be due to perfectly optimizing the denoising score matching objective, since the optimal solution is the score function of the training distribution (Vincent, 2011). In particular, since models are trained using a finite number of examples, the optimal score function is that of a mixture of delta functions centered at the training data. Sampling using such a score function would only ever yield one of the training examples, rather than a novel sample (see Appendix A for a quick review of these points). Finally, although function approximators like neural networks exhibit interesting inductive biases in what they readily learn from training data (Bordelon et al., 2020; Canatar et al., 2021), the architectures supporting large models like Stable Diffusion (Rombach et al., 2022) are flexible enough to in principle learn something extremely close to the optimal score. Although the inductive biases of neural networks are probably part of the story, it is unlikely that these types of inductive biases alone are responsible for generalization. Where, then, might the ability to generalize come from? 
In this paper, we examine the possibility that generalization ability arises at least in part from using an objective function whose optimization typically produces high variance estimates of the score function. We mathematically show that this high variance effectively contributes a (generally state- and time-dependent) noise term to the probability flow ODE. Moreover, the form of the kernel that appears in this noise term appears to have properties that support generalization, apparently by implementing an inductive bias about feature variance.

2 MATHEMATICAL FORMULATION

Diffusion models. Our mathematical formulation of diffusion models will be similar to that of Song et al. (2021). Training data from a distribution \( p(x_0) \) on \( \mathbb{R}^D \) is corrupted by a forward process, producing a distribution of corrupted data \( p(x_t) := \int p(x_t | x_0) p(x_0) \, dx_0 \). Data can be 'denoised' using a probability flow ODE involving the score function \( s(x_t, t) := \nabla_{x_t} \log p(x_t) \); concretely,
\[
\dot{x}_t = -\beta_t x_t + g_t \eta_t \quad \text{(forward process, integrate from } t = 0 \text{ to } t = t_{\text{max}}) \tag{1}
\]
\[
\dot{x}_t = -\beta_t x_t - \frac{1}{2} g_t^2 s(x_t, t) \quad \text{(reverse process, integrate from } t = t_{\text{max}} \text{ to } t = 0) \tag{2}
\]
where \( \eta_t \in \mathbb{R}^D \) is Gaussian white noise, and both \( \beta_t > 0 \) and \( g_t > 0 \) are smooth functions of \( t \in [0, t_{\text{max}}] \). The forward process' marginals are \( p(x_t | x_0) = N(x_t; \alpha_t x_0, \sigma_t^2 I) \), where
\[
\alpha_t := e^{-\int_0^t \beta_s \, ds}, \qquad \sigma_t^2 := \alpha_t^2 \int_0^t \frac{g_s^2}{\alpha_s^2} \, ds. \tag{3}
\]
One sometimes assumes specific relationships between the functions above; for example, the variance-preserving SDE (VP SDE) assumes \( \beta_t = g_t^2 / 2 \), so that \( \alpha_t^2 + \sigma_t^2 = 1 \) for all times \( t \). In what follows, we will make no such assumptions.

Denoising score matching. A naive approach to learning a score estimator \( \hat{s}_\theta(x_t, t) \) might use
\[
J_0(\theta) := \frac{1}{2} \mathbb{E}_{t, x_t | t} \left\{ \| \hat{s}_\theta(x_t, t) - s(x_t, t) \|_2^2 \right\} = \frac{1}{2} \int \lambda(t) \| \hat{s}_\theta(x_t, t) - s(x_t, t) \|_2^2 \, p(x_t) \, dx_t \, dt \tag{4}
\]
where \( \lambda(t) \) is a time sampling distribution on \( [0, t_{\text{max}}] \). In practice, only samples from \( p(x_0) \) are available, so it can be difficult to estimate \( s(x_t, t) \); we can avoid this via the denoising score matching (DSM) objective (Vincent, 2011; Song & Ermon, 2019)
\[
J_1(\theta) := \frac{1}{2} \mathbb{E}_{t, x_0, x_t | t} \left\{ \| \hat{s}_\theta(x_t, t) - s(x_t, t; x_0) \|_2^2 \right\} = \frac{1}{2} \int \lambda(t) \| \hat{s}_\theta(x_t, t) - \nabla_{x_t} \log p(x_t | x_0) \|_2^2 \, p(x_t | x_0) p(x_0) \, dx_t \, dx_0 \, dt. \tag{5}
\]
Both the naive and DSM objectives have the same optima (see Appendix A); however, before an optimal solution is reached, the variance of the score function estimates obtained using each objective is substantially different. Heuristically, this is because the proxy score function target has a large variance (a singular variance, in fact) at small times \( t = \Delta t \):
\[
\text{Cov}_{x_t} [\nabla_{x_t} \log p(x_t | x_0)] = \frac{1}{\sigma_t^2} I \xrightarrow{t \to 0} \frac{1}{g_0^2 \Delta t} I. \tag{6}
\]
This means that the score function is typically being fit to close-to-random noise at small times \( t \).
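A small numerical illustration of this point, assuming the VP SDE with a constant \( \beta_t = 1 \) (a choice made purely for illustration): the per-coordinate variance of the DSM target, \( 1/\sigma_t^2 \), diverges like \( 1/(g_0^2 t) \) as \( t \to 0 \).

```python
# Eq. 6 for the VP SDE with constant beta_t = 1 (so g_t^2 = 2 * beta):
# the covariance of the DSM target is I / sigma_t^2, which diverges as
# t -> 0 and approaches a constant at large times.
import numpy as np

beta = 1.0
t = np.array([1e-4, 1e-2, 1e-1, 1.0, 5.0])

alpha = np.exp(-beta * t)      # alpha_t = exp(-int_0^t beta_s ds)
sigma2 = 1.0 - alpha**2        # VP SDE: alpha_t^2 + sigma_t^2 = 1

target_var = 1.0 / sigma2      # per-coordinate variance of grad log p(x_t | x_0)
for ti, vi in zip(t, target_var):
    print(f"t = {ti:7.4f}   Var[target] per coordinate = {vi:10.2f}")
# At t = 1e-4 the variance is ~ 1 / (g_0^2 t) = 5000, so early-time
# regression targets are dominated by noise; by t = 5 it is ~ 1.
```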
Meanwhile, it is clear that it is relatively easy to estimate the score function at large times, since it is increasingly true that \( p(x_t) \approx p(x_t | x_0) \) as \( t \to \infty \). We are not the first to identify this issue [Nguyen et al., 2017; Dhariwal & Nichol, 2021]. For example, [Chao et al., 2022] call this the score mismatch issue and propose a particular method for mitigating it. We take a different perspective here; we view this property not as a bug, but as a feature that may help diffusion models generalize. Time sampling and variance normalization. In order to mitigate the large variance issue, practitioners usually do two things: (i) choose a special time sampling distribution \( \lambda(t) \), and (ii) assume a score estimator with a certain \( \sigma_t \)-dependent prefactor. In particular, usually the distribution \[ \lambda_\star(t) = \frac{\sigma_t^2}{\int_0^{t_{\text{max}}} \sigma_s^2 \, ds} = \frac{\sigma_t^2}{Z_\sigma} \tag{7} \] is used [Song et al., 2021; Karras et al., 2022], and we often take \( \hat{s}_\theta(x_t, t) = \epsilon_\theta(x_t, t) / \sigma_t \). Then \[ J_1(\theta) = \frac{1}{2Z_\sigma} \int \| \epsilon_\theta(\alpha_t x_0 + \sigma_t \epsilon, t) - \epsilon \|_2^2 N(\epsilon; 0, I) p(x_0) \, d\epsilon dx_0 dt. \tag{8} \] These choices are made by Stable Diffusion [Rombach et al., 2022], and similar choices are recommended by [Karras et al., 2022]. 3 GENERALIZATION AND SAMPLING VARIANCE: INTUITION In supervised learning settings, “generalization” usually means predicting the value of a function on unseen inputs. It is critical to note that we mean something different when we refer to the ability of diffusion models to generalize. Real training data typically consists of $M \geq 1$ examples (e.g., images), which together define a mixture distribution: $$p(x_0) = \frac{1}{M} \sum_{m=1}^{M} \delta(x_0 - \mu_m)$$ $$p(x_t) = \frac{1}{M} \sum_{m=1}^{M} N(x_t; \alpha_t \mu_m, \sigma_t^2 I)$$ (9) where $\delta$ is the Dirac delta function. For such a distribution, we can straightforwardly compute that $$s(x_t, t) = \frac{1}{Mp(x_t)} \sum_{m=1}^{M} \left( \frac{\alpha_t \mu_m - x_t}{\sigma_t^2} \right) N(x_t; \alpha_t \mu_m, \sigma_t^2 I).$$ (10) What we mean by “generalization” is that our score estimator learns something different than this score function. Reverse diffusion with this score—the ‘empirical’ score—produces a sample from $p(x_0)$, i.e., one of the $M$ training examples. What we would like instead is to generate samples similar to, but somewhat different from, those training examples. How might this be possible? Given a large number of samples $(t^{(k)}, x_0^{(k)}, x_t^{(k)}) \sim \lambda(t)p(x_0)p(x_t|x_0)$, we expect that a sufficiently expressive score estimator trained using a procedure like DSM is unbiased (since its optimum is the true score), i.e., that $\mathbb{E}[s_\theta(x_t, t)] = s(x_t, t)$, where the expectation is taken over sample realizations. The distribution learned by the diffusion model (obtained by reverse diffusion using the score estimator) can be written as $q(x_0|\theta) = \int q(x_0|x_T, \theta)p(x_T) dx_T$, where $p(x_T)$ is the distribution of the initial sample, and $q(x_0|x_T, \theta)$ describes how that sample changes due to reverse diffusion. The function $q(x_0|x_T, \theta)$ has a path integral representation (see Appendix B): $$q(x_0|x_T, \theta) = \int D[p(t)]D[x(t)] \exp \left\{ \int_0^{t_{max}} ip(t) \cdot \left[ \dot{x}(t) + \beta_t x_t + \frac{1}{2} g_t^2 s_\theta(x_t, t) \right] dt \right\}. 
$$ (11)

If we take an expectation over sample realizations and parameter initializations, and hence compute the distribution 'typically' learned by an unbiased diffusion model, we obtain
\[
\mathbb{E}[q(x_0|x_T, \theta)] = \int D[p(t)] D[x(t)] \exp \{ M_1 + M_2 + \cdots \}
\]
\[
M_1 := \int_0^{t_{\text{max}}} i p(t) \cdot \left[ \dot{x}(t) + \beta_t x_t + \frac{1}{2} g_t^2 s(x_t, t) \right] dt
\]
\[
M_2 := -\frac{1}{2} \int_0^{t_{\text{max}}} \int_0^{t_{\text{max}}} \left( \frac{g_t^2}{2} \cdot \frac{g_{t'}^2}{2} \right) p(t)^T \, \text{Cov}(\hat{s}_\theta(x_t, t), \hat{s}_\theta(x_{t'}, t')) \, p(t') \, dt \, dt'.
\]
The first term, \( M_1 \), is a 'mean' term which by itself corresponds to integrating the probability flow ODE. The second term, \( M_2 \), is a 'variance' term, which together with \( M_1 \) represents SDE dynamics with noise that is generically correlated across different states and times. One way for models to generalize is if \( M_2 \) is not negligible, since the added variance effectively produces a 'smeared out' version of the training distribution. Of course, we prefer particular smearings over others; for example, instead of smearing out each training data point independently of the others, we might prefer that the regions between data points receive additional probability. But since constructing the score estimator involves averaging \( N \gg 1 \) samples, its covariance typically goes like \( 1/N \). In order for generalization to occur (in the absence of other effects, like early stopping) in the large \( N \) limit, we need the covariance matrix to be \( O(1) \), and hence somewhat singular. Our claim is that this happens for the DSM objective when certain choices are made, but not for the naive objective. Intuitively, this is due to the high variance of the score target used. In the next section, we will make this intuition mathematically precise.

4 MAIN THEORETICAL RESULTS

We are now ready to state our main theoretical results. For reasons of mathematical tractability, we consider a score estimator that is linear in features (but not necessarily in \( x_t \) or \( t \)!):
\[
\hat{s}_\theta(x_t, t) = \frac{1}{\sigma_t} [w_0 + W \phi(x_t, t)] \tag{12}
\]
where the feature maps \( \phi = (\phi_1, \ldots, \phi_F)^T \) are smooth functions from \( \mathbb{R}^D \times [0, t_{\text{max}}] \) to \( \mathbb{R} \) that are square-integrable with respect to \( p(x_t | x_0) p(x_0) \lambda(t) / \sigma_t^2 \) and \( p(x_t | x_0) p(x_0) \) for all \( t \). The parameters to be estimated are \( \theta := \{w_0, W\} \), with \( w_0 \in \mathbb{R}^D \) and \( W \in \mathbb{R}^{D \times F} \). We may abuse notation and write \( \hat{s}_\theta = W \phi \), defining \( W_{i0} := w_{0,i} \) and \( \phi_0 := 1 \) to absorb the constant term.

Denote the distribution of reverse diffusion outputs (see Eq. 11) by \( q(x_0 | x_T, \theta) \), the optimal parameters by \( \theta^* = \{w_0^*, W^*\} \), the optimal score estimator by \( s^*(x_t, t) \), and the result of reverse diffusion using the optimal estimator by \( q_*(x_0) \). What is the distribution \( q \) 'typically' learned? In order to state our main result, it is useful to define the kernel matrices
\[
\bar{K}_{ij} := \mathbb{E}_{t, x_t | t} \left[ \phi_i(x_t, t) \phi_j(x_t, t) / \sigma_t^2 \right] \tag{13}
\]
\[
\bar{K}_{ij}(0) := \mathbb{E}_{x_0} \left[ \phi_i(x_0, 0) \phi_j(x_0, 0) \right]. \tag{14}
\]

**Theorem 1 (Linear score estimators trained via DSM asymptotically generalize)** Suppose the parameters of a linear score estimator (Eq. 12) are optimized according to the DSM objective (Eq. 5) using \( N \) independent samples from \( \lambda(t) p(x_0) p(x_t | x_0) \) (see Eq.
7). Consider the result of reverse diffusion using this estimator by Euler-integrating the probability flow ODE (Eq. 2) with a small time step \( \Delta t \). If \( N \to \infty \) and \( \Delta t \to 0 \) with \( N \Delta t = c \gg 1 \) held constant, then sampling from \( \mathbb{E}[q(x_0 | x_T, \theta)] \) is approximately equivalent to simulating the backwards-time (Ito-interpreted) SDE
\[
\dot{x}_t = -\beta_t x_t - \frac{1}{2} g_t^2 s^*(x_t, t) + \xi(x_t, t) \tag{15}
\]
from \( t = t_{\text{max}} \) to \( t = 0 \) with initial condition \( x(t_{\text{max}}) = x_T \). The noise term \( \xi(x_t, t) \) has mean zero, and is generically correlated across different states and times according to
\[
\text{Cov}_{t, t', x_t | t, x_{t'} | t'} [\xi_i(x_t, t), \xi_j(x_{t'}, t')] = V_{ij}(x_t, t, x_{t'}, t') \tag{16}
\]
where we define the \( D \times D \) "V kernel" \( V \) via
\[
V_{ij} := \frac{\delta_{ij}}{g_0^2 Z_\sigma c} \left( \frac{g_t^2}{2\sigma_t} \cdot \frac{g_{t'}^2}{2\sigma_{t'}} \right) \phi(x_t, t)^T \bar{K}^{-1}(0) \bar{K}^{-1}(0) \phi(x_{t'}, t'). \tag{17}
\]
See Appendix C for the proof. One important corollary follows from the details of the argument:

**Corollary 1.1 (Linear score estimators trained via the naive objective do not generalize)** Consider the situation described in Theorem 1, but assume the parameters are instead optimized according to the naive objective (Eq. 4). We instead have \( \mathbb{E}[q(x_0 | x_T, \theta)] = q_*(x_0 | x_T) \).

The argument also provides insight about the requirements for generalization given DSM training:

**Corollary 1.2 (Generalization requires the time sampling distribution to undersample small times)** Consider the situation described in Theorem 1, but assume the sampling distribution \( \lambda(t) \) used has
\[
\lim_{\Delta t \to 0} \frac{\lambda(\Delta t)}{\Delta t} = 0. \tag{18}
\]
We instead have \( \mathbb{E}[q(x_0 | x_T, \theta)] = q_*(x_0 | x_T) \).

4.1 SPECIAL CASES

Some special cases can be worked out. Of particular interest is the case of gradient-descent-trained neural networks in the neural tangent kernel (NTK) regime (Jacot et al., 2018; Bietti & Mairal, 2019), where learning is 'lazy' (Chizat et al., 2019) in the sense that weights do not move much from their initial values. Assume such a neural network parameterizes \( \epsilon_\theta(x_t, t) = \sigma_t \hat{s}_\theta(x_t, t) \). For such networks, since
\[
\epsilon(x_t, t; \theta) \approx \epsilon(x_t, t; \theta_0) + \frac{\partial \epsilon(x_t, t; \theta_0)}{\partial \theta_0} (\theta - \theta_0),
\]
i.e., the learned parameters \( \theta \) are not far from the initial parameters \( \theta_0 \), we are in the linear regime described by Eq. 12 and Theorem 1 holds.

**Corollary 1.3 (NTK-regime neural networks trained via DSM asymptotically generalize)** Consider the situation described in Theorem 1, except that \( \epsilon_\theta(x_t, t) \) is a gradient-descent-trained neural network in the NTK regime. Then the conclusion of Theorem 1 still holds. (We assume normally distributed initial weights with the typical layer-width scaling. The infinite width limit must be taken before the \( N \to \infty \) limit.)

The feature maps are usually difficult to write down explicitly (and in this context, it is more convenient to work with them than the NTK), but there are methods to construct them (see, e.g., Bietti & Mairal (2019)). Another case of interest is the one discussed in the previous section: when the training distribution consists of \( M \) examples, so that \( p(x_0) \) is a mixture of delta functions.
Near \( t = 0 \), the score function produces an infinitely strong 'attractive force' towards one of the examples, so one might not expect the additional variance we have discussed to be much help. But it turns out that reproducing the training data is avoided, since the V kernel is also singular near \( t = 0 \); this effectively adds a 'convolution' step to the end of reverse diffusion.

**Corollary 1.4 (Mixture training set produces original distribution convolved with Gaussian kernel)** Consider the situation described in Theorem 1, except that \( p(x_0) \) is a mixture of \( M \) delta functions centered on training data \( \{ \mu_m \}_{m=1,\ldots,M} \). We have that
\[
\mathbb{E}[q(x_0|\theta)] = \sum_{m=1}^{M} w_m N(x_0; \mu_m, V(\mu_m)) \tag{19}
\]
where \( w_1 + \cdots + w_M = 1 \), but the weights are not necessarily the same as those of the original training distribution. In this case, the V kernel has the special form
\[
V(y) := \frac{Z_\sigma}{4c} \left[ 1 + (\phi(y, 0) - \mu_\phi)^T \Sigma_\phi^{-1} (\phi(y, 0) - \mu_\phi) \right] I \tag{20}
\]
\[
\mu_\phi := \mathbb{E}_{x_0} [\phi(x_0, 0)], \qquad \Sigma_\phi := \mathbb{E}_{x_0} [\phi(x_0, 0)\phi(x_0, 0)^T] - \mu_\phi \mu_\phi^T.
\]

Finally, a somewhat artificial but simple special case assumes that the parameters of the score function for different times are learned independently, e.g., via a sample-splitting scheme. In this case, \( \mathbb{E}[q(x_0|\theta)] \) is equivalent to running reverse diffusion with the optimal score and then convolving the result with the \( (t = 0) \) V kernel. If the empirical score is learned perfectly, this has the effect of 'smearing out' the original data distribution.

5 INTERPRETING THE V KERNEL

A priori, it is unclear how to think about the potential utility (if any) of the V kernel. In this section, we attempt to build intuition by examining several of its properties. We focus on the special case in which score function parameters for different times are learned independently.

5.1 EFFECT OF THE KERNEL ON MEAN AND VARIANCE

Let \( \mu_* \) and \( \Sigma_* \) denote the mean and covariance of the optimal distribution \( q_* \). A typical learned distribution \( q \) has the same mean as \( q_* \), since
\[
\int x_0 N(x_0; y, V(y)) q_*(y) \, dx_0 \, dy = \int y \, q_*(y) \, dy = \mu_*. \tag{21}
\]
On the other hand, it will have more variance than \( q_* \), since
\[
\int x_0 x_0^T N(x_0; y, V(y)) q_*(y) \, dx_0 \, dy = \int \left\{ y y^T + \frac{Z_\sigma}{4c} \left[ 1 + (\phi(y) - \mu_\phi)^T \Sigma_\phi^{-1} (\phi(y) - \mu_\phi) \right] I \right\} q_*(y) \, dy = \Sigma_* + \mu_* \mu_*^T + \frac{Z_\sigma}{4c} (1 + D) I. \tag{22}
\]

5.2 EXAMPLE: LINEAR FEATURES

Suppose that the score is estimated using linear features, i.e., \( \phi = (1, -x_1, \ldots, -x_D)^T \), so that \( F = D \). The optimal distribution is \( q_* = N(\mu, \Sigma) \), where \( \mu \) and \( \Sigma \) denote the sample mean and covariance. The V kernel is
\[
V(y) = \frac{Z_\sigma}{4c} \left[ 1 + (y - \mu)^T \Sigma^{-1} (y - \mu) \right] I. \tag{23}
\]
Note that the \( y \)-dependent term is small if \( y \) is close to the mean, and large if \( y \) is at least one standard deviation away from it along some state-space direction. A typical learned \( q \) can be sampled by drawing \( y \sim q_* \) and then adding noise with covariance \( V(y) \), as in the sketch below. It is worth noting that convolving \( q_* \) with the V kernel does not generally produce another distribution from the same family; in this case, a typical \( q \) is not Gaussian.
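The following sketch makes this concrete in two dimensions with placeholder values of \( Z_\sigma \), \( c \), and \( \Sigma \): it samples a typical learned \( q \) by drawing \( y \sim q_* \) and adding noise with covariance \( V(y) \), then numerically checks Eqs. 21 and 22.

```python
# Sample from a 'typical' learned q in the linear-features case (Eq. 23):
# draw y ~ q_* = N(mu, Sigma), then add Gaussian noise with the
# state-dependent covariance V(y) = (Z_sigma / 4c) [1 + quad(y)] I.
# Heavier noise far from the mean is what makes q non-Gaussian.
import numpy as np

rng = np.random.default_rng(0)
D, Zsigma, c = 2, 1.0, 50.0          # placeholder values
mu = np.zeros(D)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
Sigma_inv = np.linalg.inv(Sigma)
L = np.linalg.cholesky(Sigma)

def sample_q(n):
    y = mu + rng.normal(size=(n, D)) @ L.T                      # y ~ q_*
    quad = np.einsum('ni,ij,nj->n', y - mu, Sigma_inv, y - mu)  # (y-mu)^T Sigma^-1 (y-mu)
    v = (Zsigma / (4 * c)) * (1.0 + quad)                       # V(y) = v(y) * I
    return y + np.sqrt(v)[:, None] * rng.normal(size=(n, D))

x = sample_q(100_000)
# Mean is preserved (Eq. 21); variance is inflated by (Zsigma/4c)(1+D) I (Eq. 22):
print(x.mean(axis=0))    # ~ mu
print(np.cov(x.T))       # ~ Sigma + (Zsigma / 4c) * (1 + D) / D-free constant on the diagonal
```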
One way to see this analytically is via the characteristic function:
\[
\psi(u) = \int e^{iu \cdot x_0} N(x_0; y, V(y)) q_*(y) \, dx_0 \, dy = \int e^{iu \cdot y - \frac{Z_\sigma}{8c} (u \cdot u) \left[ 1 + (y - \mu)^T \Sigma^{-1} (y - \mu) \right]} q_*(y) \, dy = \frac{e^{-\frac{Z_\sigma}{8c} u \cdot u}}{\left[ 1 + \frac{Z_\sigma}{4c} u \cdot u \right]^{D/2}} \exp \left\{ iu \cdot \mu - \frac{u^T \Sigma u}{2} \cdot \frac{1}{1 + \frac{Z_\sigma}{4c} u \cdot u} \right\}. \tag{24}
\]
In particular, the log-characteristic function is not quadratic in \( u \), but involves higher-order terms that depend on powers of \( 1/c \).

5.3 EXAMPLE: ORTHOGONAL FEATURES

Assume that the estimator uses features which are orthogonal with respect to the data, in the sense that the covariance matrix \( \Sigma_\phi \) is diagonal. Then the V kernel involves a sum of squared feature norms weighted by inverse feature variances. This suggests the same intuition as in the preceding Gaussian example: a large amount of noise is added to regions of state space where features are far from their 'typical' values, and a small amount of noise is added to regions of state space where features are typical.

An interesting special case is when features correspond to non-overlapping bins with the same amplitude (1, say). In that case, the contribution of the \( i \)-th bin to the V kernel, evaluated at points falling in that bin, is proportional to
\[
\frac{1 - p_i}{p_i}, \tag{25}
\]
where \( p_i \) is the probability mass in the training data captured by the \( i \)-th bin. More noise is added where bins capture less of the overall probability. In the case where each data point is associated with exactly one bin, the same amount of noise is added to each of those points.

5.4 EXAMPLE: GAUSSIAN MIXTURE FEATURES

A Gaussian mixture with \( M \) mixture components has
\[
p(x_0) = \sum_{m=1}^{M} w_m N(x_0; \mu_m, \Sigma_m), \qquad s(x_0) = \frac{\sum_{m=1}^{M} w_m \Sigma_m^{-1} (\mu_m - x_0) N(x_0; \mu_m, \Sigma_m)}{\sum_{r=1}^{M} w_r N(x_0; \mu_r, \Sigma_r)}. \tag{26}
\]
Attempting to estimate all parameters of a Gaussian mixture yields a problem that is not linear in the sense we consider in this paper; however, one can study an analogous linear problem by defining features
\[
\phi_m(x) = \frac{\mu_m - x}{\sigma^2} \cdot \frac{N(x; \mu_m, \sigma^2 I)}{\sum_r N(x; \mu_r, \sigma^2 I)} \tag{27}
\]
and fitting the score estimator
\[
\hat{s}_\theta(x_t, t) = W_0(t) + \sum_{m=1}^{M} W_m(t) \phi_m(x_t) \tag{28}
\]
to a ground truth distribution \( p(x_0) = \frac{1}{M} \sum_{m=1}^{M} N(x_0; \mu_m, \sigma^2 I) \). In this case, the matrix \( \Sigma_\phi \) whose inverse appears in the V kernel has a special meaning: since \( \phi_m(x_0) = -\nabla_{\mu_m} \log p(x_0) \), the \( \phi \) covariance matrix is precisely the Fisher information matrix associated with the ground truth distribution. Given that the Fisher information matrix fundamentally bounds how well score function parameters can be estimated, the V kernel convolution appears to apply additional 'smearing' to observations in regions of state space where the score function is insensitive to small changes in its parameters, and less 'smearing' in regions where it is highly sensitive to small parameter changes.

5.5 GENERAL ROLE OF THE V KERNEL

In general, it may be useful to think of the V kernel as implementing particular inductive biases which may be useful for generalization. One is the bias that features tend to take typical values, and that data for which this is not true should be considered less reliable.
5.5 General Role of the V Kernel

In general, it may be useful to think of the V kernel as something which implements particular inductive biases which may be useful for generalization. One is the bias that features tend to take typical values, and that data for which this is not true should be considered less reliable. Another is that the structure of the feature space used by the estimator somehow reflects the true data distribution; for example, assuming non-overlapping bins yields a different kind of kernel (where different points are treated identically) than assuming overlapping ones (where points can affect one another). Allowing data points to interact by having at least one feature take a non-negligible value on both may be important for, e.g., interpolation.

6 DISCUSSION AND CONCLUSION

We were able to show mathematically that, contrary to what one might expect, there is a sense in which diffusion models tend to learn something other than the optimum of the objective they are trained on; moreover, we were able to show that the learned distribution is mathematically equivalent to convolving the optimal distribution with a particular kernel function. The defining property of the kernel function is that it adds a large amount of noise to data in regions where at least one feature direction is somewhat larger than expected, and a small amount of noise in regions where features take more typical values (relative to their variances). From the perspective of constructing something like a kernel density estimate that generalizes well, this makes some intuitive sense. We ought to convolve data points with a kernel that corresponds to what we would expect to sample from that region of state space, if we were to sample from it again. Regions far from the mean are low in probability, and observations there should be 'smeared out' substantially; meanwhile, regions close to the mean are more reliable, and should not be.

There are a number of limitations of the current work related to scope: (i) our calculation assumes an estimator that is linear in its features; (ii) we only consider unconditional models; (iii) no attention mechanism is included; and (iv) we do not consider learning dynamics. Nonetheless, it is our hope that this calculation provides a foundation for others to rigorously understand the inductive biases of diffusion models.

REFERENCES

Alberto Bietti and Julien Mairal. On the inductive bias of neural tangent kernels. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/c4ef9c39b300931b69a36fb3dbb8d60e-Paper.pdf.

Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1024–1034. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/bordelon20a.html.

Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nature Communications, 12(1):2914, May 2021. ISSN 2041-1723. doi: 10.1038/s41467-021-23103-1. URL https://doi.org/10.1038/s41467-021-23103-1.

Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Yi-Chen Lo, Chia-Che Chang, Yu-Lun Liu, Yu-Lin Chang, Chia-Ping Chen, and Chun-Yi Lee. Denoising likelihood score matching for conditional score-based data generation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=LcF-EEt8cCC.
Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/ae614c557843b1df326cb29c57225459-Paper.pdf.

Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=AAWuCvzaVt.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/5a4be1fa34e62bb8a6ec6b91d2462f5a-Paper.pdf.

Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=k7FuTOWMOC7.

Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. arXiv preprint arXiv:2202.09778, 2022.

Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2256–2265. PMLR, 2015.

Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/3001ef257407d5a371a96dcd947c7d93-Paper.pdf.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=PxTIGl2RRHS.

Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661–1674, 2011. doi: 10.1162/NECO_a_00142.
nW0sCc3LLN
Similarly, other works on generative MI have previously shown the opposite results to what you share here [3] - when such mitigation (in that case split learning) is present, it is actually beneficial for the attacker to use a smaller section of the model (due to the ease of reconstruction).
Model Inversion Robustness: Can Transfer Learning Help?

Anonymous authors Paper under double-blind review

Abstract

Model Inversion (MI) attacks aim to reconstruct private training data by abusing access to machine learning models. Contemporary MI attacks have achieved impressive attack performance, posing serious threats to privacy. Meanwhile, all existing MI defense methods rely on regularization that is in direct conflict with the training objective, resulting in noticeable degradation in model utility. In this work, we take a different perspective and propose a novel and simple method based on transfer learning (TL) to render MI-robust models. Particularly, by leveraging TL, we limit the number of layers encoding sensitive information from the private training dataset, thereby degrading the performance of MI attacks. We conduct an analysis using Fisher Information to justify our method. Our defense is remarkably simple to implement. Without bells and whistles, we show in extensive experiments that our method achieves state-of-the-art (SOTA) MI robustness. Our code, pre-trained models, demo and inverted data are included in Appx.

1 Introduction

Model Inversion (MI) attack is a type of privacy threat that aims to reconstruct private training data by exploiting access to machine learning models. State-of-the-art (SOTA) MI attacks (Zhang et al., 2020; Chen et al., 2021; Wang et al., 2021a; Nguyen et al., 2023) have demonstrated increased sophistication and effectiveness, achieving attack performance of over 90% in face recognition benchmarks. The implications of this vulnerability are particularly concerning in security-critical applications (Meng et al., 2021; Guo et al., 2020; Huang et al., 2020; Schroff et al., 2015; Dufumier et al., 2021; Yang et al., 2022; Dippel et al., 2021; Chang et al., 2020; Krishna et al., 2019). The aim of our work is to propose a new perspective to defend against MI attacks and to improve MI robustness. In particular, MI robustness pertains to the tradeoff between MI attack accuracy and model utility. MI robustness involves two critical considerations. Firstly, an MI-robust model should demonstrate a significant reduction in MI attack accuracy, making it difficult for adversaries to reconstruct private training samples. Secondly, while defending against MI attacks, the natural accuracy of an MI-robust model should remain competitive. A model with improved MI robustness ensures that it is resilient to MI while maintaining its utility.

Research gap. Despite the growing threat arising from SOTA MI, there are limited studies on defending against MI attacks and improving MI robustness. Conventionally, differential privacy (DP) is used for ensuring the privacy of individuals in datasets. However, DP has been shown to be ineffective against MI (Fredrikson et al., 2014; Zhang et al., 2020; Wang et al., 2021b). Meanwhile, a few MI defense methods have been proposed. Particularly, all existing SOTA MI defense methods are based on the idea of dependency minimization regularization (Wang et al., 2021b; Peng et al., 2022): they introduce additional regularization into the training objective, with the goal of minimizing the dependency between the input and the output/latent representation. The underlying idea of these works is to reduce the correlation between input and output/latent, which MI attacks exploit during the inversion.
However, reducing the correlation between input and output/latent directly undermines the accuracy of the model, resulting in considerable degradation in model utility (Wang et al., 2021b). To partially restore the model utility, BiDO (Peng et al., 2022) proposes to further introduce another regularization to compensate for the reduced correlation between input and latent. However, with two additional regularizations alongside the original training objective, BiDO requires significant effort in hyperparameter tuning based on intensive grid search (Peng et al., 2022), and is sensitive to small changes in hyperparameters (see our analysis in Appx. C).

Figure 1: (I) Our proposed MI defense (Sec. 3). Based on the standard TL framework with pre-training (on a public dataset) followed by fine-tuning (on a private dataset), we propose a simple and highly effective method to defend against MI attacks. Our idea is to limit fine-tuning with the private dataset to a specific number of layers, thereby limiting the encoding of private information to these layers only (pink). Specifically, we propose to perform fine-tuning only on the last several layers. (II) Analysis of layer importance for the classification task and the MI task (Sec. 4.2). For the first time, we analyze the importance of target model layers for MI. For a model trained with conventional training, we apply FI and find that the first few layers of the model are important for MI. Meanwhile, FI analysis suggests that the last several layers are important for a specific classification task, consistent with the TL literature (Yosinski et al., 2014). This supports our hypothesis that preventing the fine-tuning of the first few layers on the private dataset could degrade MI significantly, while the impact on classification could be small. Overall, this leads to improved MI robustness. (III) Empirical validation (Sec. 4.3). The sub-figures clearly show that, at the same natural accuracy, lower MI attack accuracy can be achieved by reducing the number of parameters fine-tuned with the private dataset. (IV) Comparison with SOTA MI defenses (Sec. 4.4). Without bells and whistles, our method achieves SOTA MI robustness. The visual quality of MI-reconstructed images from our model is inferior. A user study confirms this finding. Extensive experiments can be found in Sec. 4.5. Best viewed in color with zooming in.

In this paper, our main hypothesis is that a model with fewer parameters encoding sensitive information from the private training dataset ($D_{priv}$) could achieve better MI robustness. Based on that, we propose a novel transfer learning (TL) perspective to defend against MI attacks (Fig. 1). Leveraging the standard two-stage TL framework (Pan & Yang, 2010; Yosinski et al., 2014), with pre-training on a public dataset as the first stage and fine-tuning on a private dataset as the second stage, we propose to limit private-dataset fine-tuning to a specific number of layers. Specifically, in the second stage, we perform private-dataset fine-tuning only on the last several layers of the model. The first few layers are frozen during the second stage, preventing private information from being encoded in these layers. We hypothesize that by reducing the number of parameters fine-tuned with the private dataset, we can reduce the amount of private information encoded in the model, making it more difficult for adversaries to reconstruct private training data. To justify our design, we conduct, for the first time, an analysis of model layer importance for the MI task.
We propose to apply Fisher Information (FI) to quantify the importance of individual layers for MI (Kirkpatrick et al., 2016; Li et al., 2020). Our analysis suggests that the first few layers are important for MI. Therefore, by preventing private information from being encoded in the first few layers, as in our proposed method, we can degrade MI significantly. Meanwhile, during pre-training, the first few layers learn low-level information (edges, colour blobs). It is known that low-level information is generalizable across datasets (Yosinski et al., 2014). Therefore, our proposed method incurs only a small degradation in model utility. Overall, our proposed TL-based defense can achieve SOTA MI robustness. We remark that our method is very easy to implement. In our experiments, we apply our method to a range of models (CNNs, vision transformers); see Sec. 4.5. In contrast, BiDO has been applied only to VGG16 and ResNet-34 (Peng et al., 2022). Our contributions are:

• We propose a simple and highly effective MI defense based on TL. Our idea is a novel and major departure from existing MI defenses based on dependency minimization regularization. Furthermore, while the majority of TL work focuses on improving model accuracy (Pan & Yang, 2010; Jiang et al., 2022), our work focuses on degrading MI attack accuracy via TL.
• We conduct the first study to analyze layer importance for the MI task via Fisher Information. Our analysis results suggest that the first few layers are important for MI, justifying our design of preventing private information from being encoded in the first few layers.
• We conduct an empirical analysis to validate that lower MI attack accuracy can be achieved by reducing the number of parameters fine-tuned with the private dataset. Our analysis carefully removes the influence of natural accuracy on MI attack accuracy.
• We conduct comprehensive experiments to show that our proposed method achieves SOTA MI robustness. As our method is remarkably easy to implement, we extend our experiments to a wide range of model architectures, such as vision transformers (Tu et al., 2022), whose MI robustness has not been studied before.

2 BACKGROUND

The target model $T$ is trained on a private training dataset $D_{\text{priv}} = \{(x_i, y_i)\}_{i=1}^N$, where $x_i \in \mathbb{R}^{d_x}$ is the facial image and $y_i \in \{0, 1\}^K$ is the identity. The target classifier $T$ is a $K$-way classifier $T: \mathbb{R}^{d_x} \rightarrow \mathbb{R}^K$ with parameters $\theta_T \in \mathbb{R}^{d_\theta}$. Under white-box MI, the adversary has access to $T$ and the $K$-dim vector of soft outputs $T(x)$. The classifier parameters $\theta_T$ are optimized using the main objective $L$, the Cross Entropy loss $\mathbb{E}[-\log p(y_i|x_i)]$.

Model Inversion Attack. In MI attacks, an adversary exploits a target model $T$ trained on a private dataset $D_{\text{priv}}$. However, $D_{\text{priv}}$ should not be disclosed. The main goal of MI attacks is to extract information about the private samples in $D_{\text{priv}}$. The existing literature formulates MI attacks as a process of reconstructing an input $x$ that $T$ is likely to classify into the preferred class (label) $y$. This study primarily focuses on white-box MI attacks, which are the most dangerous and can achieve impressive attack accuracy since the attacker has complete access to the target model. For high-dimensional data like facial images, the reconstruction problem is challenging.
To mitigate this issue, SOTA MI techniques suggest reducing the exploration area to the meaningful and pertinent image manifold using a GAN. Eq. 1 generalizes the inversion step of existing SOTA white-box MI attacks (Zhang et al., 2020; Chen et al., 2021; An et al., 2022; Struppek et al., 2022; Nguyen et al., 2023). The details of the SOTA MI attacks can be found in Appx. D.3.

$$w^* = \arg \min_w \left(-\log P_T(y|G(w)) + \lambda L_{\text{prior}}(w)\right) \quad (1)$$

where $-\log P_T(y|G(w))$ denotes the identity loss of the MI attack, which guides the reconstruction $x = G(w)$ to be the input most likely to be classified as class $y$ by $T$. $G$ refers to the generator that produces the reconstructed data $x$ from a latent vector $w$. $L_{\text{prior}}$ is the prior loss, which makes use of public information to learn a distributional prior through a GAN; this prior is used to guide the inversion process towards reconstructing meaningful images. The hyper-parameter $\lambda$ balances the prior loss and the identity loss.

Model Inversion Defense. In contrast, MI defense aims at minimizing the disclosure of training samples during the MI optimization process. The first MI-specific defense strategy is MID (Wang et al., 2021b), which adds a regularization \( d(x, T(x)) \) to the main objective during the target classifier's training to penalize the mutual information between inputs \( x \) and outputs \( T(x) \). Another approach is Bilateral Dependency Optimization (BiDO) (Peng et al., 2022), which minimizes \( d(x, f) \) to reduce the amount of information about inputs \( x \) embedded in feature representations \( f \), while maximizing \( d(f, y) \) to provide \( f \) with enough information about \( y \) to restore the natural accuracy. However, both MID and BiDO suffer from the drawback that their regularization, i.e., \( d(x, T(x)) \) for MID and \( d(x, f) \) for BiDO, conflicts with the main training objective, resulting in an explicit trade-off between MI robustness and model utility. BiDO improves this trade-off with \( d(f, y) \) but is hyperparameter-sensitive due to the joint optimization of three objectives, making it difficult to apply. In other words, MID and BiDO reduce MI attack accuracy by suppressing the likelihood \( P(y|x) \). This leads to an inevitable degradation in classification, where a high likelihood \( P(y|x) \) is favorable.
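To make Eq. 1 concrete, the generic white-box inversion loop can be sketched as below. This is an illustrative PyTorch sketch, not any specific attack's implementation: `G` and `T` are assumed to be a pretrained public generator and the target classifier, and the default `prior_fn` (a simple ℓ2 penalty on the latent) merely stands in for the GAN-based \( L_{\text{prior}} \) terms the attacks above actually use.

```python
import torch
import torch.nn.functional as F

def mi_attack(G, T, target_class, latent_dim,
              prior_fn=lambda w: w.square().mean(),  # stand-in for L_prior
              lam=100.0, steps=3000, lr=0.02):
    # Optimize the latent w so that G(w) is classified as target_class (Eq. 1).
    w = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x = G(w)                                            # candidate reconstruction
        identity_loss = F.cross_entropy(T(x), torch.tensor([target_class]))
        loss = identity_loss + lam * prior_fn(w)            # -log P_T(y|G(w)) + lam * L_prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(w).detach()
```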
3 Proposed Method: MI Defense via Transfer Learning

Transfer Learning (TL). TL (Pan & Yang, 2010; Yin et al., 2019) is an effective approach to leverage knowledge learned from a general task to enhance performance on a different task. By performing pre-training on a large general dataset and then fine-tuning on a target dataset, TL mitigates the demand for large labeled datasets while simultaneously improving generalization and overall performance. In machine learning, TL works mostly focus on improving model performance by adapting knowledge to new tasks and domains (Jiang et al., 2022; Zhuang et al., 2020).

Our proposed approach. In contrast, our work is the first to apply TL to defend against MI attacks, aiming at degrading MI attack accuracy. Therefore, our study is fundamentally different from existing TL works, which aim to improve model utility (Pan & Yang, 2010; Yang et al., 2019; Kumar et al., 2022; Kamath et al., 2019; Kolesnikov et al., 2019). Our idea is to apply TL to reduce the leakage of private information by limiting the number of parameters updated on private training data. Specifically, as illustrated in Fig. 1, we propose to train the target model \( T \) as \( T = C \circ E \) in two stages: pre-training and then fine-tuning. Particularly, in the fine-tuning stage, \( E \) comprises parameters that are frozen, i.e., not updated by the private dataset \( D_{priv} \), while \( C \) comprises parameters that are updated by \( D_{priv} \).

- **Stage 1: Pre-training with \( D_{pretrain} \)**. We first pre-train \( T \) using a dataset \( D_{pretrain} \). \( D_{pretrain} \) can be a general-domain dataset, e.g., ImageNet, or it can be from a similar domain as the private dataset \( D_{priv} \). Importantly, \( D_{pretrain} \) has no class/identity intersection with \( D_{priv} \). Both \( C \) and \( E \) are updated based on \( D_{pretrain} \) in this stage.
- **Stage 2: Fine-tuning with \( D_{priv} \)**. To adapt the pre-trained model from Stage 1 to \( D_{priv} \), we freeze \( E \), i.e., the parameters of \( E \) are unchanged, and only update \( C \) with \( D_{priv} \) (see the code sketch after Table 1).

We remark that pre-training has already been commonly adopted in previous works on MI attacks. Therefore, in many cases, our method does not incur additional overhead (Peng et al., 2022; Nguyen et al., 2023; Chen et al., 2021; Struppek et al., 2022; An et al., 2022). As an example, we consider the main setup of BiDO, where VGG16 is used as the target classifier \( T \). Following previous works on MI attacks, \( T \), including \( E \) and \( C \), is first pre-trained on \( D_{pretrain} = \) ImageNet1K (Deng et al., 2009). Then, for our method, we fine-tune \( C \) with \( D_{priv} = \) CelebA (Liu et al., 2015) while \( E \) is frozen. In contrast, for other MI defenses, both \( E \) and \( C \) are updated with \( D_{priv} \). We explore designs of \( T \) with different numbers of layers updated by \( D_{priv} \), leading to different numbers of parameters in \( C \) (\( |\theta_C| \)) updated by \( D_{priv} \). Using different \( |\theta_C| \), we limit the amount of private information encoded in the parameters of \( T \). We show that our approach improves MI robustness. Regarding hyperparameters, we determine \( |\theta_C| \) simply by choosing a split at the layer level of the deep neural network. Note that during training we use the same objective as the classification task, i.e., no change in the training objective is needed. Therefore, our method is much simpler and faster than the SOTA MI defense BiDO (Peng et al., 2022) (see Appx. C). In Sec. 4.2, we present our Fisher Information-based analysis to justify our method.

Table 1: Training procedure for "no defense", existing MI defense methods (Wang et al., 2021b; Peng et al., 2022), and our method. Stage 1 (pre-training) is commonly used in existing methods to reduce the requirement for labeled datasets. Our method takes advantage of this setup to defend against MI.

| | No Defense | Existing MI defenses | Our method |
|---|---|---|---|
| Stage 1 | Train \( T \) with standard objective on \( \mathcal{D}_{\text{pretrain}} \) | Train \( T \) with standard objective on \( \mathcal{D}_{\text{pretrain}} \) | Train \( T \) with standard objective on \( \mathcal{D}_{\text{pretrain}} \) |
| Stage 2 | Fine-tune the whole \( T \) with standard objective on \( \mathcal{D}_{\text{priv}} \) | Fine-tune the whole \( T \) with standard objective and additional dependency minimization regularization on \( \mathcal{D}_{\text{priv}} \) | Fine-tune only \( C \) with standard objective on \( \mathcal{D}_{\text{priv}} \) |
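A minimal PyTorch sketch of the two-stage recipe above (referenced in the Stage 2 bullet) follows; the ResNet-18 backbone, the choice to unfreeze only `layer4` and `fc`, and the identity count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from torchvision import models

# Stage 1: start from a model pre-trained on D_pretrain (ImageNet1K here).
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 1000)  # K private identities (assumed)

# Stage 2: freeze E (early layers); fine-tune only C (last layers) on D_priv.
finetuned_prefixes = ("layer4", "fc")  # assumed split point; sets |theta_C| at layer level
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(finetuned_prefixes)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01, momentum=0.9)
# Training then proceeds with the unchanged cross-entropy objective on D_priv.
```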
4 Exploring MI Robustness via Transfer Learning

We introduce the experimental setup in Sec. 4.1. In Sec. 4.2, we provide the first analysis of layer importance for the MI task via Fisher Information, suggesting that earlier layers are important for MI. Then, Sec. 4.3 empirically validates that MI robustness is obtained by reducing the number of parameters fine-tuned with the private dataset. With these understandings established, we compare our proposed method with current SOTA MI defenses (Wang et al., 2021b; Peng et al., 2022) in Sec. 4.4. Additionally, since our method offers higher practicality than the SOTA MI defenses, we expand the scope of MI defense evaluation to 21 MI attack setups in Sec. 4.5 and Appx. A, spanning 8 architectures, 4 private datasets $\mathcal{D}_{\text{priv}}$, 3 public datasets $\mathcal{D}_{\text{pub}}$, and 7 MI attacks. While the above sections assume a consistent pre-training dataset $\mathcal{D}_{\text{pretrain}}$ for the target classifier to ensure fair comparison with existing works, we also provide a novel analysis of the effect of various $\mathcal{D}_{\text{pretrain}}$ on MI robustness. We observe that less similarity between the pre-training and private dataset domains can improve defense effectiveness. The details of this analysis can be found in Appx. A.3.

4.1 Experimental Setup

To ensure a fair comparison, our study strictly follows the setups of the SOTA MI defense method BiDO (Peng et al., 2022) in datasets, attack methods, and network architectures. Furthermore, we also examine our defense approach with additional new datasets, recent MI attack models, and new network architectures. Note that these have not been included in BiDO. All the MI setups in our study are summarized in Tab. 2. The details of the setups can be found in Appx. D.

MI Defense Baselines. In order to showcase the efficacy of our proposed method, we compare our MI defense approach with several existing SOTA model inversion defense methods: BiDO-COCO, BiDO-HSIC (Peng et al., 2022), and MID (Wang et al., 2021b).

Evaluation Metrics. Following previous MI defense/attack works, we adopt natural accuracy (Acc), Attack Accuracy (AttAcc), K-Nearest Neighbors Distance (KNN Dist), and $\ell_2$ distance metrics to evaluate MI robustness. Moreover, we also provide qualitative results and a user study in Appx. G.

4.2 Analysis of Layer Importance for Classification Task and MI Task

In this section, we provide an analysis to justify our TL-based method for rendering MI robustness. We aim to understand the importance of individual layers for the MI reconstruction task, justifying our design of preventing the encoding of private data information in the first few layers as an effective means of degrading MI. We compare layer importance between the classification and MI tasks. To quantify this importance, we compute the Fisher Information (FI) of individual layers for the two tasks.

Fisher Information (FI) based analysis. Fisher Information $F$ has been applied to measure the importance of model parameters for discriminative tasks (Kirkpatrick et al., 2016; Achille et al., 2019) and generative tasks (Li et al., 2020). For example, in (Kirkpatrick et al., 2016), FI has been applied to determine the importance of model parameters to overcome catastrophic forgetting in continual learning. Our study extends FI-based analysis to model inversion, which has not been studied before.

Table 2: **Setups of our comprehensive experiments.** We follow the exact setups in the previous MI attacks.
Following the SOTA MI defense (Peng et al., 2022), we conduct our three main experiments with ResNet-34 (He et al., 2016) for VMI (Wang et al., 2021a) and VGG16 (Simonyan & Zisserman, 2014) for KEDMI/GMI (Chen et al., 2021), with \( D_{pub} = \) CelebA (Liu et al., 2015) and \( D_{priv} = \) CelebA. Furthermore, we evaluate our approach with other MI attacks (LOMMA (Nguyen et al., 2023), PPA (Struppek et al., 2022), BREPMI (Kahla et al., 2022), and MIRROR (An et al., 2022)), other \( D_{pub} \) (FFHQ, AFHQ), other \( D_{priv} \) (Facescrub, Stanford Dogs, VGGFace2), and other \( T \) (IR152 (He et al., 2016), FaceNet64 (Cheng et al., 2017), ResNet-34, ResNet-18, ResNet-50 (He et al., 2016), ResNeSt-101 (Zhang et al., 2022), and MaxViT (Tu et al., 2022)). Note that these additional MI setups have not been experimented with previously in the MI defense literature. In total, there are 21 MI setups spanning 7 MI attacks, 3 \( D_{pub} \), 4 \( D_{priv} \), 8 architectures of \( T \), and 4 \( D_{pretrain} \). The experimental setups are described in more detail in the Appx.

| MI attack | \( D_{pub} \) | \( D_{priv} \) | \( T \) | \( D_{pretrain} \) |
|---|---|---|---|---|
| VMI | CelebA | CelebA | ResNet-34 | None |
| KEDMI | | | VGG16 | Pubfig83/Facescrub |
| LOMMA/BREPMI | | | | ImageNet1K |
| KEDMI/GMI | CelebA/FFHQ | CelebA | IR152/FaceNet64 | MS-Celeb-1M |
| | | | VGG16 | ImageNet1K |
| PPA | FFHQ | Facescrub | ResNet-18/MaxViT | ImageNet1K |
| | AFHQ | Stanford Dogs | ResNeSt-101 | ImageNet1K |
| MIRROR | FFHQ | VGGFace2 | ResNet-50 | |

Specifically, given a model \( T \) parameterized by \( \theta_T \) and input \( X \), FI can be computed as (Kirkpatrick et al., 2016; Achille et al., 2019; Li et al., 2020):
\[ F = \mathbb{E} \left[ -\frac{\partial^2}{\partial \theta_T^2} \mathcal{L}(X|\theta_T) \right] \]
Here, \( \mathcal{L} \) is the loss function for a particular task. Specifically, we investigate FI for the classification task and the MI task. For classification, we follow (Achille et al., 2019) and (Le et al., 2021), using the cross entropy \( \mathbb{E}[-\log p(y_i|x_i)] \) as \( \mathcal{L} \) and a validation set \( D_{val}^{pub} = \{(x_i, y_i)\}_{i=1}^M \) as \( X \). For the MI task, we propose to use the \( \ell_2 \) distance between the feature representations of reconstructed images and the private images as \( \mathcal{L} \):
\[ \mathbb{E} \left[ \left\| \Phi(\hat{x}_j^i) - \mathbb{E} \left[ \Phi(x_{priv}^j) \right] \right\|_2 \right] \]
Here, for a given input image, \( \Phi \) computes the penultimate-layer representation using the target model, \( \hat{x}_j^i \) is one of the MI-reconstructed images for identity \( j \), and \( \mathbb{E}[\Phi(x_{priv}^j)] \) is the centroid feature of the private images for identity \( j \). Therefore, we use the distance between MI-reconstructed images and private images of the same identity as the loss in the FI analysis. The set of MI-reconstructed images \( \{\hat{x}_j^i\}_{i=1}^M \) for different identities is used as \( X \). We explore different setups to compute \( \mathcal{L} \); see Appx. B.1. In one setup, we perform FI analysis only at the last attack iteration (i.e., iteration 3000, for the result in Fig. 1-II). As we are interested in FI at the layer level, we compute the average FI of all parameters within a layer. We use the main MI attack setup in (Peng et al., 2022), i.e., VGG16 with the KEDMI attack, for the FI analysis.
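As a sketch of how such a layer-level FI statistic can be computed, the snippet below uses the common squared-gradient (empirical Fisher) approximation of the expectation above; `loss_fn` would be the cross entropy for the classification task, or the feature-space ℓ2 distance for the MI task. Names and structure are illustrative, not the paper's code.

```python
import torch

def layerwise_fisher(model, loss_fn, data_loader):
    """Average per-layer FI via the squared-gradient (empirical Fisher) proxy."""
    fisher = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    n_batches = 0
    for inputs, targets in data_loader:
        model.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                fisher[name] += p.grad.detach() ** 2
        n_batches += 1
    # Average FI of all parameters within a layer, as in the analysis above.
    return {name: (f / n_batches).mean().item() for name, f in fisher.items()}
```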
**Observation.** The FI results in Fig. 1-II clearly suggest that the first few layers of a target model are important for the MI task. Meanwhile, the FI analysis suggests that the first few layers do not carry important information for a specific classification task. This observation is consistent with previous findings (Yosinski et al., 2014) suggesting that the earlier layers carry general features. The FI analysis justifies our design of preventing the encoding of private information in the first few layers in order to degrade MI attacks, while keeping the impact on classification small. Overall, this leads to improved MI robustness. Further results with different losses (\( \ell_1 \) and LPIPS (Zhang et al., 2018)) and different MI iterations can be found in Appx. B.1.

4.3 Empirical Validation

As shown in Fig. 1-IV, we observe a significant improvement in MI robustness when reducing the number of parameters fine-tuned with $D_{priv}$. However, MI attack accuracy and natural accuracy are strongly correlated (Zhang et al., 2020), which makes it unclear whether the decrease in MI attack accuracy is due to the drop in natural accuracy. In this section, we empirically investigate the hypothesis that a model with fewer parameters encoding private information from $D_{priv}$ has better MI robustness. The empirical validation is reported in Fig. 1-III. Note that the number of parameters of the entire target model is $|\theta_T| = 16.8M$ for VGG16 in the KEDMI setup and $|\theta_T| = 11.7M$ for ResNet-18 in the PPA setup. Additional empirical validation for GMI can be found in Appx. A.4. To separate the influence of model accuracy on MI attack accuracy, we perform PPA/KEDMI attacks on different checkpoints for each training setup, covering a wide range of natural accuracies. This is presented by multiple data points on each line. The results clearly show that fine-tuning fewer parameters on $D_{priv}$ enhances MI robustness compared with fine-tuning all parameters on $D_{priv}$, regardless of the effect on natural accuracy. For instance, in the KEDMI setup, at a comparable natural accuracy of 83%, fine-tuning only $|\theta_C| = 13.9M$ reduces the attack accuracy by a third compared to fine-tuning $|\theta_C| = 16.8M$. The result in the PPA setup is even more supportive: at a natural accuracy of around 91%, fine-tuning $|\theta_C| = 8.9M$ reduces the attack accuracy to 22.36%, from 91.7% when fine-tuning $|\theta_C| = 11.7M$. Across all configurations, we observe that the fewer parameters fine-tuned on $D_{priv}$, the more robust the model. However, it is important to note that if the number of fine-tuned parameters on $D_{priv}$ is insufficient, such as $|\theta_C| = 9.1M$ in the KEDMI setup, the model's natural accuracy may drop drastically, rendering it unusable. Overall, our experiments strongly suggest that better MI robustness can be achieved by reducing the number of parameters fine-tuned on $D_{priv}$.

4.4 Comparison with SOTA MI Defense

For a fair comparison, we strictly follow the setups of the SOTA MI defense (Peng et al., 2022). Specifically, we compare our approach with existing SOTA MI defenses (Wang et al., 2021b; Peng et al., 2022) against KEDMI/GMI in Fig. 1-IV. MID improves MI robustness by penalizing the mutual information between inputs and outputs during training, which is intractable in continuous, high-dimensional settings, forcing MID to resort to approximations of mutual information rather than the actual quantity (Peng et al., 2022).
Therefore, MID needs to sacrifice significant model accuracy to obtain an improvement in MI robustness, which results in poor MI robustness compared with the SOTA defense BiDO. Our proposed method is simple yet effective, achieving slightly better MI robustness than BiDO-HSIC without requiring additional conflicting regularization, making it more feasible to recover model accuracy. We are the first to explore MI defense beyond the regularization perspective; therefore, our approach can be combined with SOTA MI defenses such as BiDO-HSIC. When combining BiDO with our approach, we strictly follow BiDO; the only difference is that BiDO is applied only to the unfrozen layers in the fine-tuning stage. The results in Fig. 1-IV show that the trade-off between utility and robustness is much improved when we combine the two approaches. Also, our method helps restore the utility degraded by BiDO, rendering a much more robust model (reducing MI attack accuracy by 27.36%, from 46.23% to 18.87%) while improving model utility (increasing model accuracy by 1.8%, from 80.35% to 82.15%). In Appx. A.1, we provide an additional comparison of our approach with BiDO and MID against VMI (Wang et al., 2021a).

4.5 Extensive Results on Other MI Attack Setups

Our proposed method is simple, easy to implement, and less sensitive to hyperparameters than BiDO, which requires an intensive grid search for its hyperparameters. This significant advantage allows us to extend the scope of experimental setups for MI defense to align with the remarkable increase in MI attack setups, which have not yet been evaluated in previous MI defenses (Peng et al., 2022; Wang et al., 2021b).

Results on different $D_{pub}$. We evaluate our method against KEDMI and GMI attacks on three architectures (VGG16, IR152, FaceNet64) with varying public datasets (CelebA, FFHQ), spanning 12 facial-domain MI setups.

Table 3: Our evaluation covers a wide range of MI attack setups, where the results are given in %. Specifically, we report the MI defense results against different MI attack methods (KEDMI and GMI), as well as using different public datasets \( D_{pub} \) (CelebA and FFHQ) and pre-training datasets \( D_{pretrain} \) (ImageNet1K and MS-CelebA-1M). \( D_{priv} = \) CelebA throughout.

| Attack | \( D_{pub} \) | \( D_{pretrain} \) | \( T \) | Defense | \( |\theta_C|/|\theta_T| \) | Acc ↑ | Top1-AttAcc ↓ | Top5-AttAcc ↓ | KNN Dist ↑ |
|---|---|---|---|---|---|---|---|---|---|
| KEDMI | CelebA | ImageNet1K | VGG16 | No Def. | 16.8/16.8 | 89.00 | 90.87 ± 2.71 | 99.33 ± 0.75 | 1168 |
| | | | | Ours | 13.9/16.8 | 83.41 | 51.67 ± 3.93 | 80.33 ± 2.91 | 1410 |
| | | MS-CelebA-1M | IR152 | No Def. | 62.6/62.6 | 93.52 | 94.07 ± 1.82 | 99.67 ± 0.63 | 1071 |
| | | | | Ours | 17.8/62.6 | 86.70 | 64.60 ± 4.93 | 87.67 ± 2.73 | 1333 |
| | | MS-CelebA-1M | FaceNet64 | No Def. | 35.4/35.4 | 88.50 | 86.73 ± 2.85 | 98.33 ± 1.49 | 1194 |
| | | | | Ours | 34.4/35.4 | 83.41 | 73.40 ± 4.10 | 91.67 ± 1.92 | 1265 |
| | FFHQ | ImageNet1K | VGG16 | No Def. | 16.8/16.8 | 89.00 | 55.60 ± 3.75 | 84.67 ± 2.85 | 1407 |
| | | | | Ours | 13.9/16.8 | 83.41 | 34.53 ± 3.43 | 65.33 ± 3.36 | 1554 |
| | | MS-CelebA-1M | IR152 | No Def. | 62.6/62.6 | 93.52 | 70.27 ± 3.40 | 89.33 ± 2.14 | 1285 |
| | | | | Ours | 17.8/62.6 | 86.70 | 46.53 ± 4.58 | 72.67 ± 3.16 | 1454 |
| | | MS-CelebA-1M | FaceNet64 | No Def. | 35.4/35.4 | 88.50 | 57.87 ± 4.70 | 82.00 ± 3.45 | 1409 |
| | | | | Ours | 34.4/35.4 | 83.41 | 15.27 ± 4.09 | 31.00 ± 4.24 | 1751 |
| GMI | CelebA | ImageNet1K | VGG16 | No Def. | 16.8/16.8 | 89.00 | 30.20 ± 5.26 | 55.00 ± 5.95 | 1600 |
| | | | | Ours | 13.9/16.8 | 83.41 | 7.80 ± 3.36 | 23.33 ± 4.60 | 1845 |
| | | MS-CelebA-1M | IR152 | No Def. | 62.6/62.6 | 93.52 | 40.87 ± 4.76 | 66.67 ± 5.76 | 1516 |
| | | | | Ours | 17.8/62.6 | 86.70 | 8.93 ± 3.73 | 22.67 ± 5.21 | 1819 |
| | | MS-CelebA-1M | FaceNet64 | No Def. | 35.4/35.4 | 88.50 | 26.87 ± 3.75 | 49.00 ± 6.05 | 1643 |
| | | | | Ours | 34.4/35.4 | 83.61 | 15.73 ± 4.58 | 33.00 ± 6.28 | 1752 |
| | FFHQ | ImageNet1K | VGG16 | No Def. | 16.8/16.8 | 89.00 | 13.60 ± 4.43 | 32.00 ± 4.92 | 1725 |
| | | | | Ours | 13.9/16.8 | 83.41 | 4.27 ± 2.56 | 12.33 ± 3.44 | 1919 |
| | | MS-CelebA-1M | IR152 | No Def. | 62.6/62.6 | 93.52 | 24.27 ± 4.24 | 45.67 ± 6.71 | 1617 |
| | | | | Ours | 17.8/62.6 | 86.70 | 6.13 ± 3.11 | 15.00 ± 4.98 | 1877 |
| | | MS-CelebA-1M | FaceNet64 | No Def. | 35.4/35.4 | 88.50 | 13.13 ± 4.96 | 30.33 ± 5.40 | 1746 |
| | | | | Ours | 34.4/35.4 | 83.61 | 2.60 ± 1.49 | 8.67 ± 3.64 | 2009 |

These are standard setups in KEDMI/GMI; however, only 2 out of the 12 setups were examined in the current SOTA MI defense (Peng et al., 2022). The results in Tab. 3 demonstrate that our approach consistently achieves significantly more robust models across all setups while maintaining acceptable natural accuracy, with significant improvements in robustness across a wide range of attack scenarios (13.33%-42.60% for KEDMI, 11.14%-31.94% for GMI). On average, our method decreases natural accuracy by 5.77%, while reducing the accuracy of MI attacks by more than half.

Results on SOTA MI attacks. Given the remarkable advancements in MI attack research, we also provide our defense results against SOTA MI attacks (Nguyen et al., 2023; Kahla et al., 2022) on both 64 × 64 images (Tab. 5) and 224 × 224 images (Struppek et al., 2022; An et al., 2022) (Tab. 4). To the best of our knowledge, our work is the first MI defense evaluated against such high-resolution MI attacks. When addressing low-resolution MI attacks in Tab. 5, all existing defenses suffer in natural accuracy; our method suffers the least in natural accuracy while reducing attack accuracy the most. Consequently, our method achieves the best MI robustness trade-off, which can be quantified by the ratio of the drop in attack accuracy to the drop in natural accuracy (the larger the ratio, the better the MI robustness trade-off). In the context of high-resolution MI attacks in Tab. 4, the results are even more encouraging: we observe only a small reduction in natural accuracy, while the attack accuracy experiences a significant drop. Additional MI defense results against BREPMI (Kahla et al., 2022) can be found in Appx. A.2.

Results on different architectures of \( T \). As discussed, our approach is architecture-agnostic and does not require an intensive grid search for hyperparameter selection for each particular architecture. Therefore, our approach offers higher practicality for other architectures compared with the SOTA MI defense (Peng et al., 2022). In addition to the standard VGG16 architecture, we conducted evaluations on a range of other architectures, including residual networks such as ResNet-18, ResNet-50, ResNeSt-101, and IR152, as well as the more recent MaxViT architecture (Tu et al., 2022). Across all these experiments, reported in Tab. 4 and Tab. 5, our proposed MI defense consistently demonstrates superior performance, highlighting its effectiveness and versatility across various architectures.
Table 4: Empirical results for current SOTA MI attacks on 224x224 images. We strictly follow the experimental setups from PPA and MIRROR, where the results are given in %. Our approach successfully defends against SOTA MI attacks at high resolution (224x224). To train our defense models, we set $|\theta_C| = 8.9M/18.3M/27.9M/32.9M$ for $T = \text{ResNet-18}/\text{MaxViT}/\text{ResNeSt-101}/\text{ResNet-50}$, respectively.

| Attack | $D_{priv}$ | $T$ | Defense | Acc ↑ | AttAcc ↓ | $\delta_{Eval}$ ↑ | $\delta_{FaceNet}$ ↑ | $\ell_2$ Dist ↑ | FID ↑ |
|---|---|---|---|---|---|---|---|---|---|
| PPA | Facescrub | ResNet-18 | No Def. | 94.22 | 88.46 | 123.85 | 0.7441 | - | 41.73 |
| | | | Ours | 91.12 | 22.36 | 167.44 | 1.0229 | - | 53.71 |
| | Facescrub | MaxViT | No Def. | 96.57 | 79.63 | 128.46 | 0.7775 | - | 50.37 |
| | | | Ours | 93.01 | 21.17 | 168.85 | 1.0199 | - | 55.50 |
| | Stanford Dogs | ResNeSt-101 | No Def. | 75.07 | 91.90 | 62.56 | - | - | 33.69 |
| | | | Ours | 79.54 | 60.88 | 83.57 | - | - | 46.01 |
| MIRROR | VGGFace2 | ResNet-50 | No Def. | 99.44 | 84.00 | - | - | 602.41 | - |
| | | | Ours | 99.40 | 50.00 | - | - | 650.28 | - |

Table 5: Empirical results for current SOTA MI attacks on 64x64 images, where the results are given in %. Following the exact experimental setup from LOMMA ($D_{priv}$ = CelebA, $D_{pub}$ = CelebA, evaluation model = FaceNet, and target classifier $T$ = VGG16), there are a total of 300 attack classes. Our approach achieves a better trade-off between Acc and AttAcc. $\Delta_{AttAcc}$ and $\Delta_{Acc}$ are computed by comparing to No Def.

| Defense | Acc ↑ | AttAcc ↓ | $\Delta_{AttAcc}/\Delta_{Acc}$ ↑ | KNN ↑ |
|---|---|---|---|---|
| No Def. | 89.00 | 95.67 | - | 1158.27 |
| BiDO | 80.35 (-8.65) | 70.47 (-25.20) | 2.91 | 1293.25 |
| Ours, $|\theta_C| = 13.9M$ | 83.41 (-5.59) | 75.67 (-19.67) | 3.58 | 1303.65 |
| Ours, $|\theta_C| = 11.5M$ | 78.86 (-10.14) | 59.68 (-35.99) | 3.54 | 1370.67 |

Results on different $D_{priv}$. While the SOTA MI defense (Peng et al., 2022) primarily concentrates on the facial dataset CelebA as $D_{priv}$, we extend our examination to large-scale facial datasets, such as Facescrub (Ng & Winkler, 2014) and VGGFace2 (Cao et al., 2018). Furthermore, we go beyond the facial domain by studying the animal domain, i.e., the Stanford Dogs dataset (Khosla et al., 2011). Via our comprehensive evaluation, we find that our approach consistently demonstrates its efficacy across various datasets, regardless of multiple factors such as the number of training/attack classes or the specific domain under consideration. This versatility highlights the robustness and adaptability of our MI defense method across a wide range of scenarios. In conclusion, all these extensive results consistently support that our method is effective in defending against advanced MI attacks with minimal changes to the original training of the target classifier $T$.
5 CONCLUSION

In this paper, we propose a simple and highly effective MI defense based on transfer learning (TL). Our method is a major departure from existing MI defenses based on dependency minimization regularization. Our main idea is to leverage TL to limit the number of layers encoding private data information, thereby degrading the performance of MI attacks. To justify our method, we conduct the first study to analyze layer importance for the MI task via Fisher Information. Our analysis results suggest that the first few layers are important for MI, justifying our design of preventing private information from being encoded in the first few layers. Our method is remarkably simple to implement. Through extensive experiments, we demonstrate the SOTA effectiveness of our approach across 21 MI setups spanning 8 architectures, 4 private datasets $D_{priv}$, and 7 MI attacks.

Limitation. Following other MI attack/defense research, our focus is on classification. Future work will study MI attacks and defenses for other machine learning tasks, e.g., object detection.

REFERENCES

Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6430–6439, 2019.

Shengwei An, Guanhong Tao, Qiulong Xu, Yingqi Liu, Guangyu Shen, Yuan Yao, Jingwei Xu, and Xiangyu Zhang. Mirror: Model inversion for deep learning network with high fidelity. In Proceedings of the 29th Network and Distributed System Security Symposium, 2022.

Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. VGGFace2: A dataset for recognising faces across pose and age. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 67–74. IEEE, 2018.

Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, and Shinji Watanabe. End-to-end multi-speaker speech recognition with transformer. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6134–6138. IEEE, 2020.

Si Chen, Mostafa Kahla, Ruoxi Jia, and Guo-Jun Qi. Knowledge-enriched distributional model inversion attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16178–16187, 2021.

Yu Cheng, Jian Zhao, Zhecan Wang, Yan Xu, Karlekar Jayashree, Shengmei Shen, and Jiashi Feng. Know you at one glance: A compact vector representation for low-shot learning. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 1924–1932, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Jonas Dippel, Steffen Vogler, and Johannes Höhne. Towards fine-grained visual representations by combining contrastive learning with image reconstruction and attention-weighted pooling. arXiv preprint arXiv:2104.04323, 2021.

Benoit Dufumier, Pietro Gori, Julie Victor, Antoine Grigis, Michele Wessa, Paolo Brambilla, Pauline Favre, Mircea Polosan, Colm Mcdonald, Camille Marie Piguet, et al. Contrastive learning with continuous proxy meta-data for 3D MRI classification. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2021: 24th International Conference, Strasbourg, France, September 27 - October 1, 2021, Proceedings, Part II 24, pp. 58–68. Springer, 2021.
Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In 23rd USENIX Security Symposium (USENIX Security 14), pp. 17–32, 2014.

Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory: 16th International Conference, ALT 2005, Singapore, October 8-11, 2005. Proceedings 16, pp. 63–77. Springer, 2005a.

Arthur Gretton, Ralf Herbrich, Alexander Smola, Olivier Bousquet, Bernhard Schölkopf, et al. Kernel methods for measuring independence. Journal of Machine Learning Research, 6:2075–2129, 2005b.

Jianzhu Guo, Xiangyu Zhu, Chenxu Zhao, Dong Cao, Zhen Lei, and Stan Z Li. Learning meta face recognition in unseen domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6163–6172, 2020.

Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. In Computer Vision - ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14, pp. 87–102. Springer, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
bgyWXX8HCk
How does this work compare: Efficient Representation of Numerical Optimization Problems for SNARKs. Angel et al. 2022. I also see several papers in federated learning that leverage SNARKs for enhancing trust. If models involving softmax, or steps similar to the ones experimented with here, were used, I imagine they would have similar accuracy problems. Are their solutions applicable?
TRUSTLESS AUDITS WITHOUT REVEALING DATA OR MODELS

Anonymous authors Paper under double-blind review

ABSTRACT

There is an increasing conflict between business incentives to hide models and data as trade secrets, and the societal need for algorithmic transparency. For example, a rightsholder wishing to know whether their copyrighted works have been used during training must convince the model provider to allow a third party to audit the model and data. Finding a mutually agreeable third party is difficult, and the associated costs often make this approach impractical.

In this work, we show that it is possible to simultaneously allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties. We do this by designing a protocol called ZKAUDIT in which model providers publish cryptographic commitments of datasets and model weights, alongside a zero-knowledge proof (ZKP) certifying that published commitments are derived from training the model. Model providers can then respond to audit requests by privately computing any function $F$ of the dataset (or model) and releasing the output of $F$ alongside another ZKP certifying the correct execution of $F$. To enable ZKAUDIT, we develop new methods of computing ZKPs for SGD on modern neural nets for simple recommender systems and image classification models capable of high accuracies on ImageNet. Empirically, we show it is possible to provide trustless audits of DNNs, including copyright, censorship, and counterfactual audits with little to no loss in accuracy.

1 INTRODUCTION

As ML models become more capable, businesses are incentivized to keep the model weights and datasets proprietary. For example, Twitter recently released their algorithm but not the model weights [Twitter (2023)], and many LLM providers only provide access via APIs. On the other hand, there is also an increasing societal need for transparency in the model and data behind these APIs: closed models harm transparency and trust [Bell (2023)]. To address this, we want to perform audits in which model providers and users agree on specific properties to test for the ML training procedure and models: for example, the property that the training dataset contains no copyrighted content, or that a recommender system is not censoring items (e.g., tweets). An audit would ideally release results for exactly these properties and nothing else.

Currently, there are three methods of performing such audits on modern ML methods. One method is to release the data, random seed, and final model weights: the user can replay training. However, this procedure does not keep the data and weights hidden. Another method is multi-party computation (MPC), which allows several parties to participate in a computation while keeping data hidden. Unfortunately, MPC requires all participants to behave honestly, but an audit presupposes a lack of trust between the model provider and users. MPC that handles malicious adversaries is extremely bandwidth-intensive: a back-of-the-envelope calculation suggests the training procedure for an 8-layer CNN may take up to 5 petabytes of communication, which would cost $450,000 in cloud egress fees [Pentyala et al. (2021)]. Finally, a trusted third party (TTP) could perform the audit, but TTPs are rarely practical. The TTP must have access to trade secrets, which model providers wish to keep secret.
Furthermore, audits are expensive, requiring highly specialized expertise (deep understanding of ML training) and strong security (to avoid leaking trade secrets). In many cases, no viable TTPs are trusted by both model providers and users. Prior work has proposed using zero-knowledge proofs to perform audits to address this issue [Kroll (2015), Shamsabadi et al. (2022)]. Zero-knowledge proofs allow a prover to prove properties about their data (e.g., training data or model weights) without revealing the data itself. However, none of this prior research extends to modern ML methods in the form of deep neural networks (DNNs).

In this work, we develop an auditing procedure ZKAUDIT that can perform audits without third parties and without any assumptions of trust (i.e., trustlessly) on modern DNNs. ZKAUDIT, via zero-knowledge proofs, allows a model provider to selectively reveal properties of the training data and model without a TTP, such that any party can verify the proof after the fact (i.e., the audit is non-interactive). Importantly, these guarantees are unconditional, providing security against malicious adversaries under only standard cryptographic assumptions.

ZKAUDIT consists of two steps: ZKAUDIT-T and ZKAUDIT-I. In ZKAUDIT-T, the model provider trains a model and publishes a commitment (e.g., hash) of their dataset, model weights, and a zero-knowledge proof that proves the weights were generated by training on the committed dataset. Then, in ZKAUDIT-I, the user provides an arbitrary audit function $F$. The model provider executes $F$ on the same weights and dataset used in training and provides a zero-knowledge proof of the execution. The zero-knowledge proof guarantees that $F(data, weights)$ was executed on the hidden model/weights and evaluated honestly. For example, $F$ could check whether a training set contains copyrighted data or whether a social media provider is shadowbanning posts. The model provider can trustlessly evaluate $F$ by using prior work to generate zero-knowledge proofs for inference [Lee et al. (2020); Weng et al. (2022); Feng et al. (2021); Kang et al. (2022)].

To enable ZKAUDIT, we leverage recent developments in cryptographic techniques known as ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge). ZK-SNARKs allow a prover to produce a proof that an arbitrary computation happened correctly (Section 2). However, ZK-SNARKs are incredibly costly: it can take up to days to prove the execution of the forward pass on even toy models [Lee et al. (2020); Weng et al. (2022); Feng et al. (2021)]. Only recently has it become possible to produce proofs of the forward pass on real-world models [Kang et al. (2022)]. However, no existing work can compute the backward pass necessary for gradient descent, a requirement for proofs of training.

To produce proofs of training, we extend recent work to compute the backward pass of real-world DNNs. Our work enables model providers to produce proofs of full stochastic gradient descent on private data. Doing so requires overcoming several challenges: prior work uses integer division and int8 precision for efficient forward pass computation. Unfortunately, training with these settings is not amenable to achieving high accuracy. We provide methods of embedding stochastic gradient descent with rounded division and variable fixed-point precision, and show that training in fixed-point can achieve high accuracy.
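As a rough illustration of what "SGD with rounded division and fixed-point precision" means outside the proof system, consider this NumPy sketch. The 2^16 scale and the toy scalar model are our assumptions; the paper's actual circuits enforce the same arithmetic inside ZK-SNARK constraints.

```python
import numpy as np

SCALE = 2 ** 16  # assumed fixed-point scale; the paper varies the precision

def to_fp(x):
    # Encode a real number as a scaled integer.
    return np.int64(round(x * SCALE))

def fp_mul(a, b):
    # The product of two scaled values carries SCALE^2, so we rescale by
    # SCALE using rounded division: add half the divisor, then floor-divide.
    return (np.int64(a) * np.int64(b) + SCALE // 2) // SCALE

# One SGD step on a toy scalar model y = w * x with squared-error loss.
w, x, y, lr = to_fp(0.5), to_fp(2.0), to_fp(3.0), to_fp(0.1)
pred = fp_mul(w, x)
grad = fp_mul(fp_mul(to_fp(2.0), pred - y), x)  # d/dw (w*x - y)^2 = 2*(w*x - y)*x
w = w - fp_mul(lr, grad)
print(w / SCALE)  # ~1.3, matching float SGD: 0.5 - 0.1 * (2 * (1.0 - 3.0) * 2.0)
```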
On commodity hardware, ZKAUDIT can produce audits of image classification systems and simple recommender systems with little to no loss in accuracy on a range of real-world datasets (medical datasets and standard benchmark datasets). The cost of auditing a recommender system and an image classification system can be as low as $10 and $108, respectively, showing the practicality of our work. Achieving these low costs requires all of our optimizations: training would suffer dramatic losses in accuracy, or not proceed at all, without them.

2 BACKGROUND ON ZK-SNARKS

**ZK-SNARKs.** ZK-SNARKs are a cryptographic primitive that allows a prover to produce a proof $\pi$ that some function $F(x,w)$ was computed correctly, where $x$ is public and $w$ is private. Given $\pi$, a verifier can check that the prover computed $F$ correctly without access to $w$. ZK-SNARKs have several remarkable properties. First, they are succinct, i.e., small in the size of the input. Second, they are non-interactive, meaning the prover and verifier need not interact beyond $\pi$. Third, they are knowledge sound, which means that a computationally bounded prover cannot generate proofs for incorrect executions. Fourth, they are complete, meaning that proofs of correct execution verify (often unconditionally). Finally, they are zero-knowledge, which means $\pi$ reveals nothing about the private inputs beyond what the output and public inputs contain.

Although ZK-SNARKs allow arbitrary computation to be proved, they require computation to be expressed in specific ways. The cryptography community has provided several such ways of expressing computations, including R1CS [Groth (2016)] and Plonk [Gabizon et al. (2019)]. Unfortunately, naively expressing computations can result in highly inefficient proof generation. The specification of the computation and the proving system can jointly result in three orders of magnitude or more difference in proving times.

**Representing computation.** We describe salient details of representing computation in ZK-SNARKs. Although other works describe relevant details, it is critical to understand the basic building blocks and costs associated with computation in ZK-SNARKs to understand our optimizations. In this work, we leverage arithmetic intermediate representations (AIRs), represented by a 2D grid \( x_{ij} \) of values. We denote the number of rows as \( R \) and the number of columns as \( C \). Due to the construction of ZK-SNARKs, the \( x_{ij} \in \mathbb{F}_q \) for some large prime \( q \). In particular, arithmetic is done in the finite field, so standard operations such as division are not natively possible. Logically, there are three ways to constrain values on the grid:

1. Constraining two values to be equal: \( x_{ij} = x_{i'j'} \).
2. Constraining a subset of a row to be in a pre-defined table: \( (x_{ij_1},...,x_{ij_k}) \in \{(t_1,...,t_k)\} = T_m \) for some table \( T_m \). \( T_m \) is called a lookup table.
3. A polynomial constraint on a grid row: \( f_t(x_{i1},...,x_{iC}) = 0 \) for some polynomial \( f_t \).

We provide an example of using polynomial constraints to implement integer division in Section 4, and Kang et al. (2022) provide other examples. In general, the costs increase with the number of rows (\( R \)), the number of columns (\( C \)), the maximum degree of the polynomial constraints \( f_t \), and the number of lookup tables (\( T_m \)). Furthermore, the number of rows must be a power of two.
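To make the grid model concrete, the following minimal sketch checks the three constraint types over plain Python integers. It is purely illustrative: an actual proving backend (e.g., halo2) enforces these constraints over \( \mathbb{F}_q \), and the grid contents and helper names here are our own.

```python
# Toy illustration of the three AIR constraint types over plain integers.
# A real proving backend (e.g., halo2) enforces these over a large prime
# field F_q; the grid contents here are purely illustrative.

GRID = [
    [7, 3, 2, 1],  # encodes a = b*c + r with a=7, c=3, b=2, r=1
    [5, 5, 1, 0],  # encodes a = b*c + r with a=5, c=5, b=1, r=0
]

def equality(grid, i, j, i2, j2):
    # Constraint type 1: two grid cells must be equal.
    return grid[i][j] == grid[i2][j2]

def lookup(cells, table):
    # Constraint type 2: a tuple of row cells must appear in a lookup table.
    return tuple(cells) in table

def polynomial(row):
    # Constraint type 3: a polynomial over a row must vanish. Here
    # f(a, c, b, r) = a - b*c - r encodes integer division (cf. Equation 1).
    a, c, b, r = row
    return a - b * c - r == 0

N = 4
range_table = {(v,) for v in range(2 ** N)}  # lookup table for {0, ..., 2^N - 1}

assert all(polynomial(row) for row in GRID)
assert all(lookup([row[3]], range_table) for row in GRID)  # range-check r
assert equality(GRID, 1, 0, 1, 1)
```

Note how the cost model described above surfaces directly: each additional range check consumes lookup rows, which is why the row count \( R \) and the number of tables \( T_m \) drive proving time.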
Given a representation of a computation in an AIR, we can produce a concrete ZK-SNARK proof by using a proving system such as halo2 [zcash (2022)]. We provide an extended discussion of ZK-SNARKs in Appendix A, including examples of using AIRs and how to compile ZK-SNARKs.

3 ZKAUDIT: Private Audits of ML

**Protocol.** We describe ZKAUDIT when given access to verified randomness, a public source of timestamped, verified random bits. The need for verified random bits can be removed with the slightly stronger assumption of a random oracle hash function, which we describe in Appendix B.1. Throughout, we assume access to a binding and hiding commitment scheme, in which the trainer commits to the training data and cannot change the values later. The commitment scheme can be implemented in practice by publicly releasing hashes of the data and weights.

The first part of ZKAUDIT (ZKAUDIT-T) proves that the trainer honestly trained a model with known architecture but hidden weights on a hidden dataset. To do so, the trainer commits to the data, commits to a training order, and produces ZK-SNARKs for SGD from a randomly initialized or public pre-trained model:

1. The trainer commits to a dataset \( \{(x_1,y_1),\ldots,(x_n,y_n)\} \), producing commitments \([c_1,\ldots,c_n]\). The commitments are ordered lexicographically, and the trainer publicly posts the commitments.
2. The trainer uses a verified source of randomness to generate a traversal ordering of the dataset (see Appendix B for why this is desired in some circumstances).
3. The trainer computes ZK-SNARKs of the SGD process, one batch at a time, using the traversal ordering. To do so, it computes the ZK-SNARK of the forward pass of any frozen layers, the forward and backward pass of any layers being updated, and the weight update procedure. This can be done in one or more ZK-SNARKs.
4. The trainer publishes the ZK-SNARKs of SGD and the commitment to the model weights at the end of training.

The second part of the protocol (ZKAUDIT-I) computes the zero-knowledge proof(s) for the audit function itself. Given the commitments to the dataset and final weights, the user sends an audit function \( F(\text{data}, \text{weights}) \). The trainer then computes a ZK-SNARK of the audit function and publishes it (along with the output of \( F \)). For example, the audit may check that a recommender system is not censoring social media content. The model trainer must also hash the weights in the zero-knowledge proof to ensure that the trained weights from ZKAUDIT-T are consistent with the weights in ZKAUDIT-I.

**Security analysis.** ZKAUDIT has the following (informal) properties: 1) the trainer cannot "cheat" in training or in computing the audit function, and 2) the verifier learns nothing about the training data and model weights aside from the output of the audit function. We can formalize these properties as knowledge soundness and zero-knowledge. We provide a formal analysis of security (knowledge soundness and zero-knowledge) in Appendix B and an informal analysis here. For our security analysis, we assume two standard cryptographic primitives: a cryptographically secure hash function (informally, one that is secure against collisions) [Menezes et al., 2018] and ZK-SNARKs [Bitansky et al., 2017]. It is standard to denote the security of these primitives with a parameter \( \lambda \). Informally, the security parameter controls the probability that an adversary can "break" the protocol.
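To make the commitment step of ZKAUDIT-T concrete, a binding and hiding commitment can be sketched as a salted hash. This is a simplification under our own naming, not ZKAUDIT's exact construction; a deployment would typically prefer a SNARK-friendly hash (e.g., Poseidon) over SHA-256.

```python
# Minimal salted-hash commitment sketch (illustrative; not ZKAUDIT's exact
# construction). The random salt makes the commitment hiding; collision
# resistance of the hash makes it binding, so the trainer cannot later
# swap out the committed data.
import hashlib
import os

def commit(example: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(32)
    return hashlib.sha256(salt + example).digest(), salt

def open_commitment(commitment: bytes, example: bytes, salt: bytes) -> bool:
    return hashlib.sha256(salt + example).digest() == commitment

dataset = [b"example-0", b"example-1"]
openings = [commit(x) for x in dataset]
public_commitments = sorted(c for c, _ in openings)  # posted lexicographically

assert all(open_commitment(c, x, s) for (c, s), x in zip(openings, dataset))
```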
Denote the dataset size as \( D \) and the number of stochastic gradient steps as \( T \). Then, the prover produces at most \( D + 4T \) hashes, commitments, and ZK-SNARKs. The security of each hash and ZK-SNARK follows directly from the primitives. By the union bound, the probability that an adversary breaks any one of the \( D + 4T \) primitives is at most the sum of the individual failure probabilities, so to achieve a security parameter of \( \lambda \) for ZKAUDIT, we must choose the hash function and ZK-SNARK parameters to have at least \( \lambda + \log_2(D + 4T) \) bits of security.

**Security of ZK-SNARKs.** ZKAUDIT's security rests on the security of the underlying ZK-SNARK proving backend (halo2 [zcash, 2022] in this work). Our ZK-SNARKs can be constructed via KZG commitments [Kate et al., 2010] or inner-product arguments (IPA) [Bünz et al., 2021]. In the KZG version, we require a structured reference string (SRS) that is universal to all audit functions. Namely, the SRS need only be generated once in a secure manner. To do so, we can use the already-generated SRS, which was produced using a perpetual trusted setup in which many parties participate (over 75 at the time of writing) [PSE, 2023]. Only a single party needs to be honest for the setup to be secure. IPAs do not require any trusted setup.

**Limitations.** Although ZKAUDIT provides computational security against malicious adversaries and traversal-ordering attacks, it has two major limitations. First, it does not protect against data poisoning attacks, in which a malicious attacker manipulates the data itself [Steinhardt et al., 2017]. Second, while ZKAUDIT does not reveal the weights, it does reveal the model architecture. We view addressing these limitations as exciting future research.

4 Computing ZK-SNARKs for Gradient Descent

We now describe our method and optimizations for computing gradient descent within a ZK-SNARK. Unlike the computation of the forward pass, the input to gradient descent is both the input data and the model weights, and the output is an updated set of weights. Formally, for an input \( x \) and weights \( w \), gradient descent computes \( w' = G(x, w) \), where \( w' \) is the updated set of weights. One standard method of performing gradient descent is to compute the forward pass, compute the backward pass, and update the weights by scaling the gradients by the learning rate.

Prior work has optimized the forward pass for int8 inference in ZK-SNARKs [Kang et al., 2022]. In this work, we extend this prior work by showing how to compute the backward pass in a ZK-SNARK. We further optimize gradient descent by designing a high-performance softmax in ZK-SNARKs and operating in fixed-point arithmetic.

We first observe that the backward pass can often be expressed as computation structurally similar to the forward pass. For example, the backward pass of a convolution can also be expressed as a convolution (with different inputs). However, several significant differences between inference and training necessitate changes.

**Rounded division and fixed-point.** Training requires more precise arithmetic than inference. For efficiency, prior work [Kang et al. (2022)] uses the floor function for int8 arithmetic, which would result in poor accuracy for training. To understand why, consider the update formula for SGD, \( w' = w - \eta \cdot \Delta w \), where \( \Delta w \) is the gradient. Typically, the learning rate \( \eta \) is small (e.g., 0.01). When using lower precision, the multiplication by \( \eta \) can be imprecise, leading to poor accuracy.
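The following toy computation makes the failure mode concrete. The scale factor here is illustrative rather than the paper's configuration: floor division discards the small update entirely, while rounded division preserves it.

```python
# Toy fixed-point SGD update showing why floor division fails at low
# precision (illustrative scale factor, not the paper's configuration).

SF = 2 ** 8  # scale factor: a real number x is stored as round(x * SF)

def to_fixed(x: float) -> int:
    return round(x * SF)

lr, grad = 0.01, 0.3
lr_fp, grad_fp = to_fixed(lr), to_fixed(grad)  # 3 and 77

# The product of two fixed-point values has scale SF^2, so we rescale by SF.
floor_update = (lr_fp * grad_fp) // SF                   # 231 // 256 = 0
rounded_update = (2 * lr_fp * grad_fp + SF) // (2 * SF)  # 718 // 512 = 1

print(floor_update, rounded_update, to_fixed(lr * grad))  # 0 1 1
```

The rounded form used here is the identity \( \lfloor a/c + 1/2 \rfloor = \lfloor (2a+c)/(2c) \rfloor \), which the constraints introduced next encode inside the ZK-SNARK.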
Thus, in order to compute ZK-SNARKs for gradient descent, we introduce two techniques: rounded division in finite-field ZK-SNARK constraints and variable-precision fixed-point arithmetic. Both techniques increase the accuracy of training.

We first implement rounded division in finite fields with polynomial constraints. As we show (Section 5), using rounded division can improve accuracy by up to 11% compared to standard integer division. We first describe how to implement standard integer division (which rounds towards zero). Suppose that \( a, b, c, r \) are all positive. If \( b = \left\lfloor \frac{a}{c} \right\rfloor \), then the following constraint holds:

\[ a = b \cdot c + r \tag{1} \]

where \( 0 \leq r < c \). To implement standard integer division, we first assume that \( 0 \leq b, c < 2^N \) for some \( N \). We can then use the polynomial constraint in Equation 1, constrain that \( b, c, r \in \{0, \ldots, 2^N - 1\} \), and constrain that \( c - r \in \{0, \ldots, 2^N - 1\} \). Constraining that \( c - r \in \{0, \ldots, 2^N - 1\} \) is equivalent to the constraint that \( c > r \).

To implement rounded division, consider \( a, b, c, r \) all positive integers. As before, we assume that \( 0 \leq b, c < 2^N \) for some \( N \). Let \( b \) now be \( a/c \) rounded to the nearest integer, i.e., \( b = \left\lfloor \frac{a}{c} + \frac{1}{2} \right\rfloor \). Then, the following constraint specifies rounded division:

\[ 2a + c = 2c \cdot b + r \tag{2} \]

where \( 0 \leq r < 2c \). This follows because

\[ b = \left\lfloor \frac{2a + c}{2c} \right\rfloor = \left\lfloor \frac{a}{c} + \frac{1}{2} \right\rfloor. \]

We can use similar constraints: Equation 2, a constraint that \( b, c \in \{0, \ldots, 2^N - 1\} \), and a constraint that \( 2c - r \in \{0, \ldots, 2^{2N} - 1\} \). Although this requires a lookup table of size \( 2^{2N} \), we can implement rounded division, which is critical for training.

We further implement variable-precision fixed-point arithmetic to allow trade-offs between accuracy and computation. Fixed-point arithmetic approximates real numbers by \( \hat{x} = \text{Round}(x \cdot SF) \), where \( SF \) is the scale factor. Since we use lookup tables for non-linearities and fixed-point rescaling, more precision (i.e., a larger scale factor) results in larger lookup tables. This directly results in higher proving times but allows the model trainer to decide which precision level to use.

**Softmax.** In order to perform classification, we designed a high-performance softmax in ZK-SNARKs. To understand the difficulties of implementing softmax with finite-field operations, recall the explicit formula: \( y_i = \frac{e^{x_i}}{\sum_j e^{x_j}} \). Denote \( s = \sum_j e^{x_j} \) and \( \hat{e} = [e^{x_i}] \). A naive computation would evaluate each exponential with a lookup table in scaled units, sum the results to obtain \( s \), and then divide by \( s \).

However, we must address three challenges when using fixed-point arithmetic to compute the softmax: underflow, precision, and range. To understand these issues, consider a toy example where \( x = [\ln \frac{1}{2}, 0] \), so \( \hat{e} = [\frac{1}{2}, 1] \), \( s = \frac{3}{2} \), and \( y = [\frac{1}{3}, \frac{2}{3}] \) in full precision. Consider using a scale factor of 1000 for simplicity. The first issue arises in dividing by \( s \): in the scaled units, \( \hat{e} = [500, 1000] \), so \( s = 1500 \). However, when dividing in the scaled units, \( y = [0, 1] \).
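This arithmetic can be checked directly. The short computation below (plain Python with the example's scale factor of 1000, not the in-circuit implementation) reproduces the underflow as well as the two partial remedies discussed in the next paragraph.

```python
# Toy check of the softmax example above (scale factor 1000, plain Python;
# not the in-circuit implementation).
SF = 1000
e_hat = [500, 1000]  # fixed-point e^{x_i} for x = [ln(1/2), 0]
s = sum(e_hat)       # 1500 in scaled units

naive = [round(v / s) for v in e_hat]          # [0, 1]: severe underflow
s_small = round(s / SF)                        # 2 after rescaling s
partial = [round(v / s_small) for v in e_hat]  # [250, 500] vs. true [333, 667]
fixed = [round(v * SF / s) for v in e_hat]     # [333, 667]: rescale e^{x_i} again

print(naive, partial, fixed)
```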
Thus, naive computation would result in a substantial loss of precision (underflow). We can address this by dividing \( s \) by the scale factor. However, this results in \( s = 2 \) and \( y = [250, 500] \) in scaled units, or \( [\frac{1}{4}, \frac{1}{2}] \). This yields a relative error of 33% in \( y \), a substantial degradation in accuracy. In order to address this, we can scale \( e^{x_i} \) by the scale factor again and not divide \( s \) by the scale factor.

Finally, we use a standard trick to increase the numeric stability of the softmax. Since the softmax is shift-invariant, subtracting the maximum value results in a smaller range of outputs of the exponentiation. To compute the maximum of a vector, we compute pairwise maxima sequentially. In order to compute the pairwise maximum \( c = \max(a, b) \) efficiently, we use the following constraints. First, we constrain that \( c \) is one of \( a \) or \( b \) using the polynomial constraint \( (c - a) \cdot (c - b) = 0 \). We then constrain that \( c - a, c - b \in \{0, \ldots, 2^N - 1\} \), where \( 2^N \) is the size of our lookup table. This enforces that \( c \geq a, b \).

| Scale factor | Proving time | Verification time |
|--------------|--------------|-------------------|
| $2^{12}$ | 47.5 s | 10.0 ms |
| $2^{13}$ | 87.8 s | 9.9 ms |
| $2^{14}$ | 167.0 s | 9.9 ms |
| $2^{15}$ | 328.3 s | 9.8 ms |

Table 1: Proving and verification time of SGD across scale factors for image classification on a single image (MobileNet v2 (1.0, 224)). The proof size was 9.03 kB for all configurations.

| Scale factor | Proving time | Verification time |
|--------------|--------------|-------------------|
| $2^{11}$ | 3.16 s | 6.2 ms |
| $2^{12}$ | 5.54 s | 6.1 ms |
| $2^{13}$ | 10.49 s | 6.3 ms |
| $2^{14}$ | 23.79 s | 6.0 ms |

Table 2: Proving and verification time of SGD across scale factors for a recommender system (single example). The proof size was 4.6 kB for all configurations.

| Method | Proving lower bound |
|--------|---------------------|
| Zen | 200,000 s* |
| vCNN | 172,800 s |
| pvCNN | 31,011 s* |

Table 3: Estimated lower bounds on the proving times of prior work for image classification on a single image. We exclude zkCNN since its authors explicitly state that they are unable to compute the softmax [Liu et al., 2021], so it cannot compute proofs of SGD.

5 Evaluation of ZKAUDIT-T

We now evaluate ZKAUDIT-T, including the performance of performing SGD in ZK-SNARKs, the end-to-end accuracy and costs of ZKAUDIT-T, and the effect of our optimizations. Because verification of ZK-SNARKs is cheap (10 ms per proof), we focus on the cost of proving, which far dominates, and on the accuracy (since there are potential degradations when discretizing).

We benchmarked SGD and ZKAUDIT-T on image classification and on a recommender system on MovieLens [Harper & Konstan, 2015]. For the image classification tasks, we used a variety of MobileNet v2 configurations. The MobileNet configurations are denoted by the depth multiplier and input resolution, so MobileNet (1.0, 224) is a MobileNet v2 with a depth multiplier of 1.0 and an input resolution of $224 \times 224$. For the recommender system, we used a small model based on the DLRM model from Facebook [Naumov et al., 2019]. The complete configuration is in Appendix D.1. To generate the cost estimates, we multiplied the total computation time by the cost of using a cloud computing platform (AWS) to perform the computation (Appendix D.1).
We further conducted experiments on CIFAR-10 in the Appendix.

5.1 Performance of SGD

We first investigated the performance of embedding the computation of a single SGD step in a ZK-SNARK, measuring the proving time, verification time, and proof size. We show results for image classification in Table 1 and for the recommender system in Table 2. The verification costs for ZK-SNARKs of SGD are incredibly low: as low as 6.0 ms. The proving times range from 26s to 328s for image classification and from 2s to 48s for the recommender system. Furthermore, none of the prior work implements the softmax operation, making SGD infeasible with those methods. Nonetheless, comparing the proving times of our work to the proving times of just the arithmetic operations of prior work shows that our work is at least 95× faster (Table 3).

5.2 End-to-End Accuracy and Costs

We then benchmarked the end-to-end accuracy and costs of fine-tuning. To do so, we chose three image classification datasets and one recommender system dataset. The image classification datasets ranged in task complexity, number of examples, and number of classes. We used the following datasets:

1. **dermnet** [Shanthi et al., 2020]: a dataset of skin images, where the task was to determine to which one of 23 diseases the image belonged. There were 15,557 training images.
2. **flowers-102** [Nilsback & Zisserman, 2008]: a dataset of flower images, where the task was to classify images into one of 102 flower classes. There were 1,020 training images.
3. **cars** [Krause et al. (2013)]: a dataset of car images, where the task was to classify cars into one of 196 categories. There were 8,144 training images.
4. **movielens** [Harper & Konstan (2015)]: a dataset of users ranking movies in IMDB. The training set has 6,040 users, 3,706 movies, and 900,188 ratings.

Figure 1: Test accuracy vs cost of proving training across the entire dataset for the Pareto frontier of image classification. Higher is better. The dashed line is the fp32 accuracy.

| Dataset | Accuracy (fixed-point) | Accuracy (fp32) | Difference |
|-------------|------------------------|-----------------|------------|
| dermnet | 38.5% | 39.0% | -0.5% |
| flowers-102 | 79.7% | 80.4% | -0.7% |
| cars | 49.8% | 50.4% | -0.6% |

Table 4: Test accuracy of training with ZKAUDIT-T compared to full fp32 accuracy. The loss in accuracy is marginal across datasets.

Figure 2: Test MSE vs total training cost for the Pareto frontier for the recommender system. Lower is better.

Figure 3: Test MSE vs scale factor. ZKAUDIT-T achieves parity with fp32 at $2^{13}$.

We estimated the costs of end-to-end verified training using the ZKAUDIT-T protocol by performing the full training procedure and estimating the cost of constructing ZK-SNARKs of the training run. We used a variety of MobileNet configurations and hyperparameters for image classification. We fixed the architecture for the recommender system but varied the hyperparameters.

We show the Pareto-optimal frontier of accuracy and cost for the three image datasets in Figure 1 and the mean-squared error (MSE) for the recommender system in Figure 2. As shown, ZKAUDIT-T can smoothly trade off between accuracy and costs. Furthermore, ZKAUDIT-T can achieve high accuracy on all four datasets despite using fixed-point arithmetic. Although the costs are high, practitioners can trade off between accuracy and proving costs. For example, privacy is required in a regulated medical setting, so the model cannot be revealed.
However, for regulatory reasons, the model provider may desire to provide a transcript of training. In this setting, the model provider may want to achieve as high an accuracy as possible. However, in settings where small amounts of accuracy can be traded off for cost, a model provider can use ZKAUDIT-T for as little as $282 while staying within 1% of fp32 accuracy. Furthermore, we substantially improve over prior work: even ignoring the softmax, the cost of the next cheapest method would be $26,637, or 94× higher.

We further compare the accuracy against standard fp32 training. As shown in Table 4, the accuracy is close to the full-precision counterpart. The recommender system's mean-squared error is at parity with full fp32 accuracy.

| Model | Accuracy (int division) | Accuracy (rounded, ZKAUDIT) | Difference |
|------------------------|-------------------------|-----------------------------|------------|
| MobileNet, 0.35, 96 | 59.1% | 70.4% | 11.3% |
| MobileNet, 0.5, 224 | 75.7% | 86.3% | 10.6% |
| MobileNet, 0.75, 192 | 79.2% | 88.8% | 9.6% |

Table 5: Test accuracy (top-5) of models used by Kang et al. (2022) on ImageNet with rounded vs integer division. Integer division considerably hurts accuracy, indicating worse downstream fine-tuning.

Figure 4: Test accuracy vs scale factor. As shown, we can achieve within 0.7% of full-precision accuracy with a scale factor of $2^{15}$. The accuracy degrades with lower scale factors.

5.3 Effects of Optimizations

We investigated the effects of our optimizations: the improved softmax, rounded division, and precision. To do so, we separately removed our optimized softmax, removed rounded division, and reduced the precision to observe the effects.

Removing our optimizations for the softmax resulted in a failure to train, as the range of the intermediate values fell outside of the feasible range in the ZK-SNARK. Furthermore, no other work in ZK-SNARKs can perform the softmax, making training infeasible.

We then changed rounded division to standard integer division (rounding down) and computed the accuracy of the models on ImageNet as used by Kang et al. (2022). As shown in Table 5, the accuracy can drop by as much as 11.3%. Since lower accuracy on ImageNet indicates lower performance for fine-tuning, we used rounded division for all further experiments.

We then reduced the training precision from a scale factor of $2^{15}$ to 1 for the image datasets, and from $2^{13}$ to $2^{10}$ for the recommender system dataset. We show results for the image datasets in Figure 4 and for the recommender system in Figure 3. As shown, the accuracy drops with lower precision. Our results corroborate prior findings on low-precision training (De Sa et al., 2017). Nonetheless, ZKAUDIT-T can achieve near-parity with fp32 training. These results show that our optimizations are necessary for high performance in ZKAUDIT-T.

6 Using ZKAUDIT for Audits

In addition to evaluating the ZKAUDIT training procedure, we describe and evaluate end-to-end audits and their costs. In principle, ZKAUDIT is capable of computing any computable function, but we focus on audits of broader interest. For example, if the audit simply reveals the training data (which is a computable function), the model provider may choose not to participate. Our examples are meant as proofs of concept and must be combined with work from the transparency literature for a full end-to-end solution.
Our work focuses on the technical feasibility of privacy-preserving proofs of training and the computability of audits. We describe how to perform an end-to-end audit in Appendix F.

Consider the case of recommender systems. Consumers of ML systems are interested in a wide range of audits. They may be interested in checking whether certain items (e.g., tweets or products) are censored (Pesce, 2023). A more extensive audit may also attempt to understand counterfactual behavior, in which the inputs to the recommender system are changed and the effects on the recommendations are measured (Akpinar et al., 2023). Outside of recommender systems, a copyright holder may wish to check that a model provider did not use their work in training, or an auditor may perform a demographic disparity check. These audits require model outputs and a similarity check. Each audit requires executing a different function and, as a result, has a different cost profile, which we describe for each audit. We explore these audits below.

**Censorship audit.** To perform the censorship audit, we are interested in whether an item \( x \) is ranked lower than the value implied by the recommender system. In particular, we are interested in whether an item the user believes should have a high ranking is censored. We can use random sampling to determine the quantile of the item \( x \) among the full set of items or a subset of items previously shown to the user (ZKAUDIT-I). Determining the quantile is equivalent to estimating the parameter of a Bernoulli random variable and converges at rate \( O(1/\sqrt{N}) \). We use the Hoeffding bound to achieve finite-sample estimates. We executed this audit on the movielens dataset, estimating the quantile within 5% and 1% (600 and 14,979 samples, respectively). The true difference in quantile was 1.1% and 0.1%, respectively; the Hoeffding bound is known to be loose in practice [Lee (2020)]. The costs were $0.42 and $10.59 for 5% and 1%, respectively, which are well within reason for many circumstances.

**Counterfactual audit.** The most comprehensive counterfactual audit measures the impact of interventions on recommender systems [Akpinar et al. (2023)]. These interventions can replace inputs or change hyperparameters. To perform this audit, we can run training twice and then estimate quantities; the total cost is twice the cost of training plus the cost of estimating a quantity. We performed the audit on the movielens dataset. We used a scale factor of \( 2^{13} \), which achieves parity with fp32 accuracy (see above). The total cost was $8,456. To contextualize this result, the average cost of a financial audit of an S&P 500 company is $13,000,000 [Audit Analytics (2022)]. The full counterfactual audit would be only 0.07% of the cost of a financial audit.

**Copyright audit, demographic disparity.** In the copyright audit, we prove with ZK-SNARKs that the extracted features for each item (e.g., image) in the training set are dissimilar to features from a copyright holder's item. For the demographic disparity audit, we computed the demographic of each item and computed summary statistics. Both audits (from the perspective of ZK-SNARK computations) have the same cost. We performed these audits on the flowers-102 dataset. The total cost of the audit was $108 (or about 10 cents per image), showing the feasibility of audits.

7 RELATED WORK

**Secure ML.** Recent work in the ML and cryptography literature has focused on the secure ML paradigm [Ghodsi et al. (2017); Mohassel & Zhang (2017); Knott et al. (2021)].
Much of this work focuses on secure inference, in which a model consumer offloads computation to a service provider. The model consumer desires either privacy or validity. The techniques for secure ML range from multi-party computation (MPC) [Knott et al. (2021); Kumar et al. (2020); Lam et al. (2022)] and zero-knowledge or interactive proofs [Lee et al. (2020); Weng et al. (2022); Feng et al. (2021); Kang et al. (2022)] to fully homomorphic encryption (FHE) [Lou & Jiang (2021); Juvekar et al. (2018)]. In this work, we focus on training as opposed to inference, with malicious adversaries. We provide the first fully private training scheme for realistic datasets in the face of malicious adversaries.

**ZK-SNARKs.** Recent work has optimized ZK-SNARKs for DNNs [Lee et al. (2020); Weng et al. (2022); Feng et al. (2021); Kang et al. (2022)] and numerical optimization problems [Angel et al. (2022)]. None of the prior work demonstrates how to perform the softmax and the backward pass, both of which are required for training. In this work, we leverage ideas from inference to optimize the forward pass, but show how to compute full SGD in ZK-SNARKs and optimize the softmax.

8 CONCLUSION

In this work, we have shown the feasibility of trustless audits for image classification and simple recommender systems. These audits range from censorship to copyright audits. Although promising, much work remains to scale ZKAUDIT to larger models and datasets. We hope that ZKAUDIT can serve as inspiration for further research in audits, given the rise of API-gated models.

REFERENCES

Nil-Jana Akpinar, Liu Leqi, Dylan Hadfield-Menell, and Zachary Lipton. Counterfactual metrics for auditing black-box recommender systems for ethical concerns. 2023.

Audit Analytics. Audit and total fee trends of the S&P 500. 2022. URL https://blog.auditanalytics.com/audit-total-fee-trends-of-the-sp-500/

Sebastian Angel, Andrew J Blumberg, Eleftherios Ioannidis, and Jess Woods. Efficient representation of numerical optimization problems for SNARKs. In 31st USENIX Security Symposium (USENIX Security 22), pp. 4273–4290, 2022.

Arasu Arun, Srinath Setty, and Justin Thaler. Jolt: SNARKs for virtual machines via lookups. Cryptology ePrint Archive, 2023.

Shahla Atapoor and Karim Baghery. Simulation extractability in Groth's zk-SNARK. In Data Privacy Management, Cryptocurrencies and Blockchain Technology: ESORICS 2019 International Workshops, DPM 2019 and CBT 2019, Luxembourg, September 26–27, 2019, Proceedings, pp. 336–354. Springer, 2019.

Karissa Bell. What did Twitter's 'open source' algorithm actually reveal? Not a lot. Engadget, 2023. URL https://www.engadget.com/what-did-twitters-open-source-algorithm-actually-reveal-not-a-lot.html

Mihir Bellare and Phillip Rogaway. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the 1st ACM Conference on Computer and Communications Security, pp. 62–73, 1993.

Nir Bitansky, Ran Canetti, Alessandro Chiesa, Shafi Goldwasser, Huijia Lin, Aviad Rubinstein, and Eran Tromer. The hunting of the SNARK. Journal of Cryptology, 30(4):989–1066, 2017.

Benedikt Bünz, Mary Maller, Pratyush Mishra, Nirvan Tyagi, and Psi Vesely. Proofs for inner pairing products and applications. In Advances in Cryptology–ASIACRYPT 2021: 27th International Conference on the Theory and Application of Cryptology and Information Security, Singapore, December 6–10, 2021, Proceedings, Part III 27, pp. 65–97. Springer, 2021.

Christopher De Sa, Matthew Feldman, Christopher Ré, and Kunle Olukotun.
Understanding and optimizing asynchronous low-precision stochastic gradient descent. In Proceedings of the 44th Annual International Symposium on Computer Architecture, pp. 561–574, 2017.

Boyuan Feng, Lianke Qin, Zhenfei Zhang, Yufei Ding, and Shumo Chu. Zen: An optimizing compiler for verifiable, zero-knowledge neural network inferences. Cryptology ePrint Archive, 2021.

Ariel Gabizon, Zachary J Williamson, and Oana Ciobotaru. Plonk: Permutations over Lagrange-bases for oecumenical noninteractive arguments of knowledge. Cryptology ePrint Archive, 2019.

Zahra Ghodsi, Tianyu Gu, and Siddharth Garg. SafetyNets: Verifiable execution of deep neural networks on an untrusted cloud. Advances in Neural Information Processing Systems, 30, 2017.

Shafi Goldwasser, Michael P Kim, Vinod Vaikuntanathan, and Or Zamir. Planting undetectable backdoors in machine learning models. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pp. 931–942. IEEE, 2022.

Jens Groth. On the size of pairing-based non-interactive arguments. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 305–326. Springer, 2016.

F Maxwell Harper and Joseph A Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):1–19, 2015.

Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan. GAZELLE: A low latency framework for secure neural network inference. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1651–1669, 2018.
ePOjNlOjLC
According to Table 1, the proposed method is inferior to SD inpainting in both performance and efficiency. The only advantage of the proposed method is that it is training-free. However, since it needs cyclical diffusion and denoising, its inference cost is higher than SD inpainting's. This advantage may therefore be weakened.
Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation

Ruoyu Wang¹*, Yongqi Yang¹*, Zhihao Qian¹, Ye Zhu², Yu Wu¹†
¹ School of Computer Science, Wuhan University
² Department of Computer Science, Princeton University
{wangruoyu, yongqiyang, qianzhihao, wuyucs}@whu.edu.cn
yezhu@princeton.edu

Abstract

Originating from the diffusion phenomenon in physics that describes particle movement, diffusion generative models inherit the characteristics of stochastic random walk in the data space along the denoising trajectory. However, the intrinsic mutual interference among image regions contradicts the needs of practical downstream application scenarios where the preservation of low-level pixel information from the given conditioning is desired (e.g., customization tasks like personalized generation and inpainting based on a user-provided single image). In this work, we investigate the diffusion (physics) in diffusion (machine learning) properties and propose our Cyclic One-Way Diffusion (COW) method to control the direction of the diffusion phenomenon given a pre-trained frozen diffusion model for versatile customization application scenarios, where the low-level pixel information from the conditioning needs to be preserved. Notably, unlike most current methods that incorporate additional conditions by fine-tuning the base text-to-image diffusion model or learning auxiliary networks, our method provides a novel perspective to understand the task needs and is applicable to a wider range of customization scenarios in a learning-free manner. Extensive experimental results show that our proposed COW can achieve more flexible customization based on strict visual conditions in different application settings. Project page: https://wangruoyu02.github.io/cow.github.io/

1 Introduction

In physics, the diffusion phenomenon describes the movement of particles from an area of higher concentration to an area of lower concentration until an equilibrium is reached (Philibert, 2006). It represents a stochastic random walk of molecules exploring the space, from which originate the state-of-the-art diffusion generative models (Sohl-Dickstein et al., 2015). Compared to the physical diffusion process, it is widely acknowledged that the diffusion generative model in machine learning also simulates a random walk in the data space (Song et al., 2020b); however, it is less obvious how diffusion models simulate information diffusion for real-world data along the walking trajectory.

In this work, we start by investigating the diffusion phenomenon in diffusion models for image synthesis, namely "diffusion in diffusion", during which the pixels within a single image from different data distributions exchange and interact with each other, ultimately achieving a harmonious state in the data space (see Sec. 3.2).

The diffusion phenomenon is intrinsically stochastic, both in physics and in current machine learning frameworks, given no explicit constraints on the regional concentrations. In other words, regions within an image interfere with each other along the generation process as the sample goes from a noisy Gaussian space to the data space. However, we note that this property of bidirectional interference is not always desired when applying diffusion generative models in practical downstream tasks.
For instance, tasks like image inpainting can be viewed as strictly unidirectional information diffusion, propagating information from a known portion (the existing image portion) to an unknown portion (the missing image portion) while preserving the pixel-level integrity of the known portion.

*Equal contribution. †Corresponding author.

Figure 1: Comparison with existing SOTA methods for maintaining the fidelity of text and visual conditions in different application scenarios. We consistently achieve superior fidelity to both text and visual conditions across all three settings. In contrast, other learning-based approaches struggle to attain the same level of performance across diverse scenarios.

Most existing methods (Ruiz et al., 2022; Gal et al., 2022; Dong et al., 2022; Zhang & Agrawala, 2023) tackle the problem by brute-force learning: they incorporate the visual condition into pre-trained text-to-image models through an additional fine-tuning process that minimizes the reconstruction error. However, despite explicitly minimizing reconstruction errors, these tuning-based methods cannot always achieve satisfactory fidelity to both the text and the visual conditions, especially when the conditions conflict, as in style-transfer and mismatched-attribute scenarios (Fig. 1). In addition, they introduce extra learning costs tied to the pre-trained model and hinder the original distribution-modeling ability of the base model. Consequently, the ability to control the direction of information diffusion opens up the potential for a new branch of methodological paradigms that achieve versatile customization applications without changing the parameters of existing pre-trained diffusion models or learning any auxiliary neural networks.

Following our analysis of diffusion in diffusion and its connection to practical task needs, we propose Cyclic One-Way Diffusion (COW), a training-free framework that achieves unidirectional diffusion for versatile customization scenarios, ranging from conventional visual-conditioned inpainting to visual-text-conditioned style transformation. From the methodological point of view, we re-inject the semantics (inverted latents) into the generation process and repeatedly "disturb" and "reconstruct" the image in a cyclic way to maximize and encourage information flow from the visual condition to the whole image. From the application point of view, the powerful knowledge of the pre-trained diffusion model allows us to conduct meaningful editing or stylizing operations while maintaining fidelity to the visual condition. As a result, even with its unidirectional design, COW achieves more flexible customization based on strict visual conditions.

Extensive experiments and human studies, involving 3,000 responses for 600 image groups, demonstrate that COW consistently outperforms its counterparts in terms of condition consistency and overall fidelity. Moreover, COW generates an image in just 6 seconds, far faster than other customization methods such as DreamBooth (Ruiz et al., 2022), which takes 732 seconds.

2 RELATED WORK

**Diffusion Models.**
The recent state-of-the-art diffusion generative methods (Song et al., 2020a; Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021; Song et al., 2020b; Hoogeboom et al., 2021; Wu et al., 2022; Zhu et al., 2023b) originate from non-equilibrium statistical physics (Sohl-Dickstein et al., 2015); they simulate the physical diffusion process by slowly destroying the data structure through an iterative forward diffusion process and restore it by a reverse annealed sampling process. DDPM (Ho et al., 2020) shows the connection between stochastic gradient Langevin dynamics and denoising diffusion probabilistic models. DDIM (Song et al., 2020a) generalizes the Markovian diffusion process to a non-Markovian diffusion process and rewrites DDPM in an ODE form, introducing a method to invert raw data into the latent space with low information loss. In our work, we utilize DDIM inversion to access the latent space and find an information diffusion phenomenon in the sampling process towards the higher-concentration data manifold, driven by the pre-trained gradient estimator.

**Downstream Customization Generation.** Given a few images of a specific subject, the customization generation task aims to generate new images according to text descriptions while keeping the subject's identity unchanged. Early approaches mainly relied on GAN-based architectures (Reed et al., 2016; Zhang et al., 2017; Dong et al., 2017; Zhang et al., 2018; Xu et al., 2018; Lin et al., 2018; Wu et al., 2019; Karras et al., 2019; Li et al., 2019; Karras et al., 2020) for customization generation. In recent years, diffusion methods for the text-conditioned image generation task (T2I) have developed rapidly (Nichol et al., 2021; Ramesh et al., 2022; Oppenlaender, 2022; Ramesh et al., 2021; Ho & Salimans, 2022; Saharia et al., 2022). However, placing a specific visual condition at a designated location in the generated result remains underexplored. Choi et al. (2021) propose a learning-free visually conditioned method that generates a random new face similar to a given face. There are also recent learning-based methods for visual-text-conditioned customization (Ruiz et al., 2022; Gal et al., 2022; Zhang & Agrawala, 2023; Rombach et al., 2022; Dong et al., 2022; Brooks et al., 2023; Wu et al., 2023; Wei et al., 2023; Guo et al., 2023; Choi et al., 2021). Methods like DreamBooth (Ruiz et al., 2022) and Textual Inversion (Gal et al., 2022) learn the concept of the visual condition into a certain word embedding by additional training on the pre-trained T2I model (Kumari et al., 2022; Dong et al., 2022; Ruiz et al., 2023; Chen et al., 2023). It is worth noting that it is still hard for DreamBooth or Textual Inversion (TI) to preserve identity even given 5 images, as discussed in Shi et al. (2023), and DreamBooth tends to overfit the limited fine-tuning data and incorrectly entangles object identity and spatial information, as discussed in Li et al. (2023). ControlNet (Zhang & Agrawala, 2023) trains an additional network for a specific kind of visual condition (e.g., canny edges, pose, and segmentation maps). More generally, an inpainting task is also a customization generation task with a forced strong visual condition. Compared to these methods, our proposed method explicitly preserves the pixel-level information of the visual condition while supporting versatile application scenarios like style transfer and attribute editing.
3 Method

In this section, we present our methodology for leveraging the diffusion-in-diffusion phenomenon to achieve versatile downstream applications without additional learning.

3.1 Preliminaries

Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) define a Markov chain to model the stochastic random walk between the noisy Gaussian space and the data space, with the diffusion direction written as

$$q(x_t | x_{t-1}) = \mathcal{N}(\sqrt{1 - \beta_t}\, x_{t-1}, \beta_t I), \tag{1}$$

where $t$ denotes the diffusion step, $\{\beta_t\}$ is the (usually scheduled) variance, and $\mathcal{N}$ denotes a Gaussian distribution. A special property following from Eq. 1 is that

$$q(x_t | x_{t-k}) = \mathcal{N}\!\left(\sqrt{\alpha_t / \alpha_{t-k}}\, x_{t-k},\, (1 - \alpha_t / \alpha_{t-k})\, I\right), \tag{2}$$

where $\alpha_t = \prod_{i=0}^{t} (1 - \beta_i)$. So we can bring $x_t$ to any $x_{t+k}$ in a one-step, non-Markovian way in our proposed cyclic one-way diffusion (Sec. 3.3) by adding appropriately scaled Gaussian noise.

DDIM (Song et al., 2020a) generalizes DDPM to a non-Markovian diffusion process and connects the sampling process to the neural ODE:

$$\mathrm{d}\bar{x}(t) = \epsilon_\theta^{(t)}\!\left(\frac{\bar{x}(t)}{\sqrt{\sigma^2(t) + 1}}\right)\mathrm{d}\sigma(t). \tag{3}$$

By solving this ODE using Euler integration, we can invert a real image $x_0$ (the visual condition) to its latent $x_t$ at any corresponding step (Zhu et al., 2023a; Mokady et al., 2023; Asperti et al., 2023) while preserving its information. The symbols $\sigma$ and $\bar{x}$ are the reparameterizations of $(\sqrt{1 - \alpha}/\sqrt{\alpha})$ and $(x/\sqrt{\alpha})$, respectively.

Figure 2: Illustration of "diffusion in diffusion". We invert pictures of pure gray and white back to $x_t$, merge them together with different layouts, and then regenerate $x_0$ via deterministic denoising. Different columns indicate different replacement steps $t$. The resulting images show how regions within an image diffuse and interfere with each other during denoising.

3.2 Diffusion in Diffusion

**Internal Interference in Diffusion Generation.** Diffusion in physics is a phenomenon caused by random movements and collisions between particles. The diffusion model, drawing inspiration from non-equilibrium thermodynamics, establishes a Markov chain between a target data distribution and the Gaussian distribution, and subsequently learns to reverse this diffusion process, thereby constructing the desired data samples from Gaussian noise. This inherently simulates a gradual, evolving process that can be viewed as a random walk through a large number of possible data distributions, gradually approaching the real data distribution. Therefore, diffusion models share an interference phenomenon similar to physical diffusion, characterized by continuous information exchange within the data, ultimately achieving harmonious generation results.

We design a toy experiment to reveal this phenomenon more intuitively. To start with, we apply DDIM inversion to convert gray and white images into various latent codes along the diffusion timeline, spanning from the start ($t = T$) to the final state ($t = 0$). Existing literature (Song et al., 2020a; Zhu et al., 2023a) demonstrates that those intermediate latent codes can well reconstruct the raw image via deterministic denoising. In other words, both latent codes contain information inherited from their respective raw images, i.e., pure gray and white colors.
Consequently, at each selected time step $t$, we merge halves of the latent codes from the two images into one and denoise the resulting combination. This allows us to observe how different pieces of information interact throughout the generation process and thus influence the final image. The results in Fig. 2 show that as the merging time step $t$ (at which we put the two latent codes together) approaches $T$ (the Gaussian-noise end), the corresponding denoised image $x_0$ exhibits the spatial diffusion phenomenon, resulting in stronger color blending. Conversely, as $t$ approaches 0 (the raw-image end), the image showcases robust reconstruction ability, with minimal interference between the two colors.

**Varying Interference Intensity in Diffusion Generation.** Based on the observations above and additional supplementary experiments in Appendix A.4, we can roughly divide the denoising process into three stages. Throughout the reverse diffusion process, the model attends to different levels of information at each stage, essentially embodying a progression from extreme noise to semantic formation to refinement. Introducing guidance too early hinders its reflection in the final image due to the uncontrollable interference caused by excessive noise, while introducing it too late does not allow for the desired high-level semantic modifications. The best time to inject visual-condition information is the middle stage, where the model gains the capacity to comprehend and generate basic semantic content, striking a balance between controllable inner mutual influence and responsiveness to text conditions. Ultimately, proper refinement in the last stage ensures that the generated images exhibit the intricate details of the visual condition. These insights and observations pave the way for integrating new visual-condition control paradigms into pre-trained diffusion models.

3.3 Training-Free Cyclic One-Way Diffusion

In this subsection, we illustrate how to take advantage of the diffusion phenomenon in the diffusion generation process to enable effective pixel-level and semantic-level visual conditioning without training. Our approach involves three main components: Seed Initialization, Cyclic One-Way Diffusion, and Visual Condition Preservation, as shown in Fig. 3.

**Seed Initialization.** It is common to bring the end of the thread close to the needle before threading. Using random initialization sampled from the Gaussian distribution may introduce features or structures that conflict with the given visual condition. For instance, if the user specifies that the object should be located on the left side, but the initial noise tends to generate it on the right side, there will be a conflict, and the model must expend considerable effort during the generation process to correct this inconsistency. To address this issue, we introduce the user-specified visual condition at the initial stage by placing it onto a predefined background, typically a semantically neutral pure-gray background. The objective is to inject stable high-level semantic information early in the denoising process, effectively reducing layout conflicts with the visual condition.

**Cyclic One-Way Diffusion (COW).** Theoretically, we invert the visual condition into its latent representation by solving the probability-flow ODE (Eq. 3) and embed it in the initial random Gaussian noise that serves as the starting point, which provides a good generative prior for maintaining consistency with the visual condition.
However, the implanted information will be continuously disrupted by inner diffusion from the surrounding Gaussian region at every denoising step. Therefore, we introduce "one-way" and "cyclic" strategies to maximize the flow of information from the visual condition to the whole image and to minimize undesired interference from other image regions. To be specific, we store the inverted latents of the visual condition at each inversion step in the middle stage (the semantic-formation stage), denoted as $x_{t_1}, x_{t_1+1}, \ldots, x_{t_2}$, and gradually embed them at the corresponding timesteps during the generation process. Through this step-wise information injection, we ensure the unidirectional propagation of information, i.e., it only propagates from the visual condition to the other regions, without interference from information in the background or other parts of the image. Given the limited generative capacity of the model at each step, noise is injected to regress the generative process to earlier stages, using the one-step jump of Eq. 2. This cyclic utilization of the model's generative capacity enables the continuous perturbation of inconsistent semantic information, facilitating the re-diffusion of conditional guidance in subsequent rounds. Cyclic One-Way Diffusion thus benefits from one-way guidance from the visual condition, creates additional room through cycles for semantic "disturb" and "reconstruct" operations, and ultimately achieves harmony among the background, the visual condition, and the text condition.

**Visual Condition Preservation.** Conflicts between the visual and text conditions often exist (such as a smiling face condition and a "sad" text prompt), necessitating a method that can effectively balance these conditions. We observe that the middle stage is still subject to some extent of uncertainty, which in turn leaves enough room for the text condition to guide the generation of the visual-condition region. Meanwhile, in the later stage, the model focuses on refining high-frequency details and textures to enhance image quality while maintaining global structural integrity. Thus, we explicitly control the degree of visual condition preservation by replacing the corresponding region at an adjustable step $t_3$ in the early phase of this later stage. This approach effectively preserves fidelity to both the visual and text conditions, achieving harmonious style transfer and attribute editing without additional training.

4 EXPERIMENTS

We show the versatility and superiority of COW through comparisons with four SOTA baselines under three different task settings. We also conduct an exhaustive ablation study to demonstrate the effectiveness of COW in utilizing the diffusion phenomenon in machine-learning diffusion generation.

4.1 EXPERIMENTS SETUP

**Benchmark.** To simulate visual-condition processing in real scenarios, we adopt face image masks from CelebAMask-HQ (Lee et al., 2020) as our visual condition. For the text conditions, we design three kinds of settings: normal prompts, style transfer, and attribute editing. We then combine them as the dual conditions for our TV2I task.
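Before turning to the implementation details, the overall COW procedure of Sec. 3.3 can be summarized as the sketch below. It is a minimal, illustrative rendering: `denoise_step`, `invert_to`, `add_noise`, and `replace_region` stand in for model-specific operations rather than an actual API, and the default hyperparameters mirror the settings listed in the implementation details that follow.

```python
# Minimal sketch of Cyclic One-Way Diffusion (Sec. 3.3); illustrative only.
# The four callables abstract a frozen pre-trained diffusion model:
#   denoise_step(x, t, text)   -> x_{t-1}, one reverse (denoising) step
#   invert_to(image, t)        -> x_t, DDIM inversion of an image (Eq. 3)
#   add_noise(x, t_from, t_to) -> x_{t_to}, one-shot forward jump (Eq. 2)
#   replace_region(x, latent)  -> x with the condition region overwritten

def cow_generate(denoise_step, invert_to, add_noise, replace_region,
                 visual_cond, background, text,
                 t1=25, t2=35, t3=4, num_cycles=10):
    # Seed initialization: implant the condition on a neutral background.
    x = invert_to(replace_region(background, visual_cond), t2)
    # Inverted latents of the visual condition for the middle stage.
    cond_latents = {t: invert_to(visual_cond, t) for t in range(t1, t2 + 1)}

    for _ in range(num_cycles):
        # One-way diffusion: denoise from t2 down to t1, re-injecting the
        # condition latent at every step so information flows only outward.
        for t in range(t2, t1, -1):
            x = denoise_step(replace_region(x, cond_latents[t]), t, text)
        # Cyclic disturbance: jump back to t2 by adding Gaussian noise.
        x = add_noise(x, t1, t2)

    # Final descent, with one replacement at t3 for condition preservation.
    for t in range(t2, 0, -1):
        if t == t3:
            x = replace_region(x, invert_to(visual_cond, t3))
        x = denoise_step(x, t, text)
    return x
```

The single replacement at $t_3$ is what controls the degree of visual-condition preservation, matching the Visual Condition Preservation step above.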
**Implementation Details.** We implement COW, SD inpainting, and ControlNet on the pre-trained T2I Stable Diffusion model (Rombach et al., 2022) sd-v2-1-base, with the default configuration: condition scale set to 7.5, noise level $\eta$ set to 1, image size set to 512, 50-step generation (10 steps for COW), and the negative prompt set to "a bad quality and low-resolution image, extra fingers, deformed hands". Note that we implement DB and TI following their official code and use the highest supported Stable Diffusion version, sd-v1-5. During generation, we set the visual-condition size to 256 and randomly choose a position in the upper half of the image to place the visual condition. We set $t_1$ to step 25, $t_2$ to step 35, and the cycle number to 10. We use slightly different settings for the three tasks: $t_3$ set to [4, 3] and $\eta$ to 0 for normal prompts, $t_3$ set to 4 and $\eta$ to 0.1 for attribute-editing prompts, and $t_3$ set to 4 and $\eta$ to 1 for style prompts. We perform all comparisons with the baselines on 200 images and 300 texts (100 texts for every setting; each text is paired with 2 faces in order). We use a single NVIDIA 4090 GPU to run experiments, since our proposed COW method is training-free.

**Comparisons.** We compare with four SOTA works incorporating different levels of visual condition into pre-trained T2I models: DreamBooth (Ruiz et al., 2022), implemented following the code at https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth, TI (Gal et al., 2022), SD inpainting (Rombach et al., 2022), and ControlNet with the canny-edge condition (Zhang & Agrawala, 2023). DreamBooth (Ruiz et al., 2022) introduces a rare-token identifier along with a class prior for a more specific few-shot visual concept, fine-tuning the pre-trained T2I model to gain the ability to generate specific objects in its results. TI (Gal et al., 2022) converts a visual concept into the word-embedding space by training a new word embedding from the given visual condition and uses it directly to generate results containing specific objects. ControlNet (Zhang & Agrawala, 2023) incorporates additional visual conditions (e.g., canny edges) by fine-tuning a pre-trained Stable Diffusion model on a relatively small dataset (fewer than 50k images) in an end-to-end manner. SD inpainting (Rombach et al., 2022) preserves the exact pixels of the visual condition when generating an image; it is the inpainting mode of the pre-trained Stable Diffusion.

**Evaluations.** To evaluate the quality of the generated results for this new task, we consider two aspects: visual fidelity and text fidelity.
Following the same evaluation metrics as in previous works (Gal et al., 2022; Ruiz et al., 2022; Dong et al., 2022), we utilize a pretrained CLIP (ViT-B/32) to evaluate the average pairwise cosine similarity between CLIP embeddings of the generated images and the prompts, denoted as CLIP-T. Additionally, we adopt a face detection model (MTCNN; Zhang et al., 2016) to detect the presence of a face, and a face recognition model (FaceNet; Schroff et al., 2015) to extract face features and compute the feature distance between the generated face region and the given visual condition. The Face Detection Rate and ID-Distance reflect the fidelity of the visual face condition for these generative models. However, relying solely on model predictions cannot fully capture the subtle differences between images, nor can it reflect the overall quality of the images (e.g., realism, richness), which is crucial for human perception. Therefore, we further evaluate our model via human evaluation. We design two base criteria and invite 50 participants for this human evaluation. The two criteria are: 1. Condition Consistency: whether the generated image matches the visual and textual conditions well; 2. General Fidelity: whether the chosen image looks more like a real image in terms of image richness, face naturalness, and overall image fidelity. It is important to note that when assessing the latter criterion, participants are not provided with the textual and visual conditions, to prevent additional information from interfering with the assessment process.

### 4.2 Experimental Results

COW enables versatile applications under the TV2I task setting, including inpainting, attribute editing, and style transfer. More comparisons of generated images are included in Appendix A.1.5.

**Quantitative Results.** Quantitative results in Tab. 1 show that our method almost perfectly retains the faces (the given visual condition) in the synthesized images, and achieves the second-best text-condition fidelity at an efficient time cost. These quantitative results show that our method generally outperforms previous works in terms of fidelity to both the visual and text conditions. When processing semantically complex visual conditions, such as faces, explicitly considering and incorporating low-level visual details allows our approach to generate results that are highly consistent with the original conditions. In addition, our method is training-free, so we can generate predictions with less computation.

**Human Evaluations.** We conduct a preference test where participants select their favorite image among a shuffled set, including ours and the four compared methods.

---

**Table 1:** Quantitative and qualitative comparison between COW and SOTA methods.
| Methodology | CLIP-T ↑ | ID-Distance ↓ | Face Detection Rate ↑ | Time ↓ | Condition Consistency ↑ | General Fidelity ↑ |
|-------------|----------|---------------|-----------------------|--------|------------------------|-------------------|
| TI | 0.253 | 1.186 | 70.66% | 3025s | 2.73% | 12.60% |
| DreamBooth | 0.329 | 1.361 | 70.50% | 732s | 9.60% | 28.73% |
| ControlNet | 0.305 | 1.194 | 45.66% | 4s | 11.60% | 4.73% |
| SD inpainting | 0.300 | 0.408 | 100.00% | 5s | 6.33% | 2.07% |
| COW (ours) | 0.306 | 0.901 | 100.00% | 6s | 69.73% | 51.87% |

---

Figure 4: The adaptation of the visual condition to align with the text condition while maintaining the semantic and pixel-level information of the visual condition. In each pair of images, the smaller image is the given visual condition and the other is the generated result. The bolded parts of the text conditions highlight the conflicts between conditions.

Figure 5: Analysis of the cycling process that diffuses the “visual seed” to its surroundings. The left-most figure shows a given face condition. The right shows the images generated with given text conditions. The cycle number increases from left to right.

Each image is evaluated by both criteria and by five different participants, resulting in a total of 3,000 responses for the 600 image groups in Tab. 1. Our method consistently outperforms the others across all three settings, as detailed in Appendix A.3. These results demonstrate that our method better integrates text and visual conditions while preserving satisfying image fidelity.

**One-Way Diffusion during the Cycling Process.** To emphasize the role of the cyclic one-way diffusion strategy during the generation process, we set the noise level $\eta$ to 0 to slow down the information diffusion. As shown in Fig. 5, as the cycles proceed, the background pixels gradually match the text condition and fuse with the given face condition. This vividly demonstrates that the information in the visual condition keeps spreading and diffusing into the surrounding region as the cycles progress. In addition, the model is capable of properly understanding the semantics of the implanted visual condition. We also conduct a comprehensive ablation study in Appendix A.2 to validate the effectiveness of COW and explore optimal hyperparameter configurations.

**Trade-offs between Conditions of Different Modalities.** The TV2I generation task faces many challenges due to the semantic gap between text and images and the complexity of multimodal data. In general, textual information provides a high-level description of the image to be generated, such as the type, color, and other attributes of an object, while visual information contains more low-level details, such as shape and texture. The model needs to understand the semantic information in the textual description and translate it into visually specific details. COW strikes a balance between meeting the visual and the text conditions, as shown in Fig. 4. For example, when given a photo of a young woman but a text prompt describing an old person, our method can age the woman to meet the text description by adding wrinkles and changing skin elasticity and hair color, while maintaining the facial expression and the identity of the given woman.
Additionally, in Fig. 6 we showcase a series of samples containing varying degrees of change to the visual condition in the generated output: (1) almost unchanged, (2) slightly perturbed (e.g., adding accessories), (3) attribute editing (e.g., from smiling to angry), (4) style transfer (e.g., from a photo to a comic picture), and (5) cross-domain transformation (e.g., from a human face to a lion). In particular, even in cases involving significant conflicts, such as transitioning between different species, our method can adeptly preserve certain characteristics of the given individual while seamlessly integrating them with the attributes of the target species, as shown in Fig. 8. These results demonstrate the ability of COW to effectively understand and balance information from different modalities and to adaptively produce high-quality images under a wide range of conditions, showcasing its versatility and effectiveness in handling diverse customization scenarios.

**More Applications.** We directly apply COW to other applications to demonstrate its generalization ability. As shown in Fig. 7(a), we present results involving a generalized visual condition (cats) paired with different text conditions. Our method harmoniously grows a whole body out of the visual condition under the guidance of the text condition. Furthermore, we explore extreme cases where the visual condition size equals the whole image size, as illustrated in Fig. 7(b). The results indicate that our method can still produce pleasant outcomes and maintain its style transfer and attribute editing capabilities even in the context of whole-image generation. Additionally, we include results under multiple visual conditions (e.g., two faces) in Appendix A.1.2.

Figure 6: Different degrees of change in the visual condition. The small image on the left shows the given visual condition and the corresponding generated result is on the right.

Figure 7: More applications: (a) generalized condition and (b) whole-image generation/editing. The small images represent the visual conditions, while the texts serve as prompts.

5 CONCLUSION AND DISCUSSION

In this paper, we investigate the diffusion (physics) properties within diffusion (machine learning) models and propose our Cyclic One-Way Diffusion (COW) method, which restricts the inherently bidirectional diffusion into a unidirectional diffusion from the given visual condition via a pre-trained, frozen diffusion model, enabling a wide variety of customization application scenarios. Our method novelly explores the potential of utilizing the intrinsic diffusion property for specific task needs. All the experiments and evaluations demonstrate that our method can generate images with high fidelity to both the semantic text condition and the pixel-level visual condition in a training-free, efficient, and effective manner.

Limitations. The pre-trained diffusion model is sometimes not robust enough to handle extremely strong conflicts between the visual and the text conditions. For example, when given the text “a profile of a person” and a front-face condition, it is very hard to generate a harmonious result that fits both conditions. In such cases, the model generally follows the guidance of the text.

Social Impact. Image generation and manipulation are widely used in art, entertainment, aesthetics, and other common use cases in people’s daily lives. However, they can be abused to spread falsehoods for harassment, distortion, and other malicious behavior. Widespread abuse of generated images would decrease the credibility of images in general.
Our work does not surpass the capabilities of professional image editors; it mainly makes such editing easier and more accessible. Since our model is fully built on a pre-trained T2I model, existing fake-detection methods for distinguishing the authenticity of images should be directly applicable to our results.

ACKNOWLEDGEMENT

This work was partially supported by the National Natural Science Foundation of China under Grant 62372341. This work is jointly advised by Dr. Ye Zhu and Dr. Yu Wu. Also, Ye and Yu appreciate the support from their postdoc advisor, Dr. Olga Russakovsky from Princeton University, for her help in their early career development stage.

REFERENCES

Andrea Asperti, Gabriele Colasuonno, and Antonio Guerra. Head rotation in denoising diffusion models. *arXiv preprint arXiv:2308.06057*, 2023.

Omri Avrahami, Kfir Aberman, Ohad Fried, Daniel Cohen-Or, and Dani Lischinski. Break-a-scene: Extracting multiple concepts from a single image. In *SIGGRAPH Asia 2023 Conference Papers*, pp. 1–12, 2023.

Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18392–18402, 2023.

Hong Chen, Yipeng Zhang, Xin Wang, Xuguang Duan, Yuwei Zhou, and Wenwu Zhu. Disenbooth: Disentangled parameter-efficient tuning for subject-driven text-to-image generation. *arXiv preprint arXiv:2305.03374*, 2023.

Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. *arXiv preprint arXiv:2108.02938*, 2021.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *NeurIPS*, 34:8780–8794, 2021.

Hao Dong, Simiao Yu, Chao Wu, and Yike Guo. Semantic image synthesis via adversarial learning. In *ICCV*, pp. 5706–5714, 2017.

Ziyi Dong, Pengxu Wei, and Liang Lin. Dreamartist: Towards controllable one-shot text-to-image generation via contrastive prompt-tuning. *arXiv preprint arXiv:2211.11337*, 2022.

Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. *arXiv preprint arXiv:2208.01618*, 2022.

Yuchao Gu, Xintao Wang, Jay Zhangjie Wu, Yujun Shi, Yunpeng Chen, Zihan Fan, Wuyou Xiao, Rui Zhao, Shuning Chang, Weijia Wu, et al. Mix-of-show: Decentralized low-rank adaptation for multi-concept customization of diffusion models. *Advances in Neural Information Processing Systems*, 36, 2024.

Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. *arXiv preprint arXiv:2307.04725*, 2023.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *arXiv preprint arXiv:2207.12598*, 2022.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *NeurIPS*, 33:6840–6851, 2020.

Emiel Hoogeboom, Alexey A Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive diffusion models. *arXiv preprint arXiv:2110.02037*, 2021.

Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *CVPR*, pp. 4401–4410, 2019.
2DbVeuoa6a
The operators $T$ and $T^{-1}$ are not detailed. What is their complexity with respect to the input length? How do you choose the collocation points? Does the number depend on the difficulty level of the PDE?
Neural Spectral Methods: Self-supervised learning in the spectral domain Yiheng Du, Nithin Chalapathi, Aditi S. Krishnapriyan {yihengdu, nithinc, aditik1}@berkeley.edu University of California, Berkeley Abstract We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs), grounded in classical spectral methods. Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients, instantiating a spectral-based neural operator. In contrast to current machine learning approaches which enforce PDE constraints by minimizing the numerical quadrature of the residuals in the spatiotemporal domain, we leverage Parseval’s identity and introduce a new training strategy through a spectral loss. Our spectral loss enables more efficient differentiation through the neural network, and substantially reduces training complexity. At inference time, the computational cost of our method remains constant, regardless of the spatiotemporal resolution of the domain. Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in terms of speed and accuracy by one to two orders of magnitude on multiple different problems, including reaction-diffusion, and forced and unforced Navier-Stokes equations. When compared to numerical solvers of the same accuracy, our method demonstrates a $10\times$ increase in performance speed. Our source code is publicly available at https://github.com/ASK-Berkeley/Neural-Spectral-Methods. 1 Introduction Partial differential equations (PDEs) are fundamental for describing complex systems like turbulent flow (Temam 2001), diffusive processes (Friedman 2008), and thermodynamics (Van Kampen 1992). Due to their complexity, these systems frequently lack closed-form analytical solutions, prompting the use of numerical methods. These numerical techniques discretize the spatiotemporal domain of interest and solve a set of discrete equations to approximate the system’s behavior. Spectral methods are one such class of numerical techniques, and are widely recognized for their effectiveness (Boyd 2001; Gottlieb & Orszag 1977). These methods approximate PDE solutions as a sum of basis functions and transform the equations into the spectral domain. Spectral methods are known for their fast convergence and computational efficiency, especially for problems with smooth solutions. They are notably impactful in fields like computational fluid dynamics (Peyret 2002). Numerical methods can be computationally expensive because a fine discretization of the physical domain and a large number of time-stepping iterations are often required to achieve high accuracy. Additionally, many engineering applications require solving systems under different parameters or initial conditions, necessitating multiple iterations to repeatedly solve such systems. The aforementioned spectral methods also rely on time-stepping schemes, and present similar challenges to other numerical methods. These factors underscore the need for more efficient computational strategies. Recent advances in machine learning (ML) highlight neural networks (NNs) as potential alternatives or enhancements to traditional numerical solvers. 
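As a self-contained illustration of that spectral efficiency (an assumed toy example, not from the paper), the snippet below differentiates a smooth periodic function by multiplying its Fourier coefficients by $ik$; the maximum error decays faster than any fixed power of the resolution $M$.

```python
import numpy as np

# Spectral differentiation of the smooth periodic function f(x) = e^{sin x};
# its exact derivative is cos(x) e^{sin x}.
for M in (8, 16, 32):
    x = 2.0 * np.pi * np.arange(M) / M
    f = np.exp(np.sin(x))
    k = np.fft.fftfreq(M, d=1.0 / M)              # integer wavenumbers
    df = np.fft.ifft(1j * k * np.fft.fft(f)).real
    err = np.max(np.abs(df - np.cos(x) * f))
    print(M, err)                                  # error decays spectrally
```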
Consider PDEs on a regular domain $\Omega \subseteq \mathbb{R}^d$:
\[
\begin{aligned}
F_\phi(u(x)) &= 0 \quad \text{in } \Omega, \\
B_\phi(u(x)) &= 0 \quad \text{on } \partial \Omega,
\end{aligned}
\tag{1}
\]
where $u$ is the classical solution, $F_\phi$ is the potentially nonlinear differential operator, and $B_\phi$ are the boundary condition(s). Both operators are parameterized by $\phi$, which could correspond to initial conditions or parameters associated with $\Omega$, such as diffusion coefficients.

A common ML approach is to train NNs on datasets of numerical solutions spanning various parameters, and then learn the mapping \( G_\theta : \phi \mapsto u_\theta \) from parameters to solutions. This approach is typically done via supervised learning, where the NN is trained by minimizing the error between the predicted solution and the numerical solution. At inference time, the goal is to generalize to previously unseen parameters and directly predict solutions. By amortizing computational costs during training and having rapid inference, these data-driven approaches have the potential to be more efficient than traditional numerical solvers.

When there is information about the governing physical laws, another common ML approach is to train the NN through a loss function that imposes a soft constraint on these laws (Raissi et al., 2019). In this case, the NN predicts a solution \( u_\theta \), and then the corresponding residual function, \( R(x) = F_\phi(u_\theta(x)) \), is minimized on a set of sampled spatial and/or temporal points. Specifically, an additional loss term is defined as the numerical quadrature of the residual norm:
\[
L_{\text{PINN}}(u_\theta) := \frac{1}{N} \sum_{n \in [N]} R(x_n)^2 \approx ||R(x)||^2_{L^2(\Omega)}, \quad x_n \sim \text{i.i.d. } U(\Omega).
\tag{2}
\]
This is often called a Physics-Informed Neural Network (PINN) loss function, which we denote as \( L_{\text{PINN}}(u_\theta) \). In practice, this approach does not require solution data on the interior of the domain, and the NN can be trained by only satisfying the PDE(s) of interest. This loss function can also be combined with a data-fitting loss and trained jointly. However, this can often be impractical, as it requires knowing both the underlying governing physical laws and having solution data (either through a numerical solver or through observational measurements).

Most current ML methods to solve PDEs (Kochkov et al., 2021; Li et al., 2020) are grid-based approaches. To model parametric solutions, these models create a mesh and perform the parameter-to-solution mapping through non-local transformations on function values at points in the mesh. One such example is neural operators (Kovachki et al., 2021), which parameterize the mapping using iterative kernel integrals and have been applied to a wide range of engineering problems (Zhang et al., 2022; Kurth et al., 2023). Neural operators can be trained through supervised learning procedures (a loss function that matches solution data), self-supervised methods (such as the aforementioned PINN loss function), or a combination of both. Data-fitting and PINN loss approaches both have several limitations:

- **Data availability.** The effectiveness of data-driven methods generally depends on the availability of large datasets consisting of PDE parameters and corresponding solutions. For complex problems, solution data can only be generated from expensive solvers (or through observations), and also includes inherent numerical errors.
When solving new systems, new solution data often needs to be generated, which can be time-consuming.
- **Optimization.** Empirical evidence has suggested that minimizing the PINN loss often encounters convergence issues, particularly for complex problems, resulting in subpar accuracy. This is likely attributed to the ill-posed nature of the optimization problem that arises from incorporating PDE constraints into the model (Krishnapriyan et al., 2021; Wang et al., 2021a).
- **Computation cost.** Computing the PINN loss involves evaluating the differential operator \( F \) at sampled points, which requires computing higher-order derivatives of \( u_\theta \). As the complexity of \( u_\theta \) increases, the computation cost of back-propagation scales significantly. Moreover, accurate estimation of the residual norm requires a substantial number of sampled points to enforce the PDE. For neural operators such as the commonly used Fourier Neural Operator (FNO) (Li et al., 2020), differentiation costs scale quadratically with the number of points. This is due to the use of the Fourier transform, which makes it intractable to take exact derivatives for large numbers of sampled points (see discussions in §B).

To address these issues with accuracy and efficiency, our work explores the incorporation of ML with spectral methods. We focus on a data-constrained setting to learn the solution operator of parameterized PDEs, where we assume that we have no solution data on the interior of the spatiotemporal domain and only train our model by minimizing the PDE residual. Given the form of a differential operator \( F_\phi \), the model learns to map the parameter function \( \phi \) to the corresponding solution \( u_\theta \). Our key insights are to learn the solution as a series of orthogonal basis functions, and to leverage Parseval’s Identity to obtain a loss function in the spectral domain. While prior approaches minimize the approximated residual function by computing higher-order derivatives on the sampled points, our method exploits properties of the spectral representation to obtain the exact residual via algebraic operations on the prediction. Our contributions are summarized as follows:

• We propose Neural Spectral Methods (NSM) to learn PDE solutions in the spectral domain. Our model parameterizes spectral transformations with NNs, and learns by minimizing the norm of the residual function in the spectral domain. Since solution data can be expensive to generate for every new problem, we focus on scenarios with no solution data on the interior of the domain.

• We introduce a spectral-based neural operator that can learn transformations in a spectral basis. By operating on fixed collocation points, our proposed spectral-based neural operator avoids aliasing error and avoids scaling the computational cost with grid resolution.

• We introduce a spectral loss to train models entirely in the spectral domain. By utilizing the spectral representation of the prediction and Parseval’s Identity, the residual norm is computed by exact operations on the spectral coefficients. This approach avoids sampling a large number of points and the numerical quadrature used by the PINN loss, thereby reducing computational complexity.

• We provide experimental results on three PDEs: the Poisson equation (§4.1), the Reaction-Diffusion equation (§4.2), and the Navier-Stokes equations (§4.3). Our approach consistently achieves a minimum speedup of $100\times$ during training and $500\times$ during inference.
It also surpasses the accuracy of grid-based approaches trained with the PINN loss by over $10\times$. When tested on different grid resolutions, our method maintains constant computational cost and solution error. In comparison to iterative numerical solvers that achieve equivalent accuracy, our method is an order of magnitude faster.

2 RELATED WORKS

ML methods for solving PDEs. Using NNs to solve PDEs has become an active research focus in scientific computing (Han et al., 2018; Raissi et al., 2019; Lu et al., 2021). He et al. (2018); Mitusch et al. (2021) explore finite element methods in NNs. Yu et al. (2018); Sirignano & Spiliopoulos (2018); Ainsworth & Dong (2021); Bruna et al. (2022) recast PDEs into variational forms and apply the Galerkin projection minimization via sampling-based losses. Sharma & Shankar (2022) accelerate discretized PDE residual computation. Wang et al. (2021b) focus on Fourier features and eigenvalue problems. In the context of spectral methods, Lange et al. (2021) focus on data-fitting; Dresdner et al. (2022) learn to correct numerical solvers; Xia et al. (2023); Lütjens et al. (2021) use spectral representations in spatial domains; and Meuris et al. (2023) extract basis functions from trained NNs for downstream tasks. These studies differ in problem settings and deviate from our method in both architecture and training.

Neural operators. Neural operators (Kovachki et al., 2021; Li et al., 2020; Gupta et al., 2021) learn mappings between functions, such as PDE parameters or initial conditions to the PDE solutions. They are typically trained in a supervised learning manner on parameter–solution datasets, and aim to learn the functional mappings between them (i.e., the solution operator). However, the supervised learning setting poses a challenge in data generation for new or complex problems, especially when the data is scarce or the numerical solvers generating it are inefficient. One of the most common neural operators is the Fourier Neural Operator (FNO) (Li et al., 2020; Kurth et al., 2023; Zhang et al., 2022). The training process for FNO consists of performing convolutions through Fourier layers and learning in the frequency domain. The Spectral Neural Operator (SNO) (Fanaskov & Oseledets, 2022) was proposed to reduce aliasing error in general neural operators by utilizing a feed-forward network to map between spectral coefficients. Similarly, the TransformOnce (T1) (Poli et al., 2022) model explores learning transformations in the frequency domain with an improved model architecture. However, both models have a number of architecture and training differences from NSM, and as we will show, have much poorer accuracy. They also only consider the supervised learning setting.

Physics-Informed Neural Networks (PINNs). The physics-informed neural networks (PINNs) framework (Raissi et al., 2019) adds the governing physical laws (i.e., the PDE residual function), estimated on sampled points, into the NN’s loss function. This additional term, which we refer to as a PINN loss (Eq. 2), acts as a soft constraint to regularize the model’s prediction of the PDE solution, and can be considered a self-supervised loss function. This approach can also be used in an operator learning setting across various architectures, where the base architecture is a neural operator and the PINN loss is used to train the model (Li et al., 2021; Tripura et al., 2023; Rosofsky et al., 2023; Goswami et al., 2022).
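As a concrete instance of this loss, the snippet below sketches Eq. 2 for a 1D Poisson residual $R(x) = u''(x) + s(x)$ on $(0,1)$. It is an illustrative sketch, not any paper's implementation: `u_theta` stands for an arbitrary differentiable network and `source` for an assumed source term.

```python
import torch

def pinn_loss(u_theta, source, n_samples=1024):
    # Monte Carlo quadrature of the residual norm (Eq. 2).
    x = torch.rand(n_samples, 1, requires_grad=True)      # x_n ~ U(Omega)
    u = u_theta(x)
    # Two passes of autograd give u''; create_graph=True keeps the graph
    # so the loss itself remains differentiable w.r.t. network weights.
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + source(x)        # R(x_n) = F_phi(u_theta(x_n))
    return residual.pow(2).mean()     # approximates ||R||^2_{L^2}
```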
However, the PINN approach requires evaluating the PDE residual on a large number of sampled points in the interior domain. In scenarios with higher-order derivatives in the PDE residual, multiple differentiations through the NN are required. When using the PINN loss with grid-based neural operators such as FNO, the cost of differentiating through the Fast Fourier Transform (FFT) in the forward pass scales quadratically with the number of sampled points in batched differentiation, making exact residual computation through auto-differentiation computationally expensive (see discussions in §B for more details). As we will show, these grid-based methods are inaccurate and often overfit to the training grid size because of aliasing error.

3 NEURAL SPECTRAL METHODS

We introduce Neural Spectral Methods (NSM), a general class of techniques to incorporate spectral methods with NNs. NSM learns the mapping from PDE parameters to PDE solutions, i.e., \( G_\theta : \phi \mapsto u_\theta \), and is shown in Fig. 1. NSM consists of two key components: a base NN architecture (Fig. 1a) that maps the spectral representation of the parameters \( \phi \) to that of their solutions \( u_\theta \), and a spectral training procedure (Fig. 1b) that minimizes the spectral norm of the PDE residual function. In §3.2, we describe our core NN architecture, which incorporates spectral methods within a neural operator framework. In §3.3, we introduce our spectral training procedure. Finally, in §4, we demonstrate the strong empirical performance of NSM for learning solutions to different PDE classes.

3.1 BACKGROUND

Notation. A set of functions on \( \Omega \) is denoted as \( \{ f_m(x) : x \in \Omega \}_{m \in I} \), where \( I \) is a countable index set. We denote integer indices by \( [n] := \{ 1, 2, \ldots, n \} \). The \( n \)th component of a vector \( x \) is denoted as \( x_n \). Given a set of basis functions, the spectral coefficients of a function \( u \) are denoted as \( \tilde{u} \). For an integer \( k > 0 \), the \( k \)th order Sobolev space (Evans, 2022) on a domain \( \Omega \subseteq \mathbb{R}^d \) is denoted as \( H^k(\Omega) \).

Orthogonal basis. Orthogonal basis functions are fundamental components of spectral methods. For completeness, the definition of orthogonality is provided in §A. The choice of basis functions is problem-specific, and they must have desirable spectral properties. In this paper, we focus on two commonly used orthogonal bases, for periodic and other types of boundary conditions, respectively:

Example 1 (Fourier basis). \( \{ \sin(mx), \cos(mx) : x \in 2\pi \mathbb{T} \}_{m \in \mathbb{N}} \) w.r.t. the Lebesgue measure.

Example 2 (Chebyshev polynomials). \( \{ T_m(x) : x \in [-1, 1] \}_{m \in \mathbb{N}} \) w.r.t. \( \frac{d\mu}{dx} = 1/\sqrt{1-x^2} \), where \( T_m(x) := \cos(m \cos^{-1}(x)) \).

For multi-dimensional problems, we can verify that the product of bases in each dimension preserves completeness and orthogonality (see Proposition 1). Spectral representations are known for their efficiency in representing smooth functions. Specifically, Fourier and Chebyshev interpolations possess the well-known spectral decay for sufficiently smooth functions (Mason & Handscomb, 2002):

Fact 1. For any \( f \in H^p(\mathbb{T}^d) \), its Fourier series coefficients satisfy \( |\tilde{f}_m| = O(1/m^p) \).

Fact 2. For any \( f \in H^p([-1, 1]^d) \), its Chebyshev expansion satisfies \( |\tilde{f}_m| = O(1/m^p) \).

Neural operators.
Neural operators parameterize the mapping \( G_\theta : \phi \mapsto u_\theta \) as,
\[
G_\theta := Q \circ \sigma(W^{(L)} + K^{(L)}) \circ \cdots \circ \sigma(W^{(1)} + K^{(1)}) \circ P,
\]
where \( \sigma \) is a non-linearity. The operator iteratively updates \( v^{(l)}_\theta : \Omega \rightarrow \mathbb{R}^{d_l} \), where \( d_l \) is the hidden dimension of layer \( l \). The input layer \( v^{(0)}_\theta = P(\phi) \) and output layer \( u_\theta = Q(v^{(L)}_\theta) \) are parameterized by point-wise NNs. The affine operator \( W^{(l)} \) and the kernel integral operator \( K^{(l)} \) are defined as,
\[
(W^{(l)}(v))(x) = W^{(l)}v(x) + b^{(l)}, \quad (K^{(l)}(v))(x) = \int_{\Omega} K^{(l)}(x, y)v(y)d\mu(y),
\]
where the input \( v \) is in \( \Omega \rightarrow \mathbb{R}^{d_{l-1}} \) and the outputs are in \( \mathbb{R}^{d_l} \). The affine operator and kernel integral operator are used to capture local and non-local transformations, respectively. Given an input grid \( [x_i] \), general grid-based neural operators parameterize \( G_\theta \) as a mapping between function values:
\[
[\phi(x_1) \quad \phi(x_2) \quad \ldots \quad \phi(x_N)] \mapsto [u_\theta(x_1) \quad u_\theta(x_2) \quad \ldots \quad u_\theta(x_N)].
\]

Figure 1: Schematic of NSM. We refer to Neural Spectral Methods (NSM) as a general approach to learn PDE solution mappings in the spectral domain. Our method consists of two components: a) The parameters \( \phi \) are converted to spectral coefficients, \( \tilde{\phi} \). In each NN layer \( l \), the spectral coefficients \( \tilde{v}_\theta^{(l)} \) are transformed by a linear operator \( \tilde{K} \), with the activation \( \sigma \) then applied on collocation points in the physical space. b) The prediction \( \tilde{u}_\theta \) is transformed by \( \tilde{F}_\phi \), the spectral form of the differential operator, which gives the spectral coefficients \( \tilde{R} \) of the residual function. The exact residual norm is obtained by Parseval’s Identity, giving the spectral loss \( ||\tilde{R}||_2^2 \). We contrast our method against the commonly employed grid-based neural operators with a PINN loss. c) General neural operators learn the PDE solutions as transformations of function values on \( x_i \). We consider the kernel integral in a more general sense, with the transformation \( T \) not restricted to a Fourier basis. d) Autograd or finite difference methods are used to obtain the higher-order derivatives. The PINN loss is then obtained by approximating the norm of the residual function on the sampled points.

3.2 Spectral-based neural operators

In this work, we employ the neural operator architecture to model transformations between spectral coefficients. By fixing \( \{f_m(x) : x \in \Omega\}_{m \in I} \) as the chosen basis functions, the solution operator is parameterized as the mapping between the coefficients of the parameter functions \( \phi \) and the predicted solutions \( u_\theta \). Suppose the series is truncated to \( M \) terms; then \( G_\theta \) is parameterized in the spectral domain as:
\[
\phi(x) = \sum_{m \in [M]} \tilde{\phi}_m f_m(x) \mapsto u_\theta(x) = \sum_{m \in [M]} \tilde{u}_{\theta,m} f_m(x),
\tag{6}
\]
where \( \tilde{\phi} \) and \( \tilde{u}_\theta \) are the spectral expansions of \( \phi \) and \( u_\theta \) under the basis \( f \).
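Since the mapping in Eq. 6 operates entirely on coefficients, the only interface with the physical domain is a pair of transforms between collocation values and spectral coefficients (denoted \( T \) and \( T^{-1} \) below). The following sketch shows this pair for a 1D Fourier basis, where both directions cost \( O(M \log M) \) via the FFT; a Chebyshev basis admits an analogous DCT-based pair on Chebyshev points. This is a minimal illustration under those assumptions, not the released implementation.

```python
import numpy as np

M = 32
x = 2.0 * np.pi * np.arange(M) / M           # equispaced collocation points

def T(values):                               # point values -> coefficients
    return np.fft.rfft(values) / M

def T_inv(coeffs):                           # coefficients -> point values
    return np.fft.irfft(coeffs * M, n=M)

v = np.sin(3 * x) + 0.5 * np.cos(x)
assert np.allclose(T_inv(T(v)), v)           # exact round trip on the grid

def sigma_tilde(coeffs):
    # Spectral counterpart of a pointwise nonlinearity: evaluate on the
    # collocation points, apply the activation, transform back.
    return T(np.tanh(T_inv(coeffs)))
```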
For each layer \( l \), the function \( v_\theta^{(l)} \) and the kernel \( K^{(l)} \) are also parameterized under the same basis functions,
\[
v_\theta^{(l)}(x) = \sum_{m \in [M]} \tilde{v}_\theta^{(l),m} f_m(x), \quad K^{(l)}(x,y) = \sum_{(m,m') \in [M]^2} \tilde{K}_{mm'}^{(l)} f_m(x) f_{m'}(y),
\]
where \( \tilde{v}_\theta^{(l)} \in \mathbb{R}^{M \times d_l} \) and \( \tilde{K}^{(l)} \in \mathbb{R}^{M \times M \times d_l \times d_{l-1}} \) are coefficients in the spectral domain. Due to orthogonality, the integral transformations \( K^{(l)} \) are equivalent to tensor contractions \( \tilde{K}_{mm'}^{(l)} \cdot \tilde{v}_\theta^{(l-1),m'} \). Similarly, the affine transformations \( W^{(l)} \) are equivalent to \( \tilde{v}_\theta^{(l-1),mi} \cdot W_{ij}^{(l)} + b_{mj}^{(l)} \).

Non-linear activation functions. The activation \( \sigma \) is applied on the collocation points. We denote by \( T \) the interpolation of function values at collocation points aligned with the basis, and by \( T^{-1} \) the evaluation of function values on those collocation points. Then \( \tilde{\sigma} \), the spectral counterpart of the activation function \( \sigma \), is given by:
\[
\tilde{\sigma} = T \circ \sigma \circ T^{-1}.
\]

Aliasing error. General grid-based neural operators are prone to aliasing error. When trained and tested on different grid resolutions, the interpolation error is inconsistent, leading to an increased error on test grids whose resolution differs from the training grid. In contrast, our spectral-based approach circumvents aliasing error. By operating exclusively on fixed collocation points, the interpolation error remains consistent across different input resolutions. This also ensures that the model’s computation cost and predictions are resolution-independent.

Kernel approximation. Within each layer, the computational cost of the kernel integral is quadratic in \( M \). To mitigate this cost, the kernel can be confined to a low-rank or simplified form. Here, we employ FNO’s approach, which simplifies the kernel integral into a convolution through a Fourier transformation. More broadly, the kernel is restricted to a diagonal form, \( \tilde{K}^{(l)} \in \mathbb{R}^{M \times d_l \times d_{l-1}} \), and,
\[
K^{(l)}(x, y) = \sum_{m \in [M]} \tilde{K}_m^{(l)} f_m(x) f_m(y).
\]

3.3 Spectral training

After our base NN architecture predicts the solution \( \tilde{u} \) in Eq. 6, we train the model entirely in the spectral domain. Here, we describe the details of this procedure.

Spectral form of the residual function. We can derive the exact residual function using the spectral representation of the prediction, denoted by \( \tilde{u} \). The spectral representation has a direct correspondence between operations performed on function values and on spectral coefficients. Additionally, differentiation and integration are transformed to algebraic operations on the coefficients. Given the PDE operator \( F_\phi \), we convert it to its spectral correspondence \( \tilde{F}_\phi : \tilde{u}_\theta \mapsto \tilde{R} \), such that,
\[
F_\phi(u_\theta(x)) = \sum_{m \in \mathcal{I}} \tilde{R}_m f_m(x),
\]
where \( \tilde{R} \) represents the spectral form of the PDE residual function. We describe in more detail how to obtain \( \tilde{F}_\phi \) from \( F_\phi \) for typical nonlinear operators and bases composed of Fourier series and Chebyshev polynomials in §A.1.

Spectral loss. After computing the residual function, we aim to minimize our spectral loss, \( ||\tilde{R}||_2^2 \).
This method involves projecting the residual function onto a subspace spanned by the truncated basis functions, known as the Galerkin projection. The orthogonality of the basis functions is crucial to this procedure. Leveraging Parseval’s Identity, we can equate the spectral loss to the weighted norm of the residual function, as outlined below:

**Theorem 1 (Parseval’s Identity).** For \( R(x) = \sum_{m \in \mathcal{I}} \tilde{R}_m f_m(x) \), we have,
\[
\int_\Omega R(x)^2 d\mu(x) = \sum_{m \in \mathcal{I}} \tilde{R}_m^2.
\]

Contrast with the PINN loss. As previously discussed, the PINN loss is obtained by sampling a substantial number of points within the interior domain, followed by employing a numerical quadrature to approximate the integral of \( R(x)^2 \). This requires differentiating through the NN, which can be computationally expensive, especially for higher-order derivatives, and when a large number of points are sampled to ensure accurate quadrature. For the grid-based PINN loss, which commonly uses finite difference methods to approximate the derivatives, we provide an error analysis in §B.1. From a theoretical perspective, we show that as long as the grid spacing is finite, the expected solution error can be non-zero, even if the grid-based PINN loss is minimized arbitrarily well. We bypass this process by using the spectral representation of the solution, and apply spectral transformations to represent the residual function in the same basis. This greatly simplifies the entire optimization procedure and, as we will demonstrate, significantly reduces training time. Note that even though the spectral loss is a weighted norm, the corresponding PINN loss can also be readily constrained:

**Corollary 1.** For \( \frac{dx}{d\mu} \in L^\infty(\Omega) \), \( ||R(x)||^2_{L^2(\Omega)} = O(||\tilde{R}||_2^2) \).

This result follows directly from Hölder’s inequality. Both Fourier series and Chebyshev polynomials fulfill this condition, ensuring that minimizing the spectral loss also minimizes the PINN loss.

4 EXPERIMENTAL RESULTS

We compare NSM to different neural operators with different loss functions (PINN and spectral losses) on several PDEs: 2D Poisson (§4.1), 1D Reaction-Diffusion (§4.2), and 2D Navier-Stokes (§4.3) with both forced and unforced flow. NSM is consistently the most accurate method, and orders of magnitude faster during both training and inference, especially on large grid sizes.

Problem setting. For all experiments, we focus on the data-constrained setting, using no interior domain solution data during training (i.e., we train only by minimizing the PDE residual). The task is to learn the mapping from PDE parameters $\phi$ to solutions $u$. During training, each model is given $F_\phi$ and the parameters $\phi_i$, which are independently sampled in every training iteration. Recall that the PINN loss (Eq. 12) and the spectral loss (Eq. 13) are used for grid-based and spectral-based models, respectively. For the PINN loss, higher-order derivatives are computed using the finite difference method on a fixed rectangular grid $[x_1 \ x_2 \ \ldots \ x_N]$. For our spectral loss, the $M$-term residual series is directly transformed from the predicted solution $\tilde{u}_\theta$.
\[
\text{PINN Loss} = \frac{1}{|\{\phi_i\}|} \sum_{\phi_i} \frac{1}{N} \sum_{n \in [N]} F_{\phi_i}(u_{\theta,i}(x_n), \nabla u_{\theta,i}(x_n), \ldots)^2 \tag{12}
\]

\[
\text{Spectral Loss} = \frac{1}{|\{\phi_i\}|} \sum_{\phi_i} \sum_{m \in [M]} \tilde{F}_{\phi_i}(\tilde{u}_{\theta,i})_m^2 \tag{13}
\]

For each problem, the test set consists of $N = 128$ PDE parameters, denoted by $\phi_i$. Each $\phi_i$ is sampled from the same distribution used at training time, and $u_i$ is the corresponding reference solution. For each prediction $u_{\theta,i}$, we evaluate two metrics: the $L_2$ relative error $||u_{\theta,i} - u_i||_2/||u_i||_2$ and the PDE residual error $||F_{\phi_i}(u_{\theta,i})||_2$. Both metrics are computed at the test set resolution and averaged over the dataset. Additional details about the experimental setup are provided in §C.

We include the following models for comparison:

- **FNO + PINN loss.** A grid-based FNO architecture (Li et al., 2020), trained with a PINN loss. The model is trained on different grid sizes, indicated by the corresponding labels (e.g., FNO×64² means a grid size of 64 × 64 is used to calculate the PINN loss).
- **SNO + Spectral loss.** Ablation model: a base SNO architecture (Fanaskov & Oseledets, 2022), trained with our spectral loss.
- **T1 + PINN loss / Spectral loss.** Ablation model: a base TransformOnce architecture (Poli et al., 2022), trained with either a PINN loss or our spectral loss.
- **CNO + PINN loss (ours).** Ablation model: the base architecture is identical to NSM, but trained with a PINN loss on the largest grid size used by FNO, i.e., 256 on each dimension.
- **NSM (ours).** Our proposed spectral-based neural operator, using a Fourier basis on periodic dimensions and a Chebyshev basis on non-periodic dimensions, trained with our spectral loss.

For a fair comparison, the base architecture, number of parameters, and other hyperparameters are kept exactly the same across neural operator-based models. Detailed parameters are provided in §C.

4.1 POISSON EQUATION

We study the 2D Poisson equation, $-\Delta u(x,y) = s(x,y)$, $x,y \in T$, with periodic boundary conditions. The source term $s$ is sampled from a random field with a length scale of 0.2. The task is to learn the mapping from $s$ to the potential field $u$. We evaluate the predictions using the $L_2$ relative error with respect to the analytical solution. Additional details and results for Dirichlet boundary conditions are in §C.1.

Table 1: $L_2$ relative error (%) for the periodic Poisson equation.

| Model | Error (%) |
|------------------------|-----------|
| SNO+Spectral | 0.59 ± 0.12 |
| T1×64²+PINN | 3.22 ± 0.49 |
| T1+Spectral | 0.302 ± 0.071 |
| FNO×64²+PINN | 4.24 ± 0.13 |
| FNO×128²+PINN | 2.01 ± 0.03 |
| FNO×256²+PINN | 1.75 ± 0.02 |
| NSM (ours) | 0.057 ± 0.012 |

Results. The results are summarized in Tab. 1. Since the solution operator is an inverse Laplacian, all models can theoretically express it with one layer (see discussions in §C.1), but the FNO + PINN models exhibit high error, even when trained with large grids and for a longer training time. This simple example highlights the inherent optimization challenges with the grid-based PINN loss.

Figure 2: Reaction-Diffusion equation with $\nu = 0.01$. In (a) and (b), the $L_2$ relative error and PDE residual on the test set are plotted over the course of training each model. The grid-based methods (FNO trained with a PINN loss) show improved accuracy as the grid resolution increases, but are significantly slower to train.
When tested on different resolutions, significant aliasing errors occur on test grid resolutions that differ from the training grid resolution. In contrast, NSM has a much lower error and PDE residual, and achieves this lower error 100× faster than the grid-based methods. In (c), when compared with iterative numerical solvers at different resolutions, NSM achieves the same level of accuracy with a 10× speedup. Notably, both the accuracy and the computational cost of NSM remain constant, regardless of grid resolution.

4.2 Reaction-Diffusion Equation

We study the 1D periodic Reaction-Diffusion system with different diffusion coefficients:
$$u_t - \nu u_{xx} = \rho u(1-u), \quad x \in \mathbb{T}, \ t \in [0,T],$$
$$u(x,0) = h(x), \quad x \in \mathbb{T},$$
where $\nu$ is the diffusion coefficient and $h(x)$ is the initial condition, sampled from a random field with a length scale of 0.2. Given $h(x)$, the model learns to predict $u(x,t)$ up to time $T = 1$. The initial condition is enforced by transforming the prediction $u(x,t)$ to $u(x,t) \cdot t + h(x)$.

Table 2: $L_2$ relative error (%) and computation cost (GFLOP) for the Reaction-Diffusion equation.

| $\nu$ | SNO+Spectral | FNO×64²+PINN | FNO×128²+PINN | FNO×256²+PINN | CNO+PINN (ours) | NSM (ours) |
|-----------|--------------|--------------|---------------|---------------|----------------|------------|
| 0.005 | 4.56 ± 0.99 | 2.30 ± 0.19 | 0.94 ± 0.11 | 0.33 ± 0.04 | 0.20 ± 0.01 | 0.075 ± 0.016 |
| 0.01 | 5.41 ± 4.43 | 2.64 ± 0.97 | 2.27 ± 1.19 | 0.57 ± 0.19 | 0.48 ± 0.16 | 0.086 ± 0.019 |
| 0.05 | 87.76 ± 52 | 11.82 ± 5.4 | 3.25 ± 1.29 | 1.06 ± 0.28 | 0.78 ± 0.01 | 0.083 ± 0.006 |
| 0.1 | 152.8 ± 58 | 13.03 ± 6.4 | 4.90 ± 2.40 | 4.07 ± 2.00 | 1.28 ± 0.42 | 0.077 ± 0.005 |

Results. The main results are summarized in Tab. 2, with the results for TransformOnce (Poli et al., 2022) included in §C.2. The training curves for the PDE residual and the relative error on the test set for diffusion coefficient $\nu = 0.01$ are shown in Fig. 2. The grid-based models using the PINN loss improve in relative error with a higher-resolution grid, but require significantly longer training time. In contrast, NSM consistently achieves high accuracy while maintaining a low computation cost. As the diffusion coefficient increases, NSM shows strong robustness and consistently achieves low solution error, while the other models all increase significantly in solution error.

We also compare the ML models to a standard numerical solver (Simpson & Landman, 2006), as shown in Fig. 2c. At inference time, the accuracy and computational cost of NSM remain constant, regardless of the spatiotemporal resolution of the domain. NSM exhibits a 10× increase in speed when compared to a numerical solver with a comparable level of accuracy. Solution error distributions and additional details are in §C.2.

Figure 3: Navier-Stokes equation with $\nu = 10^{-4}$. In (a) and (b), the relative error and PDE residual on the test set are plotted. NSM achieves low $L_2$ error and PDE residual $100\times$ faster than FNO + PINN methods, and is an order of magnitude more accurate. (c) NSM captures fine features of the vorticity evolution accurately, while the grid-based approach fails to predict the overall shape.
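Before moving on to the Navier-Stokes experiments, the spectral loss of Eq. 13 can be made concrete on the simplest of the problems above, the periodic Poisson equation of §4.1, where the residual is purely algebraic in the coefficients. The sketch below assumes a plain 2D Fourier discretization on a $2\pi$-periodic domain and is illustrative only, not the released implementation.

```python
import numpy as np

def poisson_spectral_loss(u_hat, s_hat):
    # Residual of -Laplacian(u) = s in Fourier space: differentiating a
    # mode e^{i k.x} multiplies it by -|k|^2, so R_hat = |k|^2 u_hat - s_hat
    # exactly -- no autograd and no quadrature over sampled points.
    M = u_hat.shape[0]
    k = np.fft.fftfreq(M, d=1.0 / M)              # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    R_hat = (kx**2 + ky**2) * u_hat - s_hat
    # By Parseval's identity, summing squared coefficients gives the
    # residual norm (up to the basis normalization).
    return np.sum(np.abs(R_hat) ** 2)
```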
4.3 Navier-Stokes Equation

We study the vorticity form of the 2D periodic Navier-Stokes (NS) equations:
$$
\partial_t w + u \cdot \nabla w = \nu \Delta w + f, \quad x \in T^2, \ t \in [0, T], \\
w(x, 0) = w_0(x), \quad x \in T^2,
$$
where $\nu$ is the viscosity, $w$ is the vorticity field, and $f$ is the forcing term. The initial vorticity $w_0$ is sampled from a random field with a length scale of 0.8. Given $w$, the velocity field $u$ is determined by applying the inverse Laplacian operator. The model is trained to learn the evolution of the vorticity.

We first consider the unforced flow with different $\nu$ values and $T = 3s$. The solution is diffusive for large $\nu$, and becomes more challenging to learn as the viscosity decreases, due to the sharp features in the solution. For FNO trained with the PINN loss, using a grid resolution larger than 96 becomes intractable to train, due to the cost of compute and memory. The results are summarized in Table 3, and the case for $\nu = 10^{-4}$ is shown in Fig. 3. NSM significantly outperforms grid-based FNO with the PINN loss in terms of both error and computational speed, achieving accurate results $100\times$ faster (see Fig. 3a and Fig. 3b).

Table 3: $L_2$ relative error (%) for the unforced Navier-Stokes flow.

| $\nu$ | FNO×64³+PINN | FNO×96³+PINN | NSM (ours) |
|---------|--------------|--------------|------------|
| $10^{-2}$ | 8.18 ± 2.83 | 7.90 ± 0.57 | 0.71 ± 0.02 |
| $10^{-3}$ | 14.81 ± 0.67 | 11.99 ± 0.86 | 1.65 ± 0.26 |
| $10^{-4}$ | 17.88 ± 2.67 | 16.20 ± 0.61 | 3.53 ± 0.53 |

Next, we consider the long temporal transient flow under the forcing term $f = 0.1(\sin(2\pi(x + y)) + \cos(2\pi(x + y)))$ and $T = 50s$, following the setting in Li et al. (2020; 2021). This is a significantly more challenging task, as it requires propagating the initial condition over an extended time interval. Nevertheless, as summarized in Table 4, NSM maintains high accuracy, while grid-based FNO with the PINN loss collapses during training and fails entirely. Further details for both unforced and forced flow can be found in §C.3.

Table 4: $L_2$ relative error (%) for the forced Navier-Stokes flow.

| $\nu$ | FNO×96³+PINN | NSM (ours) |
|---------|--------------|------------|
| $1/500$ | 55.1 ± 17.4 | 13.2 ± 0.57 |

5 CONCLUSION

We introduce an ML approach for solving PDEs, inspired by spectral methods. By utilizing orthogonal basis functions and their spectral properties, we demonstrate numerous advantages for learning PDEs in the spectral domain. Our method is evaluated on different PDEs, and achieves significantly lower error and increased efficiency when compared to current ML methods.

Acknowledgements. This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under contract No. DE-AC02-05CH11231. It was also supported in part by the Office of Naval Research (ONR) under grant N00014-23-1-2587. We also acknowledge generous support from Google Cloud and AWS Cloud Credit for Research. We thank Rasmus Malik Høegh Lindrup and Sanjeev Raja for helpful discussions and feedback.

REFERENCES

Mark Ainsworth and Justin Dong. Galerkin neural networks: A framework for approximating variational equations with error control. *SIAM Journal on Scientific Computing*, 43(4):A2474–A2501, 2021.

John P Boyd. *Chebyshev and Fourier spectral methods*. Courier Corporation, 2001.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang.
JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. Joan Bruna, Benjamin Peherstorfer, and Eric Vanden-Eijnden. Neural galerkin scheme with active learning for high-dimensional evolution equations. *arXiv preprint arXiv:2203.01360*, 2022. Gideon Dresdner, Dmitrii Kochkov, Peter Norgaard, Leonardo Zepeda-Núñez, Jamie A Smith, Michael P Brenner, and Stephan Hoyer. Learning to correct spectral methods for simulating turbulent flows. *arXiv preprint arXiv:2207.00556*, 2022. Lawrence C Evans. *Partial differential equations*, volume 19. American Mathematical Society, 2022. Vladimir Fanaskov and Ivan Oseledets. Spectral neural operators. *arXiv preprint arXiv:2205.10573*, 2022. Avner Friedman. *Partial differential equations of parabolic type*. Courier Dover Publications, 2008. Somdatta Goswami, Aniruddha Bora, Yue Yu, and George Em Karniadakis. Physics-informed neural operators. *arXiv preprint arXiv:2207.05748*, 2022. David Gottlieb and Steven A Orszag. *Numerical analysis of spectral methods: theory and applications*. SIAM, 1977. Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. *Advances in neural information processing systems*, 34:24048–24062, 2021. Jiequn Han, Arnulf Jentzen, and Weinan E. Solving high-dimensional partial differential equations using deep learning. *Proceedings of the National Academy of Sciences*, 115(34):8505–8510, 2018. Juncai He, Lin Li, Jinchao Xu, and Chunyue Zheng. Relu deep neural networks and linear finite elements. *arXiv preprint arXiv:1807.03973*, 2018. Dmitrii Kochkov, Jamie A Smith, Ayya Alieva, Qing Wang, Michael P Brenner, and Stephan Hoyer. Machine learning–accelerated computational fluid dynamics. *Proceedings of the National Academy of Sciences*, 118(21):e2101784118, 2021. Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. *arXiv preprint arXiv:2108.08481*, 2021. Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. *Advances in Neural Information Processing Systems*, 34:26548–26560, 2021.
sTf7mXhTVt
To minimize $F(x_0 + \delta, y)$ under some constraints, the authors solve the problem (5) derived from inequalities originating from the smoothness of $F$ and the Lipschitz continuity of the gradient. However, it is important to note that problem (5) does not necessarily entail the minimization of $F(x_0 + \delta, y)$.
Query Efficient Black-Box Adversarial Attack with Automatic Region Selection Anonymous authors Paper under double-blind review Abstract Deep neural networks (DNNs) have been shown to be vulnerable to black-box attacks in which small perturbations are added to input images without accessing any internal information of the model. However, current black-box adversarial attack methods are limited to attacks on entire regions, pixel-wise sparse attacks, or region-wise attacks. In this paper, we investigate region-wise adversarial attacks in the black-box setting, using automatic region selection and controllable imperceptibility. Technically, we formulate the problem as an optimization problem with $\ell^0_G$ and $\ell_\infty$ constraints. Here, $\ell^0_G$ represents structured sparsity defined on one collection of groups $G$, which can automatically detect the regions that need to be perturbed. We solve the problem using the algorithm of natural evolution strategies with search gradients. If $G$ is non-overlapping, we provide a closed-form solution to the first-order Taylor approximation of the objective function with the search gradient having $\ell^0_G$ and $\ell_\infty$ constraints ($FTAS_{\ell^0_G+\ell_\infty}$). If $G$ is overlapping, we provide an approximate solution to $FTAS_{\ell^0_G+\ell_\infty}$ due to its NP-hard nature, using greedy selection on the collection of groups $G$. Our method consists of multiple updates with the closed-form/approximate solution to $FTAS_{\ell^0_G}$. We provide the convergence analysis of the solution under standard assumptions. Our experimental results on different datasets indicate that we require fewer perturbations compared to global-region attacks, fewer queries compared to region-wise attacks, and better interpretability into vulnerable regions which is not possible with pixel-wise sparse attacks. 1 Introduction Deep neural networks (DNNs) have gained significant attention and are widely adopted in various applications, including computer vision [He et al., 2017; 2016], security systems [Kang & Kang, 2016; Xibilia et al., 2020], natural language processing [Bahdanau et al., 2016; Joshi et al., 2019], and autonomous driving [Bojarski et al., 2016; Levinson et al., 2011; Xiong et al., 2019]. However, extensive experiments have revealed that DNNs are susceptible to adversarial attacks, where well-designed small perturbations can deceive the models [Cai et al., 2021; Cheng et al., 2018; Su et al., 2019; Zhao et al., 2019]. The methods of adversarial attack can be classified into two main categories: white-box and black-box attacks. White-box attacks assume access to the target model, enabling the attacker to directly update adversarial examples using the gradients of the model [Dong et al., 2020; Fan et al., 2020; Kazemi et al., 2023; Zhu et al., 2021]. However, in numerous real-world scenarios, models are inaccessible, rendering gradient calculations impossible. In such situations, black-box attackers aim to approximate gradients by querying the target network to obtain output predictions for input samples. This paper focuses on discussing black-box attacks. Currently, there is a considerable amount of research dedicated to studying the adversarial vulnerability of networks in the black-box setting. The majority of these studies primarily focus on developing attacks [Ilyas et al., 2018a; b; Tu et al., 2019; Zhao et al., 2020] that target entire regions. 
Specifically, ZO-NGD [Zhao et al., 2020], which imposes an $\ell_\infty$ constraint, incorporates the zeroth-order gradient estimation technique and the second-order natural gradient to generate imperceptible perturbations on the entire image. [Ilyas et al., 2018a] proposed a method based on Natural Evolutionary Strategies to estimate the gradient under an $\ell_\infty$ constraint, and in the following year further exploited prior information to improve query efficiency [Ilyas et al., 2018b]. However, since global perturbation alters the statistical characteristics of the entire image, it may introduce abnormal visual effects. These effects have the potential to be detected not only by defense mechanisms but also by human observers.

Figure 1: A demonstration of adversarial examples and the corresponding perturbations generated by our method, Patch-RS, and Square Attack. Our method effectively identifies the region containing the target within the perturbed image and generates perturbations that align better with the target's location. Patch-RS draws a conspicuous patch on the image. Square Attack with the $\ell_\infty$ constraint does not respect any image structure, and the perturbation is obvious.

In addition to zeroth-order optimization for gradient estimation, there are also heuristic search methods for black-box attacks. For instance, Square Attack (Andriushchenko et al., 2020) is based on a random search scheme, which selects local square updates at random locations so that the perturbation in each iteration is approximately located at the boundary of the feasible set. However, it introduces noise over a large region, or even the entire image, which potentially makes the perturbations more visually apparent. Parsimonious Attack (Moon et al., 2019) divides the image into blocks according to a coarse grid, and then performs a local heuristic search in a low-dimensional space among the vertices of the $\ell_\infty$ ball. Differently, pixel-wise sparse attacks (de Vazelhes et al., 2022; Croce & Hein, 2019; Tian et al., 2022) focus on identifying pixels that contribute significantly to the attack and independently applying perturbations to these pixels. Since natural images often exhibit a local smoothness property from a statistical perspective, the added perturbations usually disrupt this property, rendering them more easily detectable by defense mechanisms. Recently, region-wise attacks have been proposed, allowing attackers to exploit vulnerabilities in specific regions or input areas. By understanding the model's behavior on specific regions, attackers can design targeted perturbations to manipulate the model's predictions in a desired manner. However, existing black-box region-wise attacks usually achieve a low attack success rate due to the nature of the black-box setting and unreasonable region selection. For instance, Croce et al. (Croce et al., 2022) perturb only the 2-pixel-wide edges of the original image, or add a patch at arbitrary locations based on a heuristic random search. Therefore, finding regions that substantially improve the attack success rate has become a crucial problem in region-wise attacks. To address this challenge, we propose an approach for black-box attacks which automatically detects the relevant regions based on a reliable criterion instead of a fixed region or heuristics.
As shown in Fig. 1, we find that FTAS produces perturbations that fit the target, outperforming the Patch-RS and Square Attack methods in terms of perturbation quality and suitability. Different from heuristic-based approaches, we technically formulate the problem as an optimization problem with $\ell_0^G$ and $\ell_\infty$ constraints. Here, $\ell_0^G$ represents structured sparsity defined on one collection of groups $G$, which can automatically detect the regions that need to be perturbed. We solve the problem using the algorithm of natural evolution strategies with search gradients. Specifically, if $G$ is non-overlapping, we provide a closed-form solution to the first-order Taylor approximation of the objective function with the search gradient under $\ell_0^G$ and $\ell_\infty$ constraints (FTAS). If $G$ is overlapping, we provide an approximate solution to FTAS due to its NP-hard nature, using greedy selection on the collection of groups $G$. In addition, we provide a geometric convergence rate in Theorem 2 under standard assumptions. We conduct experiments on different datasets demonstrating that the proposed method requires fewer perturbations and queries compared to global-region and region-wise attacks, respectively, and provides better interpretability and insights into vulnerable regions than pixel-wise sparse attacks.

2 ADVERSARIAL ATTACK WITH AUTOMATIC REGION DETECTION

In this section, we begin with a concise overview of adversarial attacks. Subsequently, we present our novel framework for adversarial attacks, which incorporates an automated region detection mechanism. A visual comparison between our method and heuristic methods is shown in Fig. 2.

Figure 2: Comparison between automatic region selection attacks and a heuristic method on the ImageNet dataset. The top half of the figure displays the perturbation sampled to determine the position and color. The bottom half showcases our method, which automatically selects multiple subregions.

2.1 ADVERSARIAL ATTACK

Let \( C(x) : \mathbb{R}^d \rightarrow \mathbb{R}^K \) be a well-trained DNN classification model, where \( x \in [0, 1]^d \) represents the original sample (if \( x \) is an image, we have \( d = w \times h \times c \), where \( w \) denotes the image width, \( h \) the image height, and \( c \) the number of color channels), and \( K \) denotes the number of image classes. The goal of an adversarial attack is to find a small perturbation \( \delta \in \mathbb{R}^d \) for a given image \( x_0 \) belonging to class \( y_0 \in \{1, 2, \cdots, K\} \) such that the model \( C \) classifies the new image \( x_0 + \delta \) into a targeted class \( y \) (\( y \neq y_0 \)). Formally, the objective is to find:
\[ \arg\max_{k=1,2,\cdots,K} C_k(x_0 + \delta) = y \quad \text{s.t.} \quad \| \delta \|_p \leq \varepsilon, \quad 0 \leq x_0 + \delta \leq 1, \]
where \( \varepsilon > 0 \) denotes the maximal allowable perturbation under the \( \ell_p \) norm. In practice, the \( \ell_p \) norm is often replaced by the \( \ell_2 \) or \( \ell_\infty \) norm (Carlini & Wagner, 2017; Ilyas et al., 2018a;b; Zhao et al., 2020).

2.2 OBJECTIVE FOR ADVERSARIAL ATTACK WITH AUTOMATIC REGION DETECTION

In this subsection, we propose a new objective function for adversarial attacks with automatic region detection. In order to detect the region automatically, an additional \( \ell_0^G \) group-norm constraint is added.
Then, the adversarial attack problem can be reformulated as follows:
\[ \min_{\delta} f(x_0 + \delta, y) \quad \text{s.t.} \quad \| \delta \|_0^G \leq k, \quad \| \delta \|_\infty \leq \varepsilon, \quad 0 \leq x_0 + \delta \leq 1, \] (1)
where \( f(\cdot) \) is the margin loss function (Carlini & Wagner, 2017), \( k \) is the group sparsity of the perturbation, and \( \varepsilon \) is the magnitude of the perturbation. In the targeted attack scenario, \( y \) is the targeted class (\( y \neq y_0 \)), while in the untargeted attack scenario it is the true class (\( y = y_0 \)). The definition of \( \| \delta \|_0^G \), the number of non-zero groups in a vector, is as follows.

**Definition 1.** Suppose \( G = \{G_1, \ldots, G_M\} \) is a set of \( M \) groups that can arbitrarily overlap, \( G_i \subseteq [d] \) and \( \bigcup_{i=1}^{M} G_i = \{1, 2, \ldots, d\} \). We use \( \mathbb{B}^M \) to represent the space of \( M \)-dimensional binary vectors and define \( \iota : \mathbb{R}^d \rightarrow \mathbb{B}^d \) such that, for any \( \delta \in \mathbb{R}^d \), \( \iota(\delta)_i = 1 \) if \( \delta_i \neq 0 \) and \( \iota(\delta)_i = 0 \) otherwise. We define the incidence matrix \( A^G \in \mathbb{B}^{d \times M} \): \( A^G_{ij} = 1 \) if \( i \in G_j \) and \( A^G_{ij} = 0 \) otherwise. The group \( \ell_0^G \) norm is defined as
\[ \| \delta \|_0^G := \min_{a \in \mathbb{B}^M} \left\{ \sum_{j=1}^{M} a_j : A^G a \geq \iota(\delta) \right\}, \]
where \( A^G a \geq \iota(\delta) \) means that \( \text{supp}(\delta) \subseteq \bigcup_{a_j=1} G_j \).

3 PROPOSED METHOD

In this section, we introduce our proposed approach to address problem (1). We outline the key steps involved in our method. Firstly, we employ the natural evolutionary strategy to estimate the gradient. Secondly, we reframe the objective problem by employing a first-order Taylor approximation. Lastly, we present a comprehensive algorithmic description, providing a step-by-step account of our method.

3.1 Natural Evolutionary Strategy

To develop an effective technique, one intuitive strategy is to employ gradient-based methods for generating adversarial examples while minimizing query requirements. Thus, we use the Natural Evolutionary Strategy (NES) (Wierstra et al., 2014), which is a derivative-free optimization approach centered around a search distribution framework. Specifically, given a current point $x$, we utilize a search distribution $\pi(\theta|x)$ to generate a new point $\theta$ from $x$. Instead of directly minimizing the loss function $f$, we focus on minimizing the expected value $F$ of the loss function under the search distribution $\pi(\theta|x)$, defined as follows:
$$F(x, y) := \mathbb{E}_{\pi(\theta|x)}[f(\theta, y)] = \int f(\theta, y)\pi(\theta|x)d\theta.$$
Next, we can compute the gradient of $F(x, y)$ with respect to $x$ using the following approach (Ilyas et al., 2018a):
$$\nabla_x F(x, y) = \mathbb{E}_{\pi(\theta|x)}[f(\theta, y)\nabla_x \log(\pi(\theta|x))].$$ (3)
Following the methodology employed in (Ilyas et al., 2018a; Wierstra et al., 2014; Ye et al., 2019), we select a point near $x$ by introducing Gaussian noise. Specifically, we employ the central difference sampling method to reduce variance.
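For concreteness, the following is a minimal NumPy sketch of this antithetic central-difference estimator; the function `f` stands for the black-box margin loss queried from the target model, and all names are illustrative rather than taken from any released implementation.

```python
import numpy as np

def nes_gradient(f, x, y, n, sigma):
    """Antithetic NES estimate of grad_x F(x, y) using n black-box queries.

    f     : black-box loss, f(image, label) -> float (one model query)
    n     : total query budget for this estimate (n/2 antithetic pairs)
    sigma : search variance of the Gaussian sampling distribution
    """
    g = np.zeros_like(x)
    for _ in range(n // 2):
        tau = np.random.randn(*x.shape)
        # Central difference along the sampled direction tau.
        g += (f(x + sigma * tau, y) - f(x - sigma * tau, y)) / sigma * tau
    return g / n
```

Each antithetic pair shares the same direction $\tau_i$, which cancels odd-order terms and reduces the variance of the estimate.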
By evaluating the gradient using these $n$ samples, we obtain a variance-reduced gradient estimation, which can be expressed as follows:
$$g = \frac{1}{n} \sum_{i=1}^{n/2} \frac{f(x + \sigma \tau_i, y) - f(x - \sigma \tau_i, y)}{\sigma} \tau_i,$$
where $\tau_i \sim \mathcal{N}(0, I)$ and $\sigma$ is the search variance. The gradient estimation $g$ is an unbiased estimate of $\nabla_x F(x, y)$, meaning that $\mathbb{E}[g] = \nabla_x F(x, y)$.

3.2 Sequential Approximation and Solutions to Each Subproblem

We now introduce an efficient approach to minimize $F$ with $\ell_0^G$ and $\ell_\infty$ constraints, utilizing our gradient estimation to obtain an approximate or closed-form solution for $F$. Assume that $F$ is a (possibly nonconvex) $L$-smooth function. Given the current point $x = x_0 + \delta^t$, we have the following relationship:
$$F(x_0 + \delta, y) \leq F(x_0 + \delta^t, y) + \nabla_x F(x_0 + \delta^t, y)^T (\delta - \delta^t) + \frac{L}{2} \|\delta - \delta^t\|_2^2.$$
Obviously, to minimize the right-hand side of the inequality, we can solve the following sequential subproblem for each given $\delta^t$:
$$\min_{\|\delta\|_0^G \leq k,\; l \leq \delta \leq u} \frac{L}{2} \|\delta - S_L(\delta^t)\|_2^2, \quad \text{where } S_L(\delta^t) = \delta^t - \frac{1}{L} \nabla F(x_0 + \delta^t, y).$$ (4)
To simplify the objective, we combine the second and third constraints into a range $\delta \in [l, u]$, where $l = \max(-\varepsilon, -x_0)$ and $u = \min(\varepsilon, 1-x_0)$, since they are both box constraints. And $\|\cdot\|$ denotes $\|\cdot\|_2$ for simplicity in this paper. We then discuss how to solve each subproblem (4) in the non-overlapping and overlapping settings, respectively.

**Non-overlapping groups.** For non-overlapping groups, we provide a closed-form solution in Theorem 1. The details of the proof are given in Appendix B.1. Note that the closed-form solution can also be obtained by Algorithm 2.

**Theorem 1.** Let $\Pi_{[l,u]}(\cdot)$ denote the projection onto $[l,u]^d$. We define $\overrightarrow{\text{Dis}}$ as the vector of the element-wise values $\text{Dis}_j$, where
$$\text{Dis}_j = [\Pi_{[l,u]}(S_L(\delta^t))]_j^2 - 2[\Pi_{[l,u]}(S_L(\delta^t))]_j [S_L(\delta^t)]_j, \quad \overrightarrow{\text{Dis}} := \Pi_{[l,u]}(S_L(\delta^t)) \odot (\Pi_{[l,u]}(S_L(\delta^t)) - 2 S_L(\delta^t)) \odot I_G.$$
$\pi(\cdot)$ denotes the indices that sort $\overrightarrow{\text{Dis}}$ in increasing order by groups. $I_G \in \mathbb{R}^d$ is a boolean map indicating the positions covered by the groups: $I_G(i) = 1$ if $i \in G$, and $0$ otherwise. The analytical solution under the non-overlapping group-sparse constraint is $(i \in \{1, 2, \cdots, M\})$
$$\delta^{t+1}_{G_i} = \begin{cases} [\Pi_{[l,u]}(S_L(\delta^t))]_{G_i}, & i = \pi(1), \pi(2), \cdots, \pi(k); \\ 0, & \text{otherwise}. \end{cases}$$
Algorithm 1 FTAS$_{\ell_0^G + \ell_\infty}$
**Input:** Initial image $x_0$, target class $y_t$, classifier $C(y|x)$, sparsity $k$, learning rate $\eta$, number of samples $n$, search variance $\sigma$
**Output:** Adversarial image $x_{adv}$ with $\|x_{adv} - x_0\|_0^G \leq k$, $\|x_{adv} - x_0\|_\infty \leq \varepsilon$
1: **Init** $x_{adv}, \delta^t, t$
2: **while** $\arg\max_y C(y|x_{adv}) \neq y_t$ **do**
3: **for** $i = 1$ **to** $n/2$ **do**
4: $\tau_i \sim N(0, I)$
5: $g_i = \frac{1}{2\sigma}(f(x_{adv} + \sigma \tau_i, y_t) - f(x_{adv} - \sigma \tau_i, y_t)) \tau_i$
6: **end for**
7: $g = \frac{2}{n} \sum_{i=1}^{n/2} g_i$
8: $\tilde{\delta}^{t+1} = \delta^t - \eta g$
9: $\overrightarrow{\text{Dis}} = \Pi_{[l,u]}(\tilde{\delta}^{t+1}) \odot (\Pi_{[l,u]}(\tilde{\delta}^{t+1}) - 2\tilde{\delta}^{t+1}) \odot I_G$ ▷ $\odot$ denotes the Hadamard product
10: $\delta^{t+1} = \Pi_{[l,u]}(P_k^G(\overrightarrow{\text{Dis}}, \tilde{\delta}^{t+1}))$ ▷ Algorithm 2
11: $x_{adv} = x_0 + \delta^{t+1}$
12: $t = t + 1$
13: **end while**

**Overlapping groups.** For overlapping groups, we propose an approximate solution outlined in Algorithm 2. In each iteration of the greedy selection process, we choose a group based on the $\overrightarrow{\text{Dis}}$ value as defined in Theorem 1. For instance, if a pixel is initially selected and belongs to multiple groups, the $\overrightarrow{\text{Dis}}$ values of the other groups containing this pixel are recalculated in subsequent steps. Additionally, to avoid redundancy, perturbation points that have already been selected are not chosen again in subsequent iterations. The details of the proof are given in Appendix B.2.

3.3 ALGORITHM

In this section, we present our algorithm for solving problem (4), named FTAS$_{\ell_0^G + \ell_\infty}$ (First-order Taylor Approximation Strategy with $\ell_0^G$ and $\ell_\infty$ constraints). The pseudocode of FTAS$_{\ell_0^G + \ell_\infty}$ is presented in Algorithm 1. In Line 1 of Algorithm 1, the initial value of $\delta$ is a random variable under a uniform distribution, and the desired $k$ groups of perturbations are then selected according to $\overrightarrow{\text{Dis}}$. Each iteration of our algorithm consists of two steps: (i) the NES gradient estimation step (Lines 3–7), and (ii) the step that computes the solution of each subproblem, where the NES gradient estimation step is the one described in subsection 3.1. Computing the solution of each subproblem can be further divided into four steps in implementation: (i) perform the gradient update on $\delta^t$, (ii) calculate $\overrightarrow{\text{Dis}}$ according to Line 9 of Algorithm 1, (iii) find the indices of the $k$ groups with the smallest $\overrightarrow{\text{Dis}}$ values, keep the corresponding entries of $\tilde{\delta}^{t+1}$, and set the others to 0, and (iv) clip the result to $\max\{l, \min\{u, \tilde{\delta}^{t+1}\}\}$ to obtain $\Pi_{[l,u]}(\tilde{\delta}^{t+1})$. Details of the implementation of step (iii) are shown in Algorithm 2. This ensures that the perturbation at every iteration satisfies the $k$-group sparsity and the $\ell_\infty$ constraint. In Algorithm 2, we select the group with the minimum $\overrightarrow{\text{Dis}}$ greedily.
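To make Lines 8–10 of Algorithm 1 concrete, here is a minimal NumPy sketch of the subproblem solution for non-overlapping groups; `groups` is a hypothetical list of index arrays partitioning the pixel indices, and the gradient `g` could come from an estimator such as the `nes_gradient` sketch above.

```python
import numpy as np

def ftas_update(delta, g, groups, k, eta, l, u):
    """One FTAS subproblem solution for non-overlapping groups
    (Lines 8-10 of Algorithm 1); illustrative sketch only.

    delta  : current perturbation delta^t, shape (d,)
    g      : estimated gradient (e.g., from nes_gradient above)
    groups : list of index arrays partitioning {0, ..., d-1}
    l, u   : element-wise box bounds, l = max(-eps, -x0), u = min(eps, 1 - x0)
    """
    delta_tilde = delta - eta * g                 # gradient step (Line 8)
    proj = np.clip(delta_tilde, l, u)             # projection onto [l, u]
    dis = proj * (proj - 2.0 * delta_tilde)       # element-wise Dis (Line 9)

    # Keep the k groups with the smallest summed Dis, zero out the rest.
    scores = np.array([dis[gi].sum() for gi in groups])
    keep = np.argsort(scores)[:k]
    new_delta = np.zeros_like(delta)
    for j in keep:
        new_delta[groups[j]] = proj[groups[j]]
    return new_delta
```

For overlapping groups, the selection would instead proceed greedily, recomputing the Dis values of groups that share pixels with already-selected groups, as described for Algorithm 2.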
For non-overlapping groups, we can obtain the closed-form solution in Theorem 1. On the other hand, given a vector \(\delta \in \mathbb{R}^d\) that requires projection onto the constraint set \(\|\delta\|_0^G \leq k\) and \(l \leq \delta \leq u\), we encounter an NP-hard problem when \(G\) contains arbitrarily overlapping groups, rendering it challenging to solve problem (4). We can obtain an approximate solution from Algorithm 2 and provide theoretical guarantees under the standard assumptions when applied to the overlapping group sparsity problem. The convergence analysis is detailed in Appendix C.

4 THEORETICAL PERFORMANCE BOUNDS

In the following, we present the convergence analysis for Algorithm 1. First, we give two important assumptions used in our analysis.

**Assumption 1.** (RSC/RSS). The function \(f : \mathbb{R}^d \rightarrow \mathbb{R}\) satisfies restricted strong convexity (RSC) and restricted strong smoothness (RSS) of order \(k^* + k\), which can be expressed as follows: \(\alpha_{k^* + k}I \preceq H(\delta) \preceq L_{k^* + k}I\), where \(H(\delta)\) is the Hessian of \(f\) at any \(\delta \in \mathbb{R}^d\) s.t. \(\|\delta\|_0^G \leq k^* + k\). RSC and RSS conditions have been widely studied in high-dimensional statistical theory (Raskutti et al., 2010; Loh & Wainwright, 2013; Agarwal et al., 2010). They guarantee that the objective function behaves like a strongly convex and smooth function over a sparse domain even if the function is non-convex.

**Assumption 2.** \( f(x_0 + \delta, y) \) is bounded on its domain; that is, there exists a generic constant \( B > 0 \) such that \( \forall \delta \in \mathbb{R}^d, l \leq \delta \leq u : |f(x_0 + \delta, y)| \leq B \).

Based on the above assumptions, we can now offer theoretical assurances for Algorithm 1.

**Theorem 2.** Let \( \delta^* \) denote the optimal solution to problem (1) and \( k^* \) denote \( \| \delta^* \|_0^G \). We can set \( \hat{k} = O(k^* \log (\| \delta^* \| / \xi)) \) (to ensure that \( \xi \geq e^{-\hat{k}/k^*} \| \tilde{\delta} \|_2 \) for all \( \tilde{\delta} \)) and \( \eta = \frac{1}{L_{k+k^*}} \); under Assumptions 1 and 2, we have a geometric convergence rate of the following form:
\[ \mathbb{E} \| \delta_T - \delta^* \| \leq \rho^T \mathbb{E} \| \delta_0 - \delta^* \| + \left( \frac{1}{1 - \rho} \right) \cdot (a + b + c), \]
where \( \rho = \left( 1 + \sqrt{\frac{k^*}{k-\hat{k}}} \right) \left( 1 - \frac{\alpha_{k+k^*}}{L_{k+k^*}} \right) \), \( a = \frac{1+\sqrt{k^*/(k-\hat{k})}}{L_{k+k^*}} \cdot \left( \sqrt{d}L_{k+k^*} \sigma + \frac{\sqrt{d}B}{\sqrt{n}\sigma} \right) \), \( c = \sqrt{\frac{k^*}{k-\hat{k}}}\, \xi \), and \( b = \frac{1+\sqrt{k^*/(k-\hat{k})}}{L_{k+k^*}} \cdot \max \{ \| [\nabla f(\delta^*)]_G \|_2 \mid G = \bigcup_{j=1}^{\tilde{k}} G_j, G_j \in G, \tilde{k} \leq k + k^* \} \).

**Remark 1.** Let \( k = O\big((L_{k+k^*}^2/\alpha_{k+k^*}^2) k^* + \hat{k}\big) \) and set \( \sigma, n \) appropriately; then the output of Algorithm 1 after \( T = O\big((L_{k+k^*}^2/\alpha_{k+k^*}^2) \cdot \log \frac{1}{\xi}\big) \) iterations satisfies
\[ \| \delta_T - \delta^* \| \leq 3\xi + \frac{L_{k+k^*}^2}{\alpha_{k+k^*}^2} b. \]
The approximation errors of the quantity \( a \) in Theorem 2 are induced by two factors: the first is the approximation of the true function \( f(x, y) \) by the function \( F(x, y) \), and the second is the approximation of \( \nabla F(x, y) \) via sample-average approximations. If \( \arg \min_\delta f(x_0 + \delta, y) \in [l, u]^d \) and \( \| \arg \min_\delta f(x_0 + \delta, y) \|_0^G \leq k \), then \( b = 0 \).
\( \xi \) comes from the greedy hard-thresholding process in Algorithm 2; in theory, \( \xi \) can be made arbitrarily small by setting \( \hat{k} = O(k^* \log (\| \delta^* \| / \xi)) \), and if \( G \) is non-overlapping, \( c = \xi = 0 \). As we reduce \( \xi \), the value of \( \hat{k} \) increases, and so does \( k \), which leads to an increase in \( b \). There is therefore a trade-off between the estimation error represented by \( \xi \) and the model selection error indicated by \( b \). We prove the theorem and the remark in Appendix C.

### 5 EXPERIMENTS

In this section, we conduct a comprehensive comparison of our proposed method with black-box adversarial attack methods in the targeted scenario (see Appendix D.6 for untargeted results). Firstly, querying a model costs considerable money and resources in the real world. Thus, we are interested in query-efficient algorithms for generating adversarial examples. On the CIFAR10 and MNIST datasets, we set the maximum number of queries to 10k for untargeted scenarios and 20k for targeted scenarios, and to 20k and 40k, respectively, for the ImageNet dataset. Secondly, we also pay attention to the imperceptibility of perturbations. We add an \( \ell_\infty \) norm constraint to control how visible the perturbations are to the human eye. Thus, our proposed method provides better structure and insights into vulnerable regions compared to single-constrained attacks.

#### 5.1 Baseline Methods

In this section, we conduct a comprehensive evaluation of the performance of our proposed method against various attack modes, including global, region-wise, and pixel-wise sparse attack modes. Specifically, we consider two types of global attacks: gradient estimation methods and heuristic methods. For gradient estimation, we compare with the Zeroth-Order Natural Gradient Descent attack (ZO-NGD) (Zhao et al., 2020), which imposes an \( \ell_\infty \) constraint. On the other hand, Parsimonious Attack (Moon et al., 2019) and Square Attack (Andriushchenko et al., 2020) with the \( \ell_\infty \) constraint boundary are used as heuristic baselines. Both attacks operate on the entire image region. We provide a detailed analysis and results comparing ASR and the \( \ell_0, \ell_2, \) and \( \ell_\infty \) norms to demonstrate that we need fewer perturbations at a similar success rate. In addition to the global attacks, we also evaluate our proposed method under region-fixed and region-wise attack modes. Specifically, we construct Fixed-ZO-NGD, Fixed-Square, and Fixed-Parsimonious, which focus on a fixed region of the image. We evaluate ASR and average and median queries to demonstrate that our method has a higher success rate with fewer queries than state-of-the-art attack algorithms under fixed regions. Furthermore, we compare with Patch-RS from Sparse-RS (Croce et al., 2022) under the same number of perturbed pixels, which heuristically finds the location of the patch. In addition, we use a visual presentation to demonstrate the structure of our perturbation set compared with the pixel-wise sparse attack method SZOHT (de Vazelhes et al., 2022). Each of these attack types has its advantages and disadvantages, as summarized in Tab. 1. Our evaluation provides insights into the effectiveness and better structure of our proposed method over the other attack modes. Due to space limitations, we defer the results on the ImageNet dataset and the ablation experiments to Appendix D.7 and Appendix E.
Table 1: Characteristics of different types of attacks

| Attack Type | Description | Visibility | Objective |
|-----------------|----------------------------------|------------|------------------------------------------------|
| Global | Alters entire image uniformly. | Highly | Degrade overall image quality. |
| Regional | Targets specific areas. | Moderately | Conceal or alter specific parts. |
| Sparse Pixel-wise | Alters a few scattered pixels. | Least | Sparse and conspicuous disturbance. |

### 5.2 Result and Analysis

As described in the previous section, we conduct a comprehensive comparison with the global attack methods, including the Parsimonious, Square $\ell_\infty$, and ZO-NGD attacks, as shown in Tab. 2 and Fig. 3. For region-wise attacks, we compare with region-fixed global algorithms and Patch-RS of Sparse-RS, as shown in Tab. 3 and Fig. 4. For the pixel-wise sparse attack mode, we give a visual presentation in Fig. 5.

**Global attack mode:** As shown in Tab. 2, our method exhibits more significant performance than the other algorithms when the proportion of the disturbed image reaches 100%, i.e., global perturbation, and it achieves an ASR similar to Square $\ell_\infty$ and better than ZO-NGD at a 30% perturbation ratio. From Fig. 3, we can see that we need to strike a balance on the constraint boundary to obtain low query counts and a high ASR. Below a 100% perturbing ratio, we outperform the other algorithms in ASR and average query count. We generate sparse and imperceptible perturbations through controllable constraints, and under tight query budgets, we can achieve a higher ASR by perturbing fewer pixels than global perturbations.

**Region-wise attack mode:** In Tab. 3, Patch-RS achieves great performance by heuristically drawing a square patch on the image, but this patch is easily detectable to the human eye. Our overlapping-group algorithm performs better than the others. From Fig. 4, we can see that when $\varepsilon$ reaches $0.5 \sim 0.6$, the performance of our overlapping-group variant in both average queries and ASR exceeds that of Patch-RS. As can be seen from the $\ell_2$ and $\ell_\infty$ distances, the imperceptibility of our method is stronger. In the region-fixed attack mode, we maintain the same constraints, i.e., the same amount and the same magnitude of perturbation. As shown in Tab. 4, our algorithm is much better than the other region-fixed algorithms under the same strict constraints and query budget. For both the MNIST and CIFAR10 datasets, the object of interest typically occupies a significant portion of the image. Consequently, selecting the attack region fixed at the center of the image yields better results compared to other locations. To showcase the exceptional efficiency of our algorithm comprehensively, we provide additional results for perturbations at other locations in Appendix E.1.

Table 2: Comprehensive comparison of global attack algorithms with $\ell_\infty$ constraints on CIFAR10 and MNIST, where $\varepsilon = 0.1$ in CIFAR10 and $\varepsilon = 0.4$ in MNIST.

| Algorithm | CIFAR10 ($\varepsilon = 0.1$) | MNIST ($\varepsilon = 0.4$) |
|--------------------|--------|------|
| | ASR | Avg. | Med. | $\ell_0$ | $\ell_2$ | ASR | Avg. | Med. | $\ell_0$ | $\ell_2$ |
| Parsimonious | 96.00% | 1164.3 | 212.0 | 3061.0 | 5.4 | 88.74% | 2318.4 | 65.0 | 227.7 | 5.5 |
| Square $\ell_\infty$ | 90.69% | 1405.5 | 103.0 | 3053.9 | 5.4 | 98.07% | 419.5 | 103.0 | 478.6 | 8.3 |
| ZO-NGD | 75.20% | 6105.2 | 707.0 | 3055.3 | 5.4 | 96.90% | 522.9 | 101.0 | 469.6 | 8.2 |
| Ours(N)$_{100\%d}$ | 99.40% | 963.4 | 387.5 | 3051.4 | 5.1 | 98.66% | 286.4 | 93.5 | 511.9 | 8.0 |
| Ours(O)$_{100\%d}$ | 98.67% | 980.9 | 379.0 | 3057.7 | 5.0 | 99.03% | 412.9 | 104.5 | 541.0 | 7.8 |
| Ours(N)$_{30\%d}$ | 90.13% | 4684.3 | 1353.5 | 923.4 | 2.9 | 87.28% | 4531.8 | 1076.0 | 197.9 | 5.3 |
| Ours(O)$_{30\%d}$ | 91.61% | 4999.0 | 1470.5 | 908.4 | 2.8 | 77.40% | 6290.3 | 1581.0 | 176.4 | 4.9 |

*(N) Non-overlapping groups; (O) Overlapping groups; Number%d: the proportion of perturbed image features.

Table 3: Comprehensive comparison with the Sparse-RS (Patch-RS) algorithm with $\ell_0$ constraints on MNIST and CIFAR10. The perturbation ratio of the image is 10% of all features for all algorithms.

| Algorithm | ASR | Avg. | Med. | $\ell_2$ | $\ell_\infty$ | ASR | Avg. | Med. | $\ell_2$ | $\ell_\infty$ |
|--------------------|--------|------|------|----------|---------------|--------|------|------|----------|---------------|
| Patch-RS | 92.51% | 1954.0 | 74.0 | 8.9 | 0.9 | 88.45% | 2953.8 | 146.0 | 6.3 | 0.9 |
| Ours(N)$_{\varepsilon=1}$ | 93.16% | 2386.7 | 1714.0 | 8.4 | 0.9 | 99.25% | 931.2 | 373.5 | 7.9 | 1.0 |
| Ours(O)$_{\varepsilon=1}$ | 99.08% | 1537.7 | 352.0 | 12.6 | 1.0 | 100.00% | 359.3 | 142.5 | 10.7 | 1.0 |
| Ours(N) | 72.60% | 8960.2 | 4822.0 | 2.8 | 0.2 | 82.00% | 5450.5 | 1266.5 | 4.1 | 0.5 |
| Ours(O) | 84.58% | 6638.4 | 3369.0 | 3.5 | 0.2 | 91.51% | 3513.0 | 1009.0 | 4.9 | 0.5 |

*(N) Non-overlapping groups; (O) Overlapping groups; Ours without footnote indicates $\varepsilon = 0.2$ for CIFAR10 and $\varepsilon = 0.5$ for MNIST.

Figure 4: Average query count and Attack Success Rate (ASR) achieved by our algorithm on the MNIST ($M$) and CIFAR10 ($C$) datasets under different disturbance amplitudes. Only 10% of the dimensions are perturbed on both datasets. The total number of maximum perturbed pixels is the same for all algorithms.

**Pixel-wise attack mode:** In the last column of Fig. 5, we present the visual renderings of the adversarial examples generated by SZOHT. It is evident that SZOHT introduces sparse perturbations across global pixels. Intuitively, however, these perturbations may not exhibit a direct connection to the target class.

Figure 5: Adversarial examples and the corresponding perturbations on the ImageNet dataset crafted by all baseline methods when attacking the Inception-v3 model in the black-box setting with a randomly selected target.

Table 4: Comprehensive comparison of region-fixed targeted attack algorithms with $\ell_0^G + \ell_\infty$ constraints on MNIST and CIFAR10, where $\varepsilon = 0.4$ in MNIST, $\varepsilon = 0.1$ in CIFAR10. The perturbation ratio of the image is 10% of all features for all algorithms.

| Algorithm | ASR | Avg. | Med. | ASR | Avg. | Med. |
|--------------------|-------|-------|-------|-------|-------|-------|
| Fixed-Parsimonious | 26.00% | 15062.5 | 20000.0 | 6.79% | 18717.1 | 20000.0 |
| Fixed-Square $\ell_\infty$ | 57.56% | 11599.6 | 8754.0 | 17.38% | 16630.1 | 20000.0 |
| Fixed-ZO-NGD | 33.50% | 10688.3 | 20000.0 | 36.12% | 14573.9 | 20000.0 |
| Ours(N)$_{10\%d}$ | 63.77% | 9944.6 | 7512.0 | 59.70% | 9476.7 | 4768.0 |
| Ours(O)$_{10\%d}$ | 60.65% | 10488.5 | 7892.0 | 50.13% | 11289.9 | 19936.5 |

* (N) Non-overlapping groups; (O) Overlapping groups; Number%d: the proportion of perturbed image features.

**Region-wise attack on ImageNet:** In this study, we present the performance of region-wise adversarial attacks on the ImageNet dataset, concisely summarized in Tab. 5; a detailed analysis is provided in Appendix D. We observed that attacks targeting high-resolution images exhibit lower success rates and necessitate a greater number of queries, particularly when subject to double constraints. The complexity of high-resolution images and intricate network architectures underscores the need for more refined optimization strategies in adversarial attacks. Notably, in scenarios involving non-overlapping groups, our proposed methodology demonstrates a distinct advantage by offering a closed-form solution, in contrast to heuristic approaches.

Table 5: Performance of region-based attack patterns on the ImageNet dataset

| Algorithm | Inception-v3 | ViT-B/16 |
|--------------------|--------------|----------|
| | ASR | Avg. | Med. | ASR | Avg. | Med. |
| Patch-RS | 92.29% | 2968.6 | 685.5 | 86.90% | 3572.8 | 1849.5 |
| Ours(N)$_{\varepsilon=1}$ | 98.95% | 2756.8 | 312.0 | 98.89% | 2682.5 | 1478.0 |
| Ours(O)$_{\varepsilon=1}$ | 98.02% | 1927.4 | 406.5 | 99.84% | 3896.7 | 1008.5 |
| Fixed-Parsimonious | 74.46% | 9631.3 | 7543.0 | 85.36% | 8248.5 | 4952.5 |
| Fixed-Square $\ell_\infty$ | 75.53% | 8893.3 | 2984.5 | 78.95% | 8624.8 | 2286.0 |
| Fixed-ZO-NGD | 79.12% | 7436.0 | 1500.0 | 80.98% | 9426.0 | 3789.5 |
| Ours(N)$_{10\%,\varepsilon=0.1}$ | 76.89% | 7202.8 | 5117.7 | 77.26% | 6796.3 | 5470.0 |
| Ours(O)$_{10\%,\varepsilon=0.1}$ | 83.15% | 5298.4 | 2965.0 | 89.40% | 6300.5 | 3238.0 |

6 CONCLUSION

In conclusion, we presented a novel approach for region-wise adversarial attacks in the black-box setting. By utilizing automatic region selection and controllable imperceptibility, our proposed method shows improved effectiveness and interpretability compared to existing attack modes. Experimental evaluations demonstrated that the method requires fewer perturbations and queries while achieving higher success rates, and it provides valuable insights into understanding vulnerable regions and enhancing the robustness of deep neural networks against adversarial attacks. We acknowledge that different groupings can have a great impact on the results; in the future, we will explore combining this method with techniques such as image segmentation and principal component analysis to study the robustness and fragility of neural networks from more perspectives.

REFERENCES

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI'16), pp. 265–283, 2016.

Alekh Agarwal, Sahand Negahban, and Martin J Wainwright. Fast global convergence rates of gradient methods for high-dimensional statistical recovery.
Advances in Neural Information Processing Systems, 23, 2010.

Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: A query-efficient black-box adversarial attack via random search. In Computer Vision – ECCV 2020, pp. 484–501, Cham, 2020. Springer International Publishing.

Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4945–4949. IEEE, 2016.

Albert S Berahas, Liyuan Cao, Krzysztof Choromanski, and Katya Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Foundations of Computational Mathematics, 22(2):507–560, 2022.

Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.

HanQin Cai, Yuchen Lou, Daniel McKenzie, and Wotao Yin. A zeroth-order block coordinate descent algorithm for huge-scale black-box optimization. In International Conference on Machine Learning, pp. 1193–1203. PMLR, 2021.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE, 2017.

Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. arXiv preprint arXiv:1807.04457, 2018.

Francesco Croce and Matthias Hein. Sparse and imperceivable adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4724–4732, 2019.

Francesco Croce, Maksym Andriushchenko, Naman D Singh, Nicolas Flammarion, and Matthias Hein. Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6):6437–6445, 2022.

William de Vazelhes, Hualin Zhang, Huimin Wu, Xiao-Tong Yuan, and Bin Gu. Zeroth-order hard-thresholding: Gradient error vs. expansivity. arXiv preprint arXiv:2210.05279, 2022.

Xiaoyi Dong, Dongdong Chen, Jianmin Bao, Chuan Qin, Lu Yuan, Weiming Zhang, Nenghai Yu, and Dong Chen. Greedyfool: Distortion-aware sparse adversarial attack. Advances in Neural Information Processing Systems, 33:11226–11236, 2020.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.

Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, and Yuju Yang. Sparse adversarial attack via perturbation factorization. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXII 16, pp. 35–50. Springer, 2020.
KqTzfiNjWU
If I understand correctly, DPS is adapted to the proposed framework. But does this make sense? In particular, while I understand that DPS is not well suited to dehazing or deraining, it performs rather well on deblurring - yet Figure 7 shows no difference between the input images and the output images for DPS. Could the authors comment on that?
RESTORER GUIDED DIFFUSION MODELS FOR VARIATIONAL INVERSE PROBLEMS Anonymous authors Paper under double-blind review ABSTRACT Diffusion models have made remarkable progress in solving various inverse problems, owing to the generative modeling capability of the data manifold. Posterior sampling from the conditional score function enables precise data consistency certified by the measurement-based likelihood term. However, most prevailing approaches are confined to the deterministic deterioration process of the measurement model, regardless of variational unpredictable disturbances in real-world scenarios. To address this obstacle, we show that the measurement-based likelihood can be replaced with a restoration-based likelihood in the opposite probabilistic graphic direction, licensing the patronage of various off-the-shelf restoration models and extending the strict deterministic deterioration process to a tolerant cluster process with a supposed prototype, in what we call restorer guidance. Particularly, assembled with versatile prototypes optionally, we can resolve inverse problems with a range of choices for assorted sample quality and realize proficient deterioration control with assured realism. We show that our work can be formally analogous to the transition from classifier guidance to classifier-free guidance in the field of inverse problem solvers. Experiments on multifarious inverse problems demonstrate the effectiveness of our method, including image dehazing, rain streak removal, and motion deblurring. Code will be available soon.

1 INTRODUCTION

"A thousand roads lead men forever to Rome." Liber Parabolarum Alani

Diffusion models [Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020] have recently emerged as impressive generative models with promising performance in various applications such as image generation [Rombach et al., 2022; Zhang & Agrawala, 2023; Saharia et al., 2022], image editing [Meng et al., 2021; Brooks et al., 2023; Ruiz et al., 2023], video generation [Ho et al., 2022], speech synthesis [Huang et al., 2022], and 3D generative modeling [Poole et al., 2022; Tewari et al., 2023]. Apart from that, diffusion models also serve as competitive candidates for inverse problem solvers, which aim at reversing the deterioration process from the contaminated measurement $y$ to the original complete signal $x$ [Chung et al., 2022; 2023b; Song et al., 2023].

Solving inverse problems with diffusion models can be crafted in multiform frameworks. The Bayesian approach incorporates the gradients from the measurement-based likelihood, i.e., $\nabla_x \log p(y|x)$, forming the conditional score function for posterior sampling, and the data consistency can be ensured with the dependency derived from the measurement model $H$. Representative methods [Chung et al., 2022; 2023b; Song et al., 2023] progressively extend the diffusion solvers with linear, non-linear, or even non-differentiable measurement models for increasingly complicated inverse problems. Beyond the Bayes' formula, there is a broad range of alternatives delivering the balance between data fidelity and realism for solving inverse problems, such as range-null space decomposition [Wang et al., 2023] and heuristic energy functions with configured properties [Fei et al., 2023; Zhao et al., 2022]. These methods can be comfortably adapted to multifarious inverse problems without retraining the diffusion model.
However, it is worth noting that most prevailing approaches are confined to the deterministic deterioration process of the measurement model, mostly involving digitized deterioration such as image inpainting, image colorization, and phase retrieval, regardless of variational unpredictable disturbances in real-world scenarios, including but not limited to variational weather conditions [Zhu et al., 2023] or manual destruction [Köhler et al., 2012].

Figure 1: Visual illustration of various likelihood terms in prevailing diffusion-based inverse problem solvers. Compared to the deterministic deterioration process of the measurement-based likelihood, the generative-based and restoration-based likelihoods are capable of handling variational deterioration processes with a reliable likelihood derived from the congruous deterioration process and the measurement, while the generative-based likelihood is further restricted to a rigid formulation.

Another line of works [Stevens et al., 2023; Chung et al., 2023a] introduces the generative-based likelihood with parallel diffusion models for the signal \( x \) and the deterioration parameters in the measurement model \( H \), and jointly estimates their score functions for posterior sampling, which relieves the deficiency of the deterministic deterioration process with the bestowed variational capability. Additionally, [Laroche et al., 2023] alternately estimates the measurement parameters and the data distribution under the traditional iterative optimization framework in the same spirit. However, these methods remain in the paradigm of the measurement-based likelihood, and are confined to the rigid formulation of the measurement model for signal formation, e.g., convolution, addition, and multiplication, with merely estimated deterioration parameters, which inevitably restricts their variational capability for more complicated scenarios. Moreover, it is noteworthy that aside from the aforementioned pros and cons of various likelihood terms, the coupled learning of the measurement model has to be realized on the fly, which is substantially time-consuming and inconvenient to deploy.

In this work, we extend prevailing diffusion solvers for variational inverse problems beyond the restriction of the deterministic deterioration process without any extra training. In the context of the Bayes' framework, we show that the measurement-based likelihood can be replaced with a restoration-based likelihood in the opposite probabilistic graphic direction, forming a reliable conditional score function for posterior sampling, in what we call restorer guidance. Compared with the measurement-based likelihood, restorer guidance licenses the patronage of various off-the-shelf restoration models, and implicitly extends the strict deterministic process of the measurement-based likelihood to a cluster of deterioration processes with a supposed restorer prototype for variational inverse problem solving. In Fig. 1, we further illustrate that the devil in the measurement-based likelihood resides in the incongruous dependency between the forward deterioration process and the contaminated measurement, which can be properly resolved with the tolerant cluster process derived from the restorer prototype for a reliable likelihood. Assembled with versatile restorer prototypes optionally, we can resolve inverse problems with a range of choices for assorted sample quality and realize proficient deterioration control with assured realism.
We show that our work is formally analogous to the transition from classifier guidance [Dhariwal & Nichol, 2021] to classifier-free guidance [Ho & Salimans, 2022] in the field of inverse problem solvers. Note that our method is also compatible with frameworks beyond the Bayesian one, such as range-null space decomposition (see Appendix B). Empirically, we demonstrate the effectiveness of our method on various variational inverse problems, including image dehazing, rain streak removal, and motion deblurring, and show that our restorer guidance is a competitive inverse problem solver. The restorer guidance is not only capable of exploiting the restoration capability conserved in restorers losslessly, but rather breaks the upper bound of the restorer for superior sample quality (Fig. 3). Moreover, restorer guidance is also favourable to out-of-distribution deterioration with an augmented cluster process.

2 BACKGROUND

2.1 SCORE-BASED DIFFUSION MODELS

Score-based diffusion models smoothly transform the data distribution to a spherical Gaussian distribution with a diffusion process, and reverse the process with score matching to synthesize samples. The forward process \( \{x(t)\}_{t \in [0,T]} \), \( x(t) \in \mathbb{R}^D \), can be represented with the following Itô stochastic differential equation (SDE) [Song et al., 2020]:
\[ dx = f(x, t)dt + g(t)dw, \] (1)
where \( f(\cdot, t) : \mathbb{R}^D \rightarrow \mathbb{R}^D \) is the drift coefficient, \( g(t) \in \mathbb{R} \) is the diffusion coefficient, and \( w \in \mathbb{R}^D \) is the standard Wiener process (a.k.a. Brownian motion). Let \( p_t(x) \) denote the marginal distribution of \( x(t) \). The data distribution is defined at \( t = 0 \), i.e., \( x(0) \sim p_{\text{data}} \), and a tractable prior distribution is approximated at \( t = T \), e.g., \( x(T) \sim \mathcal{N}(0, I) \). \( p_{0t}(x_t | x_0) \) denotes the transition kernel from \( x(0) \) to \( x(t) \). Note that we always have \( p_0 = p_{\text{data}} \) by the forward definition.

Samples from \( p_t(x) \) can be simulated via the associated reverse-time diffusion process of Eq. (1), solved from \( t = T \) to \( t = 0 \), given by the following SDE [Anderson, 1982; Song et al., 2020]:
\[ d\bar{x} = [f(x, t) - g(t)^2 \nabla_x \log p_t(x)]dt + g(t)d\bar{w}, \] (2)
where \( \bar{w} \) is the reverse-time standard Wiener process, and \( dt \) is an infinitesimal negative timestep. The reverse process of Eq. (2) can be derived with the score function \( \nabla_x \log p_t(x) \) at each time \( t \), which is typically replaced with \( \nabla_{x(t)} \log p_{0t}(x(t) | x(0)) \) in practice, and is approximated via a score-based model \( s_\theta(x(t), t) \) trained with the denoising score matching objective [Vincent, 2011]:
\[ \theta^* = \arg \min_\theta \mathbb{E}_{t \sim U(\varepsilon, 1),\, x(t) \sim p_{0t}(x(t) | x(0)),\, x(0) \sim p_{\text{data}}} \left[ \| s_\theta(x(t), t) - \nabla_{x(t)} \log p_{0t}(x(t) | x(0)) \|_2^2 \right], \] (3)
where \( \varepsilon \simeq 0 \) is a small positive constant. Score matching ensures that the optimal solution \( \theta^* \) satisfies \( s_{\theta^*}(x(t), t) \simeq \nabla_x \log p_t(x) \) with sufficient data and model capacity.
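As an illustration of the objective in Eq. (3), the following is a minimal PyTorch-style sketch of denoising score matching under the VP-SDE (DDPM) transition kernel \( p_{0t}(x_t | x_0) = \mathcal{N}(\sqrt{\bar{\alpha}_t} x_0, (1-\bar{\alpha}_t)I) \); `score_model` and the discrete `alpha_bar` schedule are assumed placeholders, not the released implementation.

```python
import torch

def dsm_loss(score_model, x0, alpha_bar):
    """Denoising score matching under the VP-SDE / DDPM transition kernel.

    p_{0t}(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I), hence
    grad_{x_t} log p_{0t}(x_t | x_0) = -eps / sqrt(1 - abar_t).
    """
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,))       # uniform timesteps
    a = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * eps          # sample from the kernel
    target = -eps / (1.0 - a).sqrt()                     # true conditional score
    return ((score_model(xt, t) - target) ** 2).mean()
```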
One can replace the score function in Eq. (2) with \( s_\theta(x(t), t) \) to calculate the reverse-time diffusion process and solve the trajectory with numerical samplers, such as Euler-Maruyama, the ancestral sampler [Ho et al., 2020], the probability flow ODE [Song et al., 2020], and DPM-Solver [Lu et al., 2022], which amounts to sampling from the data distribution \( p_{\text{data}}(x) \) with the goal of generative modeling.

### 2.2 Solving Inverse Problems with Diffusion Models

Solving inverse problems with diffusion models leverages the implicit prior of the underlying data distribution that the diffusion model has learned [Chung et al., 2022; 2023b; Song et al., 2023; Stevens et al., 2023]. Formed in the Bayes' framework, we have \( p(x|y) = p(y|x)p(x)/p(y) \). Let \( y \) denote the contaminated observation derived from the complete signal \( x \); we can straightforwardly modify the unconditional score function in Eq. (2) with the following posterior formula, similar to classifier guidance [Dhariwal & Nichol, 2021]:
\[ \nabla_{x_t} \log p_t(x_t | y) = \nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log p_t(y | x_t), \] (4)
where the prior term can be approximated via the pre-trained score model \( s_{\theta^*}(x_t, t) \), and the likelihood term can be acquired via the compound of Tweedie's formula [Efron, 2011] and the measurement model from \( x \) to \( y \) to ensure the data consistency. Simply replacing the score function in Eq. (2) with Eq. (4) enables the conditional reverse-time diffusion process for posterior sampling:
\[ dx = [f(x, t) - g(t)^2 (\nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log p_t(y | x_t))] dt + g(t)d\bar{w}, \] (5)
where the first term promises realism powered by the diffusion manifold constraint, and the second term ensures data fidelity. It is worth noting that the likelihood can be further approximated with a heuristic energy function with configured properties [Zhao et al., 2022; Fei et al., 2023].

### 3 METHODS

#### 3.1 Approximating the Measurement-Based Likelihood

Recall that posterior sampling from the conditional score function requires the likelihood term \( \nabla_{x_t} \log p_t(y | x_t) \) to provide the guidance, which is intractable to compute. Pioneering works typically factorize \( p_t(y | x_t) \) with the marginalization over \( x_0 \), considering the underlying graphical model:
\[ p(y | x_t) = \int_{x_0} p(y | x_0, x_t)p(x_0 | x_t)dx_0 = \int_{x_0} p(y | x_0)p(x_0 | x_t)dx_0, \] (6)
since \( y \) is independent of \( x_t \) when conditioned on \( x_0 \). In this way, we can approximate \( p(x_0 | x_t) \) via a one-step denoising process with Tweedie's formula [Efron, 2011], and solve \( p(y | x_0) \) from the measurement model. Unfortunately, the prevalent measurement-based likelihood is restricted to the deterministic formulation of the measurement model, impeding diffusion solvers for variational inverse problems; see Appendix D for details.

#### 3.2 Restorer Guidance

To address the abovementioned limitations, we show that the measurement-based likelihood can be replaced with a restoration-based likelihood for data consistency, in what we call restorer guidance. Compared with the measurement-based likelihood, restorer guidance licenses the patronage of various off-the-shelf restoration models for powerful diffusion solvers, considering their comprehensive sensitivity to multifarious deterioration processes.
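Both likelihood families build on the one-step posterior-mean estimate \( \hat{x}_{0|t} \) given by Tweedie's formula; below is a minimal sketch under the VP-SDE kernel, assuming `score_model` approximates \( \nabla_{x_t} \log p_t(x_t) \) (names are illustrative).

```python
import torch

def tweedie_x0(score_model, xt, t, alpha_bar_t):
    """Posterior-mean estimate x0_hat = E[x_0 | x_t] via Tweedie's formula.

    For x_t | x_0 ~ N(sqrt(abar_t) * x_0, (1 - abar_t) * I):
        E[x_0 | x_t] = (x_t + (1 - abar_t) * score(x_t, t)) / sqrt(abar_t)
    """
    return (xt + (1.0 - alpha_bar_t) * score_model(xt, t)) / alpha_bar_t ** 0.5
```

With \( \hat{x}_{0|t} \) in hand, the measurement-based likelihood of Eq. (6) differentiates a term of the form \( \|y - H(\hat{x}_{0|t})\|^2 \) back to \( x_t \) through the score network, whereas the restoration-based likelihood introduced next avoids this backward dependency on the measurement model.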
We first write the factorized restoration-based likelihood \( \hat{p}(x_t|y) \) as follows for comparison; the modified conditional score function together with the restorer-guided posterior sampling will be introduced later.
\[ \hat{p}(x_t|y) = \int_{x_0} p(x_t|x_0,y)p(x_0|y)dx_0 = \int_{x_0} p(x_t|x_0)p(x_0|y)dx_0, \] (7)
where the measurement \( y \) is independent of \( x_t \) when conditioned on \( x_0 \). Note that the probabilistic graphic direction of Eq. 7 is opposite to that of the measurement-based likelihood (Eq. 6) for confident data consistency, as shown in Fig. 2. Solving \( p(x_0|y) \) with assorted restoration models \( R \) enables the establishment of the variational cluster process, while \( p(x_t|x_0) \) can be directly derived from the forward process, e.g., \( p(x_t|x_0) \sim N(\sqrt{\bar{\alpha}(t)}x_0,(1-\bar{\alpha}(t))I) \) in the case of VP-SDE or DDPM [Ho et al., 2020]. Therefore, we have \( \hat{p}(x_t|y) \sim N(\sqrt{\bar{\alpha}(t)}R(y),(1-\bar{\alpha}(t))I) \), considering the deterministic process of \( p(x_0|y) \). The score of the restoration-based likelihood can be written as:
\[ \nabla_{x_t} \log \hat{p}(x_t|y) \simeq -\frac{1}{\sigma_t^2} \nabla_{x_t} \| x_t - \sqrt{\bar{\alpha}(t)}R(y) \|^2_2, \] (8)
where \( \sigma_t \) is exactly the standard deviation of \( \hat{p}(x_t|y) \), and we discard it to transform the underlying distribution of the mean-reverting error (Eq. 8) from the time-constant \( \epsilon \sim N(0,I) \) to the time-dependent \( \epsilon_t \sim N(0,\sigma_t^2 I) \) for adaptive restorer guidance with relaxation related to the noise schedule. Another perspective is provided in Appendix A. Once we obtain \( \nabla_{x_t} \log \hat{p}(x_t|y) \), we can freely plug it into the modified conditional score function for restorer-guided posterior sampling.

3.3 POSTERIOR SAMPLING FROM RESTORER GUIDANCE

To enable posterior sampling from the restorer guidance and form the branded conditional score function, we rewrite the likelihood term in Eq. 4 as follows via Bayes' rule:
\[ \nabla_{x_t} \log p_t(y|x_t) = \nabla_{x_t} \log \hat{p}_t(x_t|y) - \nabla_{x_t} \log p_t(x_t), \] (9)
which translates the measurement-based likelihood \( \nabla_{x_t} \log p_t(y|x_t) \) into the restoration-based likelihood \( \nabla_{x_t} \log \hat{p}_t(x_t|y) \); the latter is then used in place of \( \nabla_{x_t} \log p_t(y|x_t) \) when posterior sampling with diffusion solvers. Therefore, the conditional score function can be simply accessed by plugging the derivation of Eq. 9 into Eq. 4. Considering the typical parameter \( w \) that controls the strength of the measurement-based guidance, i.e., \( w \nabla_{x_t} \log p_t(y|x_t) \), we have:
\[ \nabla_{x_t} \log p_t(x_t|y) = (1-w)\nabla_{x_t} \log p_t(x_t) + w\nabla_{x_t} \log \hat{p}_t(x_t|y), \] (10)
where \( w \) is generally a positive number for smooth control between data consistency and realism. In the context of the restoration-based likelihood, data consistency is further exteriorized as restorer intensity to flexibly release the power of the restoration model. Substituting the derived restoration-based likelihood of Eq. 8 enables posterior sampling from the restorer guidance. The conditional score function in Eq. 10 formally becomes:
\[ \nabla_{x_t} \log p_t(x_t|y) \simeq \eta s_{\theta^*}(x_t,t) - \rho \nabla_{x_t} \| x_t - \sqrt{\bar{\alpha}(t)}R(y) \|^2_2, \] (11)
where we relax the strict constraint of Eq. 10 and set the parameters \( \eta \) and \( \rho \) as harmonic step sizes for the unconditional prior term and the restoration-based likelihood term, considering the complicated balance between restorer intensity and data realism countered by the diffusion model.
**Related to classifier-free guidance.** It is worth noting that the prevailing measurement-based likelihood is homologous to classifier guidance [Dhariwal & Nichol, 2021], considering the same role the classifier and the measurement model play in the conditional score function. Beyond that, we show that the restorer guidance is formally analogous to classifier-free guidance [Ho & Salimans, 2022] in terms of the likelihood decomposition (Eq. 9), while the difference lies in the conditional prior term $\nabla_{x_t} \log p_t(x_t | y)$ assumed in Eq. 9, resulting in the following score:
$$\nabla_{x_t} \log p_t(x_t | y) = (w + 1)\nabla_{x_t} \log \hat{p}_t(x_t | y) - w\nabla_{x_t} \log p_t(x_t),$$ (12)
which is exactly classifier-free guidance, sampling from the linear combination of the unconditional and conditional score estimates. Compared with restorer guidance, the conditional score in Eq. 12 is provided by an extra-trained conditional diffusion model, rather than arbitrary off-the-shelf restorers. This also explains why the constraint in Eq. 10 needs to be relaxed, as data realism cannot be guaranteed by the restorer-based likelihood term, compared to the diffusion guidance.

### 3.4 Extension of the Restorer Guidance

The restorer guidance of Eq. 11 presents the conceptual transition from the measurement-based to the restoration-based likelihood, and we show that it can be further extended to release the great potential of alternative restorers for constructing powerful diffusion solvers. We provide the following three major extensions of the original restorer guidance.

**Step 1: Gradient orientation.** Apart from the measurement-based likelihood, whose conditional gradients from $\nabla_{x_t} \log p_t(y | x_t)$ are traced back to the current $x_t$, the likelihood gradients in restorer guidance $\nabla_{x_t} \log \hat{p}_t(x_t | y)$ can be made solely dependent on the unconditional diffusion update, by virtue of the opposite probabilistic graphic direction. Therefore, the parallel gradient update in the conditional score function can be replaced with a serial update for efficient gradient orientation. Let $x'_t$ denote the unconditional update of $x_t$; we can rewrite Eq. 11 as follows:
$$\nabla_{x_t} \log p_t(x_t | y) \simeq \eta s_{\theta^*}(x_t, t) - \rho \nabla_{x'_t} \|x'_t - \sqrt{\bar{\alpha}(t)} R(y)\|_2^2,$$ (13)
where we retain the weighting parameter $\sqrt{\bar{\alpha}(t)}$, considering the harmonic step size of the unconditional diffusion model, and Eq. 13 can be approximately regarded as a serial update for brevity.

**Step 2: Restorer traveling.** The likelihood in the original restorer guidance only involves $R(y)$ for the application of the restoration model, which is insufficient to release the great potential of alternative restorers for powerful solvers. Proceeding from this limitation, we show that the restorer can optionally be invoked recursively, with the escort of the diffusion model.
Besides the guidance provided by the restorer, we explicitly apply the restoration model to the one-step denoising result $\hat{x}_{0|t}$ for reliable data consistency, forming the unconditional update $x'_t$ in the case of DDPM sampling as follows:

$$x'_t \leftarrow \frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t} x_t + \frac{\sqrt{\bar{\alpha}_{t-1}}\beta_t}{1 - \bar{\alpha}_t} R(\hat{x}_{0|t}) + \tilde{\sigma}_t z, \quad z \sim N(0, I),$$

where we denote $\alpha(t)$ as $\alpha_t$ for simplicity, $\beta_t \triangleq 1 - \alpha_t$, and $\tilde{\sigma}_t$ is the reverse diffusion variance. It is worth noting that the explicit restoration of $\hat{x}_{0|t}$ does not hinder the likelihood gradients from the restorer guidance, which can depend solely on the unconditional update $x'_t$ (Eq. 13). We

Table 1: Quantitative comparison of solving variational inverse problems with competitive solvers. The baseline results of the restorer prototype are in brown. **Bold**: best, _underline_: second best.

| Method | Image Dehaze | Rain streak removal | Motion Deblur |
|-------------------------|--------------|---------------------|---------------|
| | PSNR ↑ SSIM ↑ FID ↓ LPIPS ↓ | PSNR ↑ SSIM ↑ FID ↓ LPIPS ↓ | PSNR ↑ SSIM ↑ FID ↓ LPIPS ↓ |
| NAFNet (Chen et al., 2022) | 30.12 0.973 4.88 0.015 | 33.13 0.951 26.93 0.079 | 33.71 0.947 8.82 0.078 |
| MPRNet (Zamir et al., 2021) | 27.33 0.962 8.46 0.023 | 34.95 0.959 26.86 0.073 | 32.66 0.936 10.98 0.089 |
| IR-SDE (Luo et al., 2023) | 24.90 0.924 9.45 0.039 | 34.20 0.964 10.30 0.019 | 30.63 0.901 6.33 0.062 |
| DPS (Chung et al., 2023b) | 17.29 0.650 58.78 0.276 | 23.18 0.627 142.55 0.340 | 24.86 0.742 83.96 0.371 |
| DDNM (Wang et al., 2023) | 12.68 0.556 31.72 0.217 | 12.96 0.453 178.24 0.366 | 25.52 0.752 60.83 0.304 |
| Restorer guidance - Bayesian | **30.21** 0.975 **4.58** 0.013 | **33.54** 0.957 **25.71** 0.071 | **34.28** 0.953 **7.59** 0.064 |
| Restorer guidance - Null-space | 30.17 0.973 4.71 0.014 | 33.42 0.952 26.15 0.074 | 33.96 0.951 8.23 0.076 |

provide this extension as an optional component, and the lightweight restorers introduce negligible computational overhead compared to the unconditional score model $s_{\theta^*}(x_t, t)$.

**Step 3: Measurement boosting.** The restorer guidance presented so far depends only on the information provided by the restoration model, ignoring the original information contained in the measurement $y$, which is prone to suboptimal, prototype-biased solving results. To this end, we reformulate the conditional score function in Eq. (11) to incorporate the information on both sides of the restorer. Combining the above two extensions, we arrive at the complete conditional score function of the restorer guidance:

$$\nabla_{x_t} \log p_t(x_t | y) \simeq \eta s_{\theta^*}(x_t, t) - \rho \nabla_{x_{t-1}} \| x_{t-1} - \sqrt{\alpha_t} R(y) \|_2^2 + \zeta \nabla_{x_{t-1}} \| x_{t-1} - \sqrt{\alpha_t} y \|_2^2,$$ (15)

where $\zeta$ is a parameter that controls the strength of the score derived from the measurement, $\zeta \ll \rho$, and we perform gradient ascent on this term to boost the performance of the diffusion solver. We provide the full posterior sampling procedure with the complete conditional score function of the restorer guidance under the DDPM and DDIM samplers in Algorithms 1 and 2.
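To make the combined procedure concrete, the following is a minimal sketch, not the authors' released code, of one restorer-guided DDPM step combining Eqs. 13-15: serial gradient orientation, restorer traveling on the one-step denoised estimate, and measurement boosting. We assume an ε-prediction score network `score_model(x, t)`, a black-box restorer `R`, precomputed 1-D schedule tensors `alphas`, `alphas_bar`, `sigmas`, and we weight the guidance targets by the noise scale at the updated step; all of these interface and scaling choices are our assumptions, not prescribed by the paper.

```python
import torch

def ddpm_step_restorer_guidance(x_t, t, y, score_model, R,
                                alphas, alphas_bar, sigmas,
                                eta=1.0, rho=0.5, zeta=0.05):
    """One restorer-guided reverse step (hedged sketch of Eqs. 13-15)."""
    a_t, ab_t, ab_prev = alphas[t], alphas_bar[t], alphas_bar[t - 1]
    beta_t = 1.0 - a_t

    with torch.no_grad():
        # One-step denoised estimate x0|t from the eps-prediction network.
        eps = score_model(x_t, t)
        x0_hat = (x_t - torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(ab_t)
        # Restorer traveling (Eq. 14): restore x0|t before re-noising to t-1.
        x0_hat = R(x0_hat)
        mean = (torch.sqrt(a_t) * (1.0 - ab_prev) / (1.0 - ab_t)) * x_t \
             + (torch.sqrt(ab_prev) * beta_t / (1.0 - ab_t)) * x0_hat
        x_prev = eta * mean + sigmas[t] * torch.randn_like(x_t)

    # Serial likelihood gradients on the unconditional update (Eqs. 13, 15):
    # descend toward R(y) with strength rho; ascend on the raw measurement
    # with the much smaller zeta (measurement boosting).
    x = x_prev.detach().requires_grad_(True)
    loss = rho * ((x - torch.sqrt(ab_prev) * R(y)) ** 2).sum() \
         - zeta * ((x - torch.sqrt(ab_prev) * y) ** 2).sum()
    grad = torch.autograd.grad(loss, x)[0]
    return (x_prev - grad).detach()
```

The signs mirror Eq. 15: gradient descent on the restorer term pulls the sample toward $R(y)$, while the small ascent term injects residual information from $y$ itself.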
### 3.5 Application of the Restorer Guidance

The restorer guidance relieves the deficiency of the measurement-based likelihood for variational inverse problems by admitting assorted restoration models, given their comprehensive sensitivity to multifarious deterioration processes. Aside from this, we show that the restorer guidance can further be applied to other cases with promising sample quality and advanced performance.

**Deterioration control.** The step parameter of the restoration-based likelihood provides the ability to flexibly control the restorer intensity to a desired extent of deterioration removal; see Fig. 5. Additionally, we show that the deterioration can be strengthened by simply reversing the gradient directions of the likelihood terms in Eq. (15), resulting in proficient deterioration control in both directions. The restorer-traveling extension is disabled in the case of deterioration control, while sample realism under deterioration strengthening can still be assured by the diffusion model.

**Out-of-distribution processing.** The restorer guidance is capable of handling out-of-distribution deterioration beyond the alternative restorers. Formally, in that case, the conditional gradients provided by the restoration-based likelihood are unreliable, owing to the unstable results of $R(y)$. We show that through restorer traveling and amplified measurement boosting, the performance of diffusion solvers on out-of-distribution deterioration can be significantly advanced; see Tab. 2.

### 4 Experiments

We experimentally evaluate our restorer guidance on three variational inverse problems: image dehazing, rain streak removal, and motion deblurring. The evaluated datasets include 500 images in SOTS-Outdoor (Li et al., 2018a), 100 images in Rain100L (Yang et al., 2017), and 1111 images in GoPro (Nah et al., 2017). The unconditional diffusion model is publicly available, pre-trained on ImageNet at size $256 \times 256$, and used without any fine-tuning (Dhariwal & Nichol, 2021). We adopt the DDIM sampler here, and our method can be accomplished within 10 steps with satisfactory sample quality. The alternative restorers can be selected from various image restoration models pre-trained on the corresponding problem-specific datasets for proficient guidance, including RESIDE-OTS (Li et al., 2018a), Rain-combine (Zamir et al., 2021), and GoPro (Nah et al., 2017) in our experiments.

Figure 3: Visual comparison of restorer guidance with other inverse problem solvers on variational deterioration processes, including image dehazing, rain streak removal, and motion deblurring. The restorer prototype is deployed with NAFNet for comparison. Best viewed zoomed in.

We consider the following metrics: the Learned Perceptual Image Patch Similarity (LPIPS) [Zhang et al., 2018] and Fréchet Inception Distance (FID) [Heusel et al., 2017] for perceptual measurement, and the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) for distortion evaluation. The purpose of the experiments is to understand the behavior and potential of the restorer guidance, and to extend the prevailing diffusion solvers to unprecedented inverse problems beyond the measurement-based likelihood, not necessarily to push the sample quality metrics to state-of-the-art on these benchmarks.
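For reference, here is a hedged sketch of how the per-image metrics above are typically computed with standard libraries (`scikit-image` for PSNR/SSIM, the `lpips` package for LPIPS); FID is a dataset-level statistic computed with a separate tool, and the function and variable names here are ours, not the authors':

```python
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# LPIPS network is loaded once; AlexNet backbone is the common default.
_lpips_fn = lpips.LPIPS(net="alex")

def evaluate_pair(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: HxWx3 uint8 images. Returns (PSNR, SSIM, LPIPS)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    to_t = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = _lpips_fn(to_t(pred), to_t(gt)).item()
    return psnr, ssim, lp
```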
We compare with the following methods: diffusion posterior sampling (DPS) [Chung et al., 2023b], denoising diffusion null-space model (DDNM) [Wang et al., 2023], image restoration SDE (IR-SDE) [Luo et al., 2023], NAFNet [Chen et al., 2022], and MPRNet [Zamir et al., 2021]. NAFNet and MPRNet are general image restoration backbones, and IR-SDE is a task-specific diffusion solver. DPS and DDNM are measurement-based diffusion solvers for conditional posterior sampling under different frameworks. Considering their inherent deficiency, we parameterize the handcrafted measurement model in DPS and DDNM with a network (i.e., NAFNet) for the forward variational deterioration process, and the same network architecture is deployed as the restorer prototype for comparison.

### 4.1 Quantitative results

We show quantitative comparison results in Tab. 1, where the restorer guidance steadily boosts the performance of the baseline restorer prototype, i.e., NAFNet, on all tasks, regardless of the framework (Bayesian or range-null space decomposition [Wang et al., 2023]). This goes beyond losslessly exploiting the restoration capability conserved in restorers for visual applications: it breaks the upper bound of the restorer for more powerful inverse problem solvers, and also validates the compatibility of the restorer guidance with existing unconditional score models. On the other hand, despite the impressive performance that measurement-based methods achieve in solving deterministic inverse problems, their inherent deficiency is manifested when confronted with variational, unpredictable deterioration processes. The likelihood derived from the incongruous measurement model and the variationally contaminated measurements in DPS and DDNM disables the solver behavior completely, in contrast to the restorer guidance, which resolves this with the opposite probabilistic graphical direction of the likelihood. Note that we refer to the Bayesian version as the default in the following.

As presented in Sec. 3.5, the restorer guidance is capable of handling out-of-distribution deterioration beyond the incorporated restorers. We present the results of out-of-distribution validation in Tab. 2 and 3 for rain streak removal and motion deblurring, respectively; the result for image dehazing can be found in Appendix C. In Tab. 2, the comparison methods are trained on Rain100L [Yang et al., 2017] and evaluated on Rain100H [Yang et al., 2017], which differ in deterioration strength. In Tab. 3, the comparison methods are trained on GoPro [Nah et al., 2017] and evaluated on RealBlur-J [Rim et al., 2020], which differ in the underlying deterioration prototype. We observe that the restorer guidance excels at deterioration within the process prototype of the restorer, relaxing the constraint on deterioration strength (Tab. 2). Moreover, deteriorations beyond the supposed process prototype can also be handled well (Tab. 3), with relatively modest improvement compared to the strength variation. Generally, the restorer guidance extends the deterministic deterioration process to a cluster of deterioration processes around the supposed prototype of the restorer, and enables the sustained release of the restorer capability over the augmented cluster space.

Table 2: Out-of-distribution validation of the restorer guidance. The comparison methods are trained on Rain100L [Yang et al., 2017] and evaluated on Rain100H [Yang et al., 2017].
| Methods | PSNR↑ | SSIM↑ | FID↓ | LPIPS↓ |
|------------------|-------|-------|------|--------|
| NLEDN [Li et al., 2018b] | 13.93 | 0.441 | 228.5 | 0.516 |
| Restorer guidance | 16.06 | 0.458 | 215.2 | 0.454 |
| PreNet [Ren et al., 2019] | 16.48 | 0.565 | 177.8 | 0.401 |
| Restorer guidance | 19.00 | 0.587 | 159.9 | 0.352 |

### 4.2 QUALITATIVE RESULTS AND VISUAL APPLICATIONS

We provide a visual comparison in Fig. 3 to qualitatively validate the effectiveness and peculiarity of the restorer guidance. Compared to the baseline restorer, the restorer guidance has the following merits: (i) It renders the reconstructed sample with visually pleasing quality (e.g., the red tricycle), owing to the unconditional score model. (ii) It endows the restoration process with generation capacity that synthesizes nebulous regions heuristically (e.g., the girl's eye). (iii) It continuously liberates the capability of the restorer on obstinate deterioration (e.g., rain streaks) with ensured data realism. Compared to measurement-based solvers, the restorer guidance provides more reliable likelihood guidance under variational deterioration processes.

In Fig. 4, we provide a visual comparison of the restorer guidance on out-of-distribution deterioration. The comparison methods are exemplified by PreNet [Ren et al., 2019] for rain streak removal and Restormer [Zamir et al., 2022] for motion deblurring. The samples drawn from the restorer guidance exhibit greater robustness to out-of-distribution deterioration than the baseline restorers.

The proficient deterioration control achieved by the restorer guidance is shown in Fig. 5. One can smoothly control the restorer intensity via the harmonic step size for a desired deterioration extent, and even reverse the restoration process for amplified deterioration. This also provides another perspective for modeling deterioration with reversed restorers rather than handcrafted deterministic preferences. Generally, the restorer guidance provides a workbench to fabricate the restoration process more flexibly.

### 4.3 ABLATION STUDIES

We present ablation experiments to validate the effectiveness of the suggested extensions attached to the restorer guidance. The ablations are performed on rain streak removal and motion deblurring, with PSNR and FID reported. In Tab. 4, we can see that the restorer guidance attached with extensions further unlocks its potential as a powerful inverse problem solver, which is also key to breaking the upper bound of the incorporated restorer prototype. Note that the gradient-orientation extension is adopted by default to enable restorer traveling and efficient sampling.

Table 3: Out-of-distribution validation of the restorer guidance. The comparison methods are trained on GoPro [Nah et al., 2017] and evaluated on RealBlur-J [Rim et al., 2020].

| Methods | PSNR↑ | SSIM↑ | FID↓ | LPIPS↓ |
|------------------|-------|-------|------|--------|
| MPRNet [Zamir et al., 2021] | 26.46 | 0.820 | 34.26 | 0.156 |
| Restorer guidance | 26.70 | 0.823 | 29.87 | 0.142 |
| Restormer [Zamir et al., 2022] | 26.57 | 0.824 | 33.08 | 0.152 |
| Restorer guidance | 26.74 | 0.826 | 29.65 | 0.143 |

Table 4: Ablation experiments on major extensions attached to the restorer guidance. RT.: Restorer traveling. MB.: Measurement boosting.
| RT. | MB. | Rain streak removal | Motion Deblur |
|-----|-----|---------------------|---------------|
| ✗ | ✗ | 33.06 | 26.98 |
| ✔ | ✗ | 33.42 | 26.17 |
| ✗ | ✔ | 33.27 | 26.68 |
| ✔ | ✔ | 33.54 | 25.71 |

Figure 4: Visual results of out-of-distribution validation of the restorer guidance. First row: Rain100H with the PreNet restorer. Second row: RealBlur-J with the Restormer restorer.

### 5 RELATED WORK

Image restoration is the classical inverse problem with a nondeterministic degradation process imposed on the complete signal; reversing the process from the contaminated measurement poses challenges for the solver. Traditional methods incorporate various natural image priors to regularize the underlying solution space, including but not limited to sparse and low-rank priors (Lefkimmiatis & Koshelev, 2023), the dark channel prior (He et al., 2010), and deep generative priors (Pan et al., 2021; Ulyanov et al., 2018). These methods are confined by their deficiency in characterizing the natural image distribution comprehensively, and often resolve the inverse problem with insufficient regularization.

Since Sohl-Dickstein et al. (2015) modeled intricate data distributions inspired by non-equilibrium thermodynamics, two successful classes of probabilistic generative models, denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020) and score matching with Langevin dynamics (SMLDs) (Song & Ermon, 2019), have been developed, which gradually perturb data with noise until reaching a tractable distribution and reverse the process with score matching or noise prediction for sampling. Song et al. (2020) amalgamate the above two paradigms into a continuous generalized framework with stochastic differential equations. Aside from various generative applications, diffusion models have also been widely appreciated for solving inverse problems. Supervised works typically run the diffusion in an efficient space for deterioration modeling and efficient sampling, including the residual space (Luo et al., 2023; Yue et al., 2023), frequency space (Cao et al., 2022), and latent space (Xia et al., 2023). Another line of work adopts diffusion models as regularized priors for zero-shot problem solving and injects the likelihood for conditional posterior sampling. Pioneering works (Chung et al., 2022; 2023b; Song et al., 2023) embrace the Bayesian framework and construct measurement-based or generation-based likelihoods (Chung et al., 2023a; Stevens et al., 2023) for data consistency. Beyond that, Wang et al. (2023) leverage the framework of range-null space decomposition to balance realism and data consistency. However, these methods are confined to the deterministic deterioration process characterized by the measurement model, and are powerless against variational, unpredictable disturbances in real-world scenarios.

### 6 CONCLUSION AND DISCUSSION

In this work, we proposed the restorer guidance for solving variational inverse problems, and showed that the measurement-based likelihood can be replaced with a restoration-based likelihood in the opposite probabilistic graphical direction. The restorer guidance licenses the patronage of various off-the-shelf restoration models for powerful diffusion solvers, extending the strict deterministic deterioration process to a tolerant cluster process, while the attached extensions further release the great potential of our method. We show that our work is theoretically analogous to the transition from classifier guidance to classifier-free guidance in the field of inverse problem solvers.
Extensive experiments illustrate the effectiveness of the restorer guidance. Despite the competitive performance and convenience achieved by the restorer guidance, it depends heavily on the capability of the alternative restorer prototype as its performance baseline, which is prone to suboptimal, prototype-biased solving results. Future work could efficiently incorporate miscellaneous restorer prototypes with allocated deterioration processes to construct unbiased restorer guidance and relax the strong dependency on a single prototype. Moreover, the restorer guidance provides a workbench to fabricate the restoration process more flexibly and controllably with proficient deterioration knowledge, and future work could accomplish interconnected deterioration processes under discretionary user inclination.

REFERENCES

Codruta O Ancuti, Cosmin Ancuti, and Radu Timofte. Nh-haze: An image dehazing benchmark with non-homogeneous hazy and haze-free images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 444–445, 2020.

Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.

Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18392–18402, 2023.

Chentao Cao, Zhuo-Xu Cui, Shaonan Liu, Hairong Zheng, Dong Liang, and Yanjie Zhu. High-frequency space diffusion models for accelerated mri. arXiv preprint arXiv:2208.05481, 2022.

Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European Conference on Computer Vision, pp. 17–33. Springer, 2022.

Hyungjin Chung, Byeongsol Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. Advances in Neural Information Processing Systems, 35:25683–25696, 2022.

Hyungjin Chung, Jeongsol Kim, Sehui Kim, and Jong Chul Ye. Parallel diffusion models of operator and image for blind inverse problems. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6059–6069, 2023a.

Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In The Eleventh International Conference on Learning Representations, 2023b.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780–8794, 2021.

Hang Dong, Jinshan Pan, Lei Xiang, Zhe Hu, Xinyi Zhang, Fei Wang, and Ming-Hsuan Yang. Multi-scale boosted dehazing network with dense feature fusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2157–2167, 2020.

Bradley Efron. Tweedie's formula and selection bias. Journal of the American Statistical Association, 106(496):1602–1614, 2011.

Ben Fei, Zhaoyang Lyu, Liang Pan, Junzhe Zhang, Weidong Yang, Tianyue Luo, Bo Zhang, and Bo Dai. Generative diffusion prior for unified image restoration and enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9935–9946, 2023.

Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12):2341–2353, 2010.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020.

Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022.

Rongjie Huang, Max WY Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. FastDiff: A fast conditional diffusion model for high-quality speech synthesis. arXiv preprint arXiv:2204.09934, 2022.
x5LvBK43wg
How does the graph construction technique manage the class imbalance that might be present in the unlabeled target data? Related to the discussion in Section 3.2 about initialization of prototypes and constructing a prototype graph.
PROGRAM: PROtotype GRAph Model based Pseudo-Label Learning for Test-Time Adaptation

Haopeng Sun1*, Lumin Xu3, Sheng Jin4,2, Ping Luo4,5, Chen Qian1,2, Wentao Liu2

1 Department of Computer Science and Technology, Tsinghua University, Beijing, China 2 SenseTime Research and Tetras.AI 3 The Chinese University of Hong Kong 4 The University of Hong Kong 5 Shanghai AI Laboratory

{sunhaopeng, jinsheng}@tetras.ai qiancl8@mails.tsinghua.edu.cn

ABSTRACT

Test-time adaptation (TTA) aims to adapt a pre-trained model from a source domain to a target domain using only online unlabeled target data during testing, without access to the source data or modifying the original training process. Among the various TTA methods, pseudo-labeling has gained popularity. However, the presence of incorrect pseudo-labels can hinder the effectiveness of target domain adaptation. To overcome this challenge, we propose a novel TTA method, called PROtotype GRAph Model based pseudo-label learning (PROGRAM). PROGRAM consists of two key components: (1) a Prototype Graph Model (PGM) for reliable pseudo-label generation; (2) Robust Self-Training (RST) for test-time adaptation with noisy pseudo-labels. PGM constructs the graph using prototypes and test samples, facilitating effective message passing among them to generate more reliable pseudo-labels. RST combines the advantages of consistency regularization and pseudo-labeling to achieve robust target domain adaptation in the presence of noisy pseudo-labels. Our proposed PROGRAM can be easily integrated into existing baselines, resulting in consistent improvement. Extensive experiments show that our PROGRAM outperforms the existing TTA methods on multiple domain generalization and image corruption benchmarks.

1 INTRODUCTION

Deep neural network (DNN) based methods perform exceedingly well when training and testing data are sampled from the same distribution. However, under test-time domain shift [Pan & Yang, 2009; Gopalan et al., 2011], DNNs encounter significant performance degradation. Test-time adaptation (TTA) [Liang et al., 2023] is a prominent paradigm to alleviate this problem. TTA methods adapt models trained on the source domain to the target domain at test time, without access to labeled data from the target domain.

A majority of TTA approaches adopt pseudo-labeling (PL) based methods, which first generate pseudo-labels and then adapt the model to the target domain via self-training. Traditional PL based methods [Wang et al., 2021a; Lee et al., 2013] directly use the output of the classifier as pseudo-labels, and reinforce the model to learn overly confident predictions for the test data. Due to the inevitable noise contained in the pseudo-labels, such approaches might suffer from dramatic performance degradation [Mukhoti et al., 2020] or total model collapse [Wang et al., 2022b]. In order to generate more reliable pseudo-labels, we propose a novel TTA method, termed PROtotype GRAph Model based pseudo-label learning (PROGRAM).

In order to generate credible pseudo-labels, prototype-based PL [Iwasawa & Matsuo, 2021; Wang et al., 2023; Liang et al., 2020] and nearest-neighbor based PL [Yang et al., 2021; Jang et al., 2023] have been proposed for performance improvement. However, these two types of approaches have their own drawbacks.

Figure 1: Different strategies of pseudo-labeling (PL): (a) Prototype-based PL. (b) Nearest-neighbor based PL. (c) PROtotype GRAph Model based pseudo-label learning (PROGRAM). The red lines represent the decision boundary.
As shown in Fig. 1(a), prototype-based PL generates pseudo-labels based on global class prototype information but fails to incorporate local features of test samples, leading to incorrect predictions under domain shift. As shown in Fig. 1(b), nearest-neighbor based PL relies on local neighboring features of test samples for pseudo-label generation, overlooking the global representative features of each class. As the feature distributions of different categories may overlap under domain shift, test samples near the decision boundary are easily influenced by neighbors from other categories. To mitigate these problems, we propose the Prototype Graph Model (PGM), which takes both global and local information into consideration for pseudo-label generation. As shown in Fig. 1(c), our proposed method combines the advantages of both prototype-based PL and nearest-neighbor based PL, by incorporating both the global representative features of prototypes and the local information from neighboring test data in a flexible graph representation.

In terms of the self-training stage, some approaches (Lee et al., 2013; Iwasawa & Matsuo, 2021; Jang et al., 2023) use hard pseudo-labels (i.e., one-hot encoding), while others (Wang et al., 2023) use soft pseudo-labels (i.e., probability encoding) for model fine-tuning. Hard labels help accelerate model training but are vulnerable to label noise, so it is generally necessary to set a handpicked threshold (Lee et al., 2013) to filter out unreliable pseudo-labels (Rizve et al., 2021). However, choosing a universally appropriate threshold is challenging across different datasets (Wang et al., 2022a). Too high a threshold will discard useful correct pseudo-labels, while too low a threshold will admit noisy pseudo-labels. On the contrary, soft labels help improve the model's generalization ability, but suffer from slower model convergence. In this paper, we propose the Robust Self-Training (RST) technique to avoid the above pitfalls. When the pseudo-labels are reliable enough (the model predictions are consistent with the pseudo-labels), we adopt hard pseudo-labels to accelerate model convergence without setting handpicked thresholds. Otherwise, we adopt soft pseudo-labels and use noise-resistant consistency regularization losses for model training. Our proposed RST effectively combines consistency regularization and pseudo-labeling, without tediously setting handcrafted thresholds.

We benchmark PROGRAM on four domain generalization datasets (i.e., VLCS (Torralba & Efros, 2011), PACS (Li et al., 2017), OfficeHome (Venkateswara et al., 2017) and TerraIncognita (Beery et al., 2018)) and three image corruption benchmarks (Hendrycks & Dietterich, 2019) including CIFAR-10C, CIFAR-100C, and ImageNet-C. Experiments show that PROGRAM outperforms other TTA methods and can be applied to various baselines to deliver consistent improvements. Our main contributions can be summarized as follows:

• We propose a novel TTA method called PROtotype GRAph Model based pseudo-label learning (PROGRAM) to address the issue of noisy pseudo-labels. Extensive experiments on popular domain generalization and image corruption benchmarks demonstrate the superiority of PROGRAM over the previous state-of-the-art TTA methods.

• In the pseudo-label generation stage, we propose the Prototype Graph Model (PGM), which combines the advantages of both prototype-based PL and nearest-neighbor based PL to produce reliable pseudo-labels.
It incorporates both the global representative features of prototypes and the local information from neighboring test data in a flexible graph representation.

• In the self-training stage, we propose Robust Self-Training (RST), which combines pseudo-labeling and consistency regularization. It benefits from both hard and soft pseudo-labels to make the self-training process more stable and robust.

2 RELATED WORK

2.1 TEST-TIME ADAPTATION

Test-time adaptation (TTA) (Liang et al., 2023) involves adapting a pre-trained model using online unlabeled test data only. It is closely related to two research topics, i.e., test-time training (TTT) and source-free domain adaptation (SFDA). TTT methods (Gidaris et al., 2018; Liu et al., 2021; Sun et al., 2020b) also optimize during testing; however, they require altering the training procedure by introducing a proxy loss on the source data. SFDA methods (You et al., 2021b; Wang et al., 2021b; Yan et al., 2021; Liang et al., 2020; Morerio et al., 2020; Kurmi et al., 2021; Zhou et al., 2022; Tang et al., 2021) also adapt without source data; however, they optimize offline with full access to the whole target test data. In contrast, TTA does not need any specific modifications during the training phase and only requires the pre-trained source model and unlabeled target data during the testing phase, which is a more practical and feasible setting.

To adapt a pre-trained model to an unlabeled target domain, a majority of TTA methods take inspiration from the semi-supervised learning field and employ various prevalent techniques tailored for unlabeled test data adaptation. Existing works on TTA mainly include batch norm calibration (Mirza et al., 2022; Schneider et al., 2020; Zhao et al., 2023; Gong et al., 2022; You et al., 2021a), consistency regularization (Boudiaf et al., 2022; Choi et al., 2022; Kojima et al., 2022; Döbler et al., 2023), and pseudo-labeling (Iwasawa & Matsuo, 2021; Jang et al., 2023; Li et al., 2023; Wang et al., 2023). Our work belongs to the pseudo-labeling based approaches, which fine-tune a pre-trained model using pseudo-labels based on classifier predictions. T3A (Iwasawa & Matsuo, 2021) designs a prototype-based classifier and predicts the labels of test data by comparing distances between the test data and the pseudo-prototypes. TAST (Jang et al., 2023) improves upon T3A by fine-tuning the pre-trained model through self-training with nearest-neighbor information. In comparison, we propose a graph-based approach to incorporate both the global representative features of prototypes and local nearest-neighbor information.

2.2 SEMI-SUPERVISED LEARNING

Recent semi-supervised learning methods can be mainly categorized into consistency regularization based methods (Bachman et al., 2014; Sajjadi et al., 2016; Laine & Aila, 2016; French et al., 2017) and pseudo-labeling based methods (Lee et al., 2013; Rizve et al., 2021; Arazo et al., 2020; Wang et al., 2022a). Consistency regularization (CR) based methods assume that the output of the model should be invariant to random perturbations, including data augmentation (French et al., 2017) and stochastic regularization (Laine & Aila, 2016; Sajjadi et al., 2016). Pseudo-labeling (PL) based methods use the model itself to produce artificial labels for unlabeled data. More recently, FixMatch (Sohn et al., 2020) produces artificial labels using both CR and PL to boost the performance of semi-supervised learning. However, it requires complicated augmentation designs for CR and handpicked thresholds for PL.
In comparison, our proposed RST is simple and effective, combining CR and PL in the context of TTA to handle noisy-label learning. Our proposed PGM is inspired by graph-based semi-supervised learning, which applies graph Laplacian regularization (Zhu et al., 2003; Yang et al., 2016) or contrastive regularization (Wan et al., 2021; Sun et al., 2019; 2020a) for representation learning with consistency regularization. However, these works primarily focus on modeling the relationships between labeled and unlabeled data. In contrast, we build connections among prototypes and test data to obtain more reliable pseudo-labels. To the best of our knowledge, we are the first to apply a graph-based approach to TTA to facilitate effective message passing.

3 METHODOLOGY

3.1 PROBLEM DEFINITION

Test-time adaptation involves fine-tuning a source-domain pre-trained model using the online unlabeled test data from the target domain. Given the model trained using standard empirical risk minimization (Chowdhary, 2020) on source data $D_s$, a batch of unlabeled target data $x_i \in D_t$, where $i \in \{1, \ldots, N\}$ and $N$ denotes the batch size, is sampled from the target domain $D_t$ for domain adaptation. The model consists of a feature extractor $f_\theta$ and a classifier $g_\phi$. During testing, the model $h = f_\theta \circ g_\phi$ is initialized with pre-trained parameters. $K$ is the number of classes.

3.2 Overview

In this section, we introduce our proposed method for test-time adaptation, termed PROtotype GRAph Model based pseudo-label learning (PROGRAM). PROGRAM consists of two main components, the Prototype Graph Model (PGM) and Robust Self-Training (RST), as shown in Fig. 2. For each batch of test samples, PGM first initializes the prototypes according to the weights of the classifier $g_\phi$, and constructs a prototype graph over both prototypes and test samples via conditional probability distributions with respect to the prototypes. Label propagation is conducted in the constructed prototype graph, and reliable pseudo-labels for the test samples are obtained. In RST, pseudo-labels or consistency regularization is utilized for model fine-tuning. When the hard labels of the model predictions and the pseudo-labels obtained by PGM are inconsistent, a consistency loss is applied to exploit the noisy pseudo-labels. The final results are estimated using the model fine-tuned by RST. PROGRAM eliminates the interference of noisy pseudo-labels and adapts the source model to the target domain for robust predictions.

3.3 Prototype Graph Model (PGM)

Prototype Graph Construction. Similar to T3A [Iwasawa & Matsuo, 2021], we initialize prototypes using the model weights of the classification layer as $c_k = \frac{w_k}{\|w_k\|_2}$, where $w_k$ is the $k$-th element of the weight matrix in the classifier $g_\phi$, representing the template of the $k$-th class. For each test sample $x_i$, we compute a soft label $s_i = [s_{i1}, s_{i2}, ..., s_{iK}]^\top \in \mathbb{R}^K$ with regard to the prototypes using the softmax function:

$$s_{ik} = \frac{\exp(c_k \cdot x'_i)}{\sum_{k=1}^{K} \exp(c_k \cdot x'_i)},$$

where $x'_i$ denotes the feature representation of $x_i$ obtained from the feature extractor $f_\theta$. By calculating the soft labels of the entire batch of $N$ test samples, we obtain a sample label matrix $S = [s_1, s_2, ..., s_N]^\top \in \mathbb{R}^{N \times K}$ for the batch.
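As an illustration of Eq. 1, here is a minimal sketch (our naming and tensor layout, not the authors' code) of prototype initialization from the classifier weights and the resulting soft-label matrix $S$:

```python
import torch
import torch.nn.functional as F

def soft_labels(feats: torch.Tensor, classifier_weight: torch.Tensor) -> torch.Tensor:
    """feats: (N, d) features x'_i; classifier_weight: (K, d) rows w_k."""
    # c_k = w_k / ||w_k||_2 : class templates from the linear classifier.
    protos = F.normalize(classifier_weight, dim=1)   # (K, d)
    logits = feats @ protos.t()                      # (N, K), entries c_k . x'_i
    return logits.softmax(dim=1)                     # S in R^{N x K}, Eq. 1
```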
In order to represent the labels for prototypes, we define a prototype label matrix $T \in \mathbb{R}^{K \times K}$, where a one-hot encoding scheme is utilized: $T_{ij} = 1$ if the $i$-th prototype corresponds to class $j$, and $T_{ij} = 0$ otherwise. In general, $T$ is the identity matrix when the prototypes are ordered sequentially. We concatenate $T$ and $S$ along the row axis to form the label matrix $Z \in \mathbb{R}^{(K+N) \times K}$.

Prototypes can be regarded as a good representation of the class centers. Instead of only considering the connections among test samples, we propose the prototype graph to capture the relationships among test samples and prototypes. We construct a graph $G = (\mathcal{V}, \mathcal{E})$, where the vertices $\mathcal{V}$ represent both the prototypes $v_i (1 \leq i \leq K)$ and the batch of unlabeled test samples $v_i (K + 1 \leq i \leq K + N)$, and the edges $\mathcal{E}$ modeling their relationships are represented by an adjacency matrix $W \in \mathbb{R}^{(K+N) \times (K+N)}$. We compute the similarity $w_{ij}$ between vertex $v_i \in \mathcal{V}$ and vertex $v_j \in \mathcal{V}$ to determine their connectivity with respect to the prototypes $v_k (1 \leq k \leq K)$. In the prototype-based graph, we leverage a small number of prototypes to turn sample-to-sample affinity computations into much simpler sample-to-prototype interactions. Similar ideas have also been exploited in (Zhu & Koniusz, 2023) for transductive few-shot learning. Specifically, Markov random walks (Lovász, 1993; Szummer & Jaakkola, 2001) and Bayes' theorem are employed as follows:

\[ w_{ij} = \text{similarity}(v_i, v_j) \propto p(v_i|v_j) = \sum_{k=1}^{K} p(v_i|v_k)p(v_k|v_j) = \sum_{k=1}^{K} z_{ik} \cdot \frac{z_{jk}}{\sum_{j'=1}^{K+N} z_{j'k}}. \] (2)

We define \( p(v_i|v_k) = z_{ik} \), where \( 1 \leq i \leq K + N \) and \( 1 \leq k \leq K \). \( W \) is a symmetric matrix with \( w_{ij} = w_{ji} \). Given the diagonal matrix \( D \in \mathbb{R}^{K \times K} \) with \( D_{kk} = \sum_{i=1}^{K+N} z_{ik} \), the adjacency matrix \( W \) can be formulated as:

\[ W = ZD^{-1}Z^\top. \] (3)

Considering that prototypes are typically good representations of the classes while neighboring samples may contain some noise, our proposed prototype graph is constructed by way of the prototypes, which is stable and effective.

**Prototype Graph based Label Propagation.** After prototype graph construction, we propose to propagate labels (Zhou et al., 2003) by optimizing the following problem:

\[ Y^* = \arg\min_Y \frac{1}{2} \left( \sum_{i,j=1}^{K+N} w_{ij} \| y_i - y_j \|_2^2 + \mu \sum_{i=1}^{K+N} \| y_i - z_i \|_2^2 \right). \] (4)

\( Y = [y_1, y_2, ..., y_K, ..., y_{K+N}]^\top \in \mathbb{R}^{(K+N) \times K} \) is the optimization variable, where \( y_i \) (\( 1 \leq i \leq K \)) represents the optimized labels of the prototypes and \( y_i \) (\( K + 1 \leq i \leq K + N \)) represents those of the test samples as pseudo-labels. \( Y^* \) denotes the optimal solution. \( z_i \in \mathbb{R}^K \) is the \( i \)-th row of the initial label matrix \( Z \). The first term in Eq. 4 is the smoothness constraint, which promotes smoothness by penalizing large differences between neighboring vertices in the graph. The second term is the fitting constraint, which measures the discrepancy between the optimized labels and the label assignment with regard to the prototypes. The hyperparameter \( \mu > 0 \) controls the trade-off between the two terms.
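Before solving the propagation objective, the adjacency of Eq. 3 can be formed directly from the label matrix. A short sketch under the same assumptions as the previous snippet ($T$ taken as the identity; names are ours):

```python
import torch

def prototype_graph(S: torch.Tensor):
    """S: (N, K) soft labels. Returns Z ((K+N) x K) and W ((K+N) x (K+N))."""
    K = S.shape[1]
    Z = torch.cat([torch.eye(K, device=S.device), S], dim=0)  # Z = [T; S], T = I_K
    D_inv = torch.diag(1.0 / Z.sum(dim=0))                    # D_kk = sum_i z_ik
    W = Z @ D_inv @ Z.t()                                     # Eq. 3: W = Z D^{-1} Z^T
    return Z, W
```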
The optimal solution can be obtained by the following equation (refer to Sec. A.1 for more details):

\[ Y^* = (1 - \lambda)(I - \lambda W)^{-1} Z = (1 - \lambda)(I - \lambda ZD^{-1}Z^\top)^{-1} Z, \] (5)

where \( I \) is the identity matrix and \( \lambda = \frac{1}{1+\mu} \) (\( 0 < \lambda < 1 \)) is the hyperparameter balancing the two constraints. Finally, we extract the pseudo-labels of the test samples \( \hat{Y} \) by selecting the corresponding rows from the optimal solution \( Y^* \) and applying the softmax function:

\[ \hat{Y} = \text{Softmax}(Y^*_{K+1:K+N}) = [\hat{y}_{K+1}, \hat{y}_{K+2}, ..., \hat{y}_{K+N}]^\top. \]

**Re-initializing Prototypes.** In our online test setting, for each batch of input test data, we re-initialize the prototype set leveraging the linear-layer weights of the source model. This design ensures that the prototypes always stay up to date and maintain their representation of global class characteristics, while minimizing the memory burden associated with prototype updates.

### 3.4 ROBUST SELF-TRAINING (RST)

To better utilize the pseudo-labels generated by PGM, we propose a method named Robust Self-Training (RST). Unlike conventional approaches that directly use hard pseudo-labels as supervision, our method draws inspiration from FixMatch (Sohn et al., 2020) and combines both pseudo-labels and consistency regularization. Specifically, if the linear classifier and PGM produce the same predictions (i.e., identical results after applying argmax to the logits), we employ hard pseudo-labels. On the other hand, if they yield different predictions, we enforce consistency regularization by utilizing the Symmetric Cross Entropy (SCE) loss function (Wang et al., 2019):

\[ L_{SCE} = -\sum_{k=1}^{K} \hat{y}_{ik} \log p_{ik} - \sum_{k=1}^{K} p_{ik} \log \hat{y}_{ik}, \] (6)

where \( p_i = [p_{i1}, p_{i2}, ..., p_{iK}]^\top \in \mathbb{R}^K \) represents the inferred output distribution of the \( i \)-th test sample, and \( \hat{y}_i = [\hat{y}_{i1}, \hat{y}_{i2}, ..., \hat{y}_{iK}]^\top \in \mathbb{R}^K \) denotes the pseudo-label generated by PGM. This loss serves as a consistency loss that exhibits robustness to noisy labels.

Algorithm 1 PROtotype GRAph Model based pseudo-label learning (PROGRAM)

Input: Pre-trained source model \( h = f_\theta \circ g_\phi \), number of classes \( K \), batch size \( N \), a test batch \( x \), cost balance coefficient \( \lambda \), loss balance term \( \beta \).
Output: Model prediction \( y_{\text{pred}} \).

1. PGM generates reliable pseudo-labels:
   - Initialize prototypes: \( c_k = \frac{w_k}{\| w_k \|_2} \)
   - Prototype graph construction following Eq. 3: \( x'_i = f_\theta(x_i), \; s_{ik} = \frac{\exp(c_k \cdot x'_i)}{\sum_{k'=1}^{K} \exp(c_{k'} \cdot x'_i)}, \; T = I_K, \; Z = \text{Concat}(T, S), \; D_{kk} = \sum_{i=1}^{K+N} z_{ik} \)
   - Prototype graph label propagation following Eq. 5: \( Y^* = (1 - \lambda)(I - \lambda Z D^{-1} Z^\top)^{-1} Z, \; \hat{Y} = \text{Softmax}(Y^*_{K+1:K+N}) \)
2. RST fine-tunes the pre-trained model with Eq. 7 to obtain the fine-tuned model \( h_{ft} \):
   \[ L_{RST} = \sum_{i=1}^{N} \begin{cases} -\sum_{k=1}^{K} \mathbb{1}[k = \arg\max_{k'} \hat{y}_{ik'}] \log p_{ik} & \text{if } \arg\max p_i = \arg\max \hat{y}_i \\ \beta L_{SCE} & \text{otherwise} \end{cases} \]
3. Get final predictions: \( y_{\text{pred}} = h_{ft}(x) \)

Return: \( y_{\text{pred}} \)
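Putting Eq. 5 and Eq. 6 together, here is a hedged end-to-end sketch of the closed-form propagation, pseudo-label extraction, and the case-wise robust objective of Algorithm 1 (λ = 0.5 and β = 0.4 follow the paper's defaults; the clamping constant and batching details are our assumptions):

```python
import torch
import torch.nn.functional as F

def pgm_pseudo_labels(Z: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Z: ((K+N) x K) label matrix from the previous sketch."""
    KN, K = Z.shape
    D_inv = torch.diag(1.0 / Z.sum(dim=0))          # as in the previous sketch
    W = Z @ D_inv @ Z.t()
    # Eq. 5: Y* = (1 - lam) (I - lam W)^{-1} Z, via a linear solve.
    Y = (1 - lam) * torch.linalg.solve(torch.eye(KN, device=Z.device) - lam * W, Z)
    return Y[K:].softmax(dim=1)                     # pseudo-labels for N test samples

def rst_loss(p: torch.Tensor, y_hat: torch.Tensor, beta: float = 0.4,
             eps: float = 1e-7) -> torch.Tensor:
    """p: (N, K) model probabilities; y_hat: (N, K) PGM pseudo-labels."""
    agree = p.argmax(1) == y_hat.argmax(1)
    # Agreement: hard cross-entropy against the pseudo-label's argmax.
    ce_hard = (F.nll_loss(torch.log(p[agree] + eps), y_hat[agree].argmax(1),
                          reduction="sum") if agree.any() else p.new_zeros(()))
    # Disagreement: symmetric cross-entropy (Eq. 6) weighted by beta.
    q, pr = y_hat[~agree], p[~agree]
    sce = -(q * torch.log(pr + eps)).sum() - (pr * torch.log(q + eps)).sum()
    return ce_hard + beta * sce
```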
Consequently, we formulate the final training objective as follows:

\[ L_{RST} = \sum_{i=1}^{N} \begin{cases} -\sum_{k=1}^{K} \mathbb{1}[k = \arg\max_{k'} \hat{y}_{ik'}] \log p_{ik} & \text{if } \arg\max p_i = \arg\max \hat{y}_i \\ \beta L_{SCE} & \text{otherwise} \end{cases}, \] (7)

where \( \beta \) is the loss balance term, empirically set to \( \beta = 0.4 \). This loss function enables all pseudo-labels to fine-tune the source model while sufficiently mitigating the negative impact of noisy pseudo-labels. Theoretical analysis is presented in Sec. A.3. Furthermore, the alternating use of the two loss functions helps prevent the model collapse caused by consistency loss (Jing et al., 2021).

3.5 Algorithm Summary

We summarize the pipeline of PROGRAM in Algorithm 1. During the testing phase, the adaptation is performed in an online manner. For each batch of test data, PGM first initializes prototypes and constructs the prototype graph for label propagation. After that, RST is applied to fine-tune the pre-trained model. Finally, the fine-tuned model is used to produce prediction results.

4 Experiments

4.1 Experimental Setup

Due to space limits, more details about the experimental setup, including datasets (Sec. F.1), models (Sec. F.2), baselines (Sec. F.3) and implementation (Sec. F.4), are provided in the Appendix.

Datasets. We conduct experiments on four domain generalization benchmarks (i.e., VLCS (Torralba & Efros, 2011), PACS (Li et al., 2017), OfficeHome (Venkateswara et al., 2017), and TerraIncognita (Beery et al., 2018)) and three image corruption benchmarks (i.e., CIFAR-10/100C (Hendrycks & Dietterich, 2019) and ImageNet-C (Hendrycks & Dietterich, 2019)). For the domain generalization datasets, we choose one domain as the target domain and the others as the source domains. For the image corruption datasets, we select the original CIFAR-10/100 (Krizhevsky, 2009) and ImageNet (Krizhevsky et al., 2012) as the source domains, and the corrupted test data as the target domains. We follow the dataset splits of TAST (Jang et al., 2023) for a fair comparison.

Models. In the main experiments, we compare different methods on ResNet-18/50 (He et al., 2016) backbones, which are widely used in the domain adaptation and generalization communities.

Table 1: Comparisons with the state-of-the-art methods with average accuracy (%) on four domain generalization benchmarks. Avg. is the average performance of all the datasets. + indicates the ERM baseline combined with the respective TTA method. ↑ means higher is better, and * denotes the results from (Jang et al., 2023).

| Method | Backbone | VLCS ↑ | PACS ↑ | OfficeHome ↑ | TerraIncognita ↑ | Avg. ↑ |
|--------|----------|--------|--------|--------------|------------------|--------|
| ERM* (Chowdhary, 2020) | ResNet-18 | 74.88±0.46 | 79.29±0.77 | 62.10±0.31 | 40.62±1.19 | 64.22 |
| +Tent* (Wang et al., 2021a) | ResNet-18 | 72.88±0.82 | 83.89±0.54 | 60.86±0.39 | 33.70±1.09 | 62.83 |
| +TentAdapter* (Wang et al., 2021a) | ResNet-18 | 67.02±1.16 | 80.75±1.01 | 62.64±0.38 | 39.91±0.76 | 62.58 |
| +TentClf* (Wang et al., 2021a) | ResNet-18 | 72.96±1.48 | 78.57±1.78 | 59.33±0.62 | 38.30±3.44 | 62.29 |
| +SHOT* (Liang et al., 2020) | ResNet-18 | 65.24±2.29 | 82.36±0.63 | 62.58±0.39 | 33.57±1.04 | 60.94 |
| +SHOTIM* (Liang et al., 2020) | ResNet-18 | 64.86±2.22 | 82.33±0.61 | 62.57±0.39 | 33.35±1.23 | 60.78 |
| +PL* (Lee et al., 2013) | ResNet-18 | 62.97±2.72 | 70.98±1.78 | 58.20±3.21 | 37.44±7.20 | 57.40 |
| +PLClf* (Lee et al., 2013) | ResNet-18 | 74.89±0.61 | 78.11±2.30 | 61.92±0.41 | 41.78±1.94 | 64.18 |
| +T3A* (Iwasawa & Matsuo, 2021) | ResNet-18 | 77.26±1.49 | 80.83±0.67 | 63.21±0.50 | 40.20±0.60 | 65.38 |
| +TAST* (Jang et al., 2023) | ResNet-18 | 77.27±0.67 | 81.94±0.44 | 63.70±0.52 | 42.64±0.72 | 66.39 |
| +TAST-BN* (Jang et al., 2023) | ResNet-18 | 75.21±2.36 | 87.07±0.53 | 62.79±0.41 | 39.43±2.24 | 66.13 |
| +TSD* (Wang et al., 2023) | ResNet-18 | 73.57±1.08 | 87.06±0.68 | 64.51±1.22 | 42.65±1.47 | 66.95 |
| +PROGRAM | ResNet-18 | 77.75±1.37 | 88.03±0.74 | 64.59±0.85 | 43.16±1.12 | 68.38 |

Table 2: Comparisons with the state-of-the-art methods with average error rate (%) on image corruption benchmarks. Testing is conducted on the highest level of image corruption. All methods use the ResNet-50 backbone. ↓ means lower is better. * denotes the results from (Jang et al., 2023).

| Method | CIFAR-10C ↓ | CIFAR-100C ↓ | ImageNet-C ↓ | Avg. ↓ |
|--------|-------------|--------------|--------------|--------|
| No Adaptation* (Chowdhary, 2020) | 29.14 | 60.35 | 81.99 | 57.16 |
| +SHOT* (Liang et al., 2020) | 15.32 | 41.54 | 58.27 | 38.38 |
| +Tent* (Wang et al., 2021a) | 13.95 | 39.04 | 58.06 | 37.02 |
| +PL* (Lee et al., 2013) | 22.34 | 40.06 | 62.95 | 41.78 |
| +T3A* (Iwasawa & Matsuo, 2021) | 26.68 | 58.28 | 75.81 | 53.59 |
| +TAST* (Jang et al., 2023) | 26.61 | 60.74 | 75.81 | 53.59 |
| +TAST-BN* (Jang et al., 2023) | 13.08 | 37.82 | 67.05 | 39.32 |
| +TIPI (Nguyen et al., 2023) | 13.52 | 38.33 | 55.94 | 35.93 |
| +TSD* (Wang et al., 2023) | 13.05 | 37.67 | 53.18 | 34.63 |
| +PROGRAM | 11.91 | 36.42 | 51.43 | 33.25 |

We also test our method on different backbones, including ViT-B/16 (Dosovitskiy et al., 2020), ResNeXt-50 (Xie et al., 2017), EfficientNet-B4 (Tan & Le, 2019), and Mixer-L/16 (Tolstikhin et al., 2021).

Baselines. We compare PROGRAM with the following baselines: Empirical Risk Minimization (ERM) (Chowdhary, 2020), PL and PLClf (Lee et al., 2013), Tent, TentAdapter and TentClf (Wang et al., 2021a), SHOTIM and SHOT (Liang et al., 2020), T3A (Iwasawa & Matsuo, 2021), TAST and TAST-BN (Jang et al., 2023), TSD (Wang et al., 2023), and TIPI (Nguyen et al., 2023). For fair comparisons, all methods are based on the online batch-level test data adaptation setting.

Implementation. During the test phase, we use the Adam optimizer (Loshchilov & Hutter, 2017) to fine-tune the whole network parameters. We set the cost balance coefficient $\lambda = 0.5$ and the loss balance term $\beta = 0.4$. We report the mean and variance of the results over four different random seeds \{0, 1, 2, 3\} and data splits on the domain generalization benchmarks.

4.2 Experimental Results

Domain Generalization Benchmarks.
Table 1 shows the results with ResNet-18/50 backbones on four popular domain generalization datasets. Average accuracy (top-1) is reported. We follow the common setting [Jang et al., 2023] and set the batch size to 32. In Table 1, we observe that our proposed PROGRAM consistently improves upon the ERM baseline and achieves state-of-the-art performance on all the datasets. Our method improves ERM by 4.16% on average with the ResNet-18 backbone and 3.21% with the ResNet-50 backbone. In comparison, other TTA methods do not produce consistent improvement on all datasets. Full results are provided in Appendix H.

Table 3: Effect of Prototype Graph Model (PGM) and Robust Self-Training (RST) on the domain generalization datasets. Average accuracy (%) is reported.

| Method | VLCS ↑ | PACS ↑ | OfficeHome ↑ | TerraIncognita ↑ | Avg. ↑ |
|-----------------|--------|--------|--------------|------------------|-------|
| ResNet-18 | 74.88±0.46 | 79.29±0.77 | 62.10±0.31 | 40.62±1.19 | 64.22 |
| +PGM | 77.12±0.87 | 87.51±1.00 | 63.66±0.78 | 41.94±0.95 | 67.56 |
| +PGM + RST | 77.75±1.37 | 88.03±0.74 | 64.59±0.85 | 43.16±1.12 | 68.38 |
| ResNet-50 | 76.71±0.50 | 83.21±1.14 | 67.13±0.99 | 45.93±1.34 | 68.25 |
| +PGM | 77.72±0.83 | 90.45±0.37 | 68.23±0.78 | 46.92±1.04 | 70.83 |
| +PGM + RST | 78.17±0.55 | 90.78±0.80 | 69.08±0.66 | 47.84±1.23 | 71.46 |

Table 4: Applying Prototype Graph Model (PGM) to pseudo-labeling based approaches. Average accuracy (%) is reported on domain generalization benchmarks, and average error rate (%) is reported on image corruption benchmarks. ↑ means higher is better, while ↓ means lower is better.

| Method | VLCS ↑ | PACS ↑ | OfficeHome ↑ | TerraIncognita ↑ | Avg. ↑ | CIFAR-10C ↓ | CIFAR-100C ↓ |
|--------|--------|--------|--------------|------------------|--------|-------------|--------------|
| T3A (Iwasawa & Matsuo, 2021) | 77.29±0.39 | 83.92±1.13 | 68.26±0.84 | 45.61±1.10 | 68.77 | 26.68 | 58.28 |
| T3A + PGM | 78.51±0.57 | 85.43±1.28 | 68.92±1.25 | 46.26±1.38 | 69.78 (+1.01) | 25.15 (-1.53) | 57.06 (-1.22) |
| TAST (Jang et al., 2023) | 77.66±0.48 | 84.11±1.22 | 68.63±0.70 | 47.43±2.09 | 69.46 | 26.61 | 60.74 |
| TAST + PGM | 78.38±0.78 | 85.41±1.49 | 69.03±0.92 | 48.42±1.79 | 70.31 (+0.85) | 25.34 (-1.27) | 59.65 (-1.09) |
| TAST-BN (Jang et al., 2023) | 73.52±1.37 | 89.16±0.47 | 68.88±0.50 | 41.47±2.88 | 68.26 | 13.08 | 37.82 |
| TAST-BN + PGM | 74.56±1.74 | 90.49±0.72 | 69.47±1.02 | 42.09±2.53 | 69.15 (+0.89) | 12.44 (-0.64) | 36.79 (-1.03) |

Image Corruption Benchmarks. Image corruption benchmarks are designed to evaluate the robustness and generalization ability of a classifier, pre-trained on clean data, on unseen corrupted samples. In Table 2, we compare PROGRAM with other TTA methods on the CIFAR-10C/100C and ImageNet-C benchmarks. Average error rate (top-1) is reported. ResNet-50 is used as the backbone for all methods. We follow TAST [Jang et al., 2023] and set the test batch size to 128 on the CIFAR-10C/100C datasets and 64 on the ImageNet-C dataset. Experimental results show that our proposed PROGRAM reduces the average error by 23.91% compared with no adaptation, significantly outperforming all the other TTA methods. Please refer to Appendix H for the full results.

4.3 Analysis

Different Batch Sizes. In Fig.
3, we report the average accuracy of various methods under different batch sizes on the PACS dataset with the ResNet-18 backbone. We observe that our approach outperforms other methods under different batch sizes. As the batch size increases, the superiority of our proposed PROGRAM becomes more remarkable. More analysis can be found in Sec. D.1.

Effect of Model Components. In Table 3, we conduct ablation studies to validate the effectiveness of the proposed Prototype Graph Model (PGM) and Robust Self-Training (RST) on the domain generalization datasets. Both PGM and RST consistently improve the performance of ResNet-18/50 backbones on all the datasets. Furthermore, we show that our PGM is "plug-and-play" for pseudo-label generation in Table 4. PGM is applied to typical pseudo-labeling based methods (i.e., T3A [Iwasawa & Matsuo, 2021] and TAST [Jang et al., 2023]) and replaces their pseudo-label generation stage (refer to Appendix G for more details). We report the results with the ResNet-50 backbone on domain generalization datasets and image corruption datasets. PGM improves T3A and TAST by a large margin, which demonstrates that PGM generates more reliable pseudo-labels compared with other pseudo-labeling approaches. More experiments can be found in Sec. D.2, D.3, and D.4.

Different Backbones. In Table 5, we validate our method on various backbone architectures with a test batch size of 128 following TSD [Wang et al., 2023]. Experimental results show that our proposed PROGRAM consistently improves upon different model architectures, including ViT [Dosovitskiy et al., 2020], ResNeXt [Xie et al., 2017], EfficientNet [Tan & Le, 2019], and MLP-Mixer [Tolstikhin et al., 2021].

Table 5: Results (average accuracy) with different backbones. All baseline models are trained in a standard ERM manner. ↑ means higher is better, and * denotes the results from (Wang et al., 2023).

| Backbone | PACS ↑ | OfficeHome ↑ | VLCS ↑ |
|-------------------|--------|--------------|--------|
| ViT-B/16* | 87.13 | 79.06 | 78.70 |
| +TSD* (Wang et al., 2023) | 88.20 | 81.80 | 79.00 |
| +PROGRAM | 91.96 | 82.63 | 83.08 |
| ResNeXt-50* | 86.67 | 72.66 | 78.50 |
| +TSD* (Wang et al., 2023) | 91.33 | 74.18 | 79.38 |
| +PROGRAM | 92.84 | 74.89 | 82.24 |
| EfficientNet-B4* | 85.11 | 74.65 | 77.14 |
| +TSD* (Wang et al., 2023) | 86.84 | 72.24 | 79.42 |
| +PROGRAM | 86.78 | 74.71 | 82.35 |
| Mixer-L/16* | 84.59 | 71.36 | 76.53 |
| +TSD* (Wang et al., 2023) | 88.47 | 74.82 | 79.75 |
| +PROGRAM | 90.29 | 75.46 | 82.88 |

Compared with TSD (Wang et al., 2023), PROGRAM achieves better performance across all the model architectures under the same setting.

Runtime Analysis. In Table 6, we report the speed of TTA methods with the ResNet-18 backbone on the PACS dataset, measured as the average runtime of fine-tuning the model using a batch of test samples with batch size 32 on a Titan XP GPU. Compared with methods that update the whole feature extractor (e.g., SHOT (Liang et al., 2020) and TSD (Wang et al., 2023)), the efficiency of PROGRAM is comparable. Please refer to Sec. C.6 for more details. We also present a partial-update variant of PROGRAM in Sec. D.6 to explore the trade-off between effectiveness and efficiency.
Table 6: Average runtime of TTA methods on the PACS dataset with the ResNet-18 backbone (batch size 32).

| Method | Runtime (s) |
|-------------------------|-------------|
| TentClf (Wang et al., 2021a) | 0.40 |
| TentAdapter (Wang et al., 2021a) | 0.43 |
| Tent (Wang et al., 2021a) | 15.17 |
| PLClf (Lee et al., 2013) | 0.43 |
| PL (Lee et al., 2013) | 20.75 |
| T3A (Iwasawa & Matsuo, 2021) | 0.58 |
| TAST (Jang et al., 2023) | 6.92 |
| TAST-BN (Jang et al., 2023) | 73.93 |
| SHOT (Liang et al., 2020) | 20.97 |
| SHOTIM (Liang et al., 2020) | 20.73 |
| TSD (Wang et al., 2023) | 19.73 |
| PROGRAM | 17.92 |

Figure 4: Sensitivity analysis regarding λ and β on the PACS dataset. The accuracy of the ERM baseline is shown with dotted black lines. Our PROGRAM is robust to different hyperparameters.

Sensitivity to hyper-parameters. We investigate the sensitivity of PROGRAM to the cost balance coefficient λ (cf. Eq. 5) and the loss balance term β (cf. Eq. 7). As shown in Fig. 4, our method is insensitive to these hyper-parameters and consistently improves upon the ERM baseline across different settings. In our implementation, we choose λ = 0.5 and β = 0.4 for the best performance.

Qualitative Analysis. In Fig. 5, we visualize the t-SNE (van der Maaten & Hinton, 2008) of feature embeddings extracted by the fine-tuned models on the CIFAR-10C dataset. The features learned by TSD (Wang et al., 2023) for different categories are not well separated, and some test samples are hard to distinguish due to the large domain shift. In comparison, our method generates compact and discriminative feature embeddings with tight clusters, demonstrating the superiority of PROGRAM.

5 CONCLUSIONS

In this work, we propose a novel pseudo-labeling based TTA method, termed PROtotype GRAph Model based pseudo-label learning (PROGRAM). In the pseudo-label generation stage, we propose the Prototype Graph Model (PGM), which combines the strengths of prototype-based and nearest-neighbor based methods to produce reliable pseudo-labels. In the self-training stage, Robust Self-Training (RST) is applied for test-time adaptation to resist noisy pseudo-labels. Extensive experiments demonstrate that our proposed PROGRAM consistently outperforms the existing methods on various domain generalization and image corruption benchmarks. Besides, we show that PROGRAM is "plug-and-play" and can be easily integrated into different backbone networks and various TTA methods.

Acknowledgement. This paper is partially supported by the National Key R&D Program of China No.2022ZD0161000 and the General Research Fund of Hong Kong No.17200622.

REFERENCES

Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In *International Joint Conference on Neural Networks (IJCNN)*, pp. 1–8. IEEE, 2020.

Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. *Adv. Neural Inform. Process. Syst.*, 27, 2014.

Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In *Eur. Conf. Comput. Vis.*, pp. 456–473, 2018.

Malik Boudiaf, Romain Mueller, Ismail Ben Ayed, and Luca Bertinetto. Parameter-free online test-time adaptation. In *IEEE Conf. Comput. Vis. Pattern Recog.*, pp. 8344–8353, 2022.

Sungha Choi, Seunghan Yang, Seokeon Choi, and Sungrack Yun. Improving test-time adaptation via shift-agnostic weight regularization and nearest source prototypes. In *Eur. Conf. Comput. Vis.*, pp. 440–458, 2022.

K. R. Chowdhary. Statistical learning theory. In *Fundamentals of Artificial Intelligence*, pp. 415–443, 2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *IEEE Conf. Comput. Vis. Pattern Recog.*, pp. 248–255, 2009. Mario Döbler, Robert A Marsden, and Bin Yang. Robust mean teacher for continual and gradual test-time adaptation. In *IEEE Conf. Comput. Vis. Pattern Recog.*, pp. 7704–7714, 2023. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In *Int. Conf. Mach. Learn.*, pp. 647–655. PMLR, 2014. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Geoffrey French, Michal Mackiewicz, and Mark Fisher. Self-ensembling for visual domain adaptation. *arXiv preprint arXiv:1706.05208*, 2017. Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. *arXiv preprint arXiv:1803.07728*, 2018. Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, and Sung-Ju Lee. Note: Robust continual test-time adaptation against temporal correlation. *Adv. Neural Inform. Process. Syst.*, 35:27253–27266, 2022. Raghuraman Gopalan, Ruohan Li, and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In *Int. Conf. Comput. Vis.*, pp. 999–1006. IEEE, 2011. Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. *arXiv preprint arXiv:2007.01434*, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *IEEE Conf. Comput. Vis. Pattern Recog.*, pp. 770–778, 2016. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *Int. Conf. Learn. Represent.*, 2019. Yusuke Iwasawa and Yutaka Matsuo. Test-time classifier adjustment module for model-agnostic domain generalization. *Adv. Neural Inform. Process. Syst.*, 34:2427–2440, 2021.
WM5G2NWSYC
This work often leverages meta-learning as a driving motivator. But how crucial is a meta-learning approach in light of works such as [7, 8], which highlight the limited importance of a full meta-learning objective for few-shot transfer?
Projected Subnetworks Scale Adaptation

Anonymous authors
Paper under double-blind review

Abstract

Large models support strong zero-shot and few-shot capabilities. However, updating these models on new tasks can break performance on previously seen tasks and on their zero/few-shot unseen tasks. Our work explores how to update zero/few-shot learners such that they can maintain performance on the seen/unseen tasks of previous tasks as well as on new tasks. By manipulating the parameter updates of a gradient-based meta learner as projected task-specific subnetworks, we show improvements in the ability of large models to retain seen and zero/few-shot task performance in online settings.

1 Introduction

The adaptation of deep neural networks has practical importance. It enables models to adapt to varying test-time distributions, attributed to shifts in time, person, environment, etc. The more difficult adaptation cases arise when there are no clear task boundaries, when the task was not seen during training, and when only few/zero samples are available to update a model. To tackle adaptation broadly, given a base learner optimizing its inner objective with respect to its assigned task, a meta learner computes the update to the base learner such that it optimizes its outer objective across a distribution of tasks (Hospedales et al., 2021). Scaling the size of models and training data has recently demonstrated strong zero/few-shot capabilities (e.g., GPT-3 (Brown et al., 2020), Chinchilla (Hoffmann et al., 2022)). Retaining this zero/few-shot capability becomes a challenge in an online setting. Prior continual learning methods (Lange et al., 2019) aim to retain performance on both prior and subsequent tasks, but do not evaluate the retention of zero/few-shot task performance. Ilharco et al. (2022) proposed an online algorithm that fine-tunes a large vision-language model on a new task, and performs well on the previous zero/few-shot tasks and the seen fine-tuned task. Task-specific representations within a large model can be difficult to disentangle and manipulate. Identifying and freezing subnetworks (e.g., APD (Yoon et al., 2020), WSN (Kang et al., 2022)) can help mitigate forgetting. A meta learner projects its representations onto a base parameter space. For a gradient-based meta learner, the meta parameters and base parameters reside in the same parameter space. By optimizing meta parameters in the manner of gradient-based meta-learning, we can project the task-specific representations (subnetworks) in the meta parameters to interpolatable, equidimensional base parameters (subnetworks) in one parameter space. Our proposed method, Subnetwork Projection (SNP), trains a meta learner to maximize the distance that the meta parameters can drift while returning the same base parameters. SNP++ additionally stores a memory buffer to access and manipulate the base parameters.

Contributions. Subnetwork Projection (SNP) is the first continual learner designed to retain seen and unseen zero/few-shot performance on both prior and subsequent tasks, outperforming existing baselines. By projecting subnetworks as equidimensional base parameters in the same space, SNP trains a model to sustain greater parameter drift while still retaining access to the original base parameters. With task-specific samples, SNP++ can manipulate the subnetworks encoded in the meta parameters, including adding, removing, combining, or switching subnetworks.

2 Related Work

Discrete representations.
Identifying and freezing task-specific subnetworks can minimize forgetting on prior tasks (Yoon et al., 2020; Kang et al., 2022). A concern with this discrete representation is its mutability. Once a subnetwork is identified and frozen, it cannot be transformed or switched to a

Table 1: Measuring the cosine distance between flattened parameters fine-tuned on each dataset against each other.

| Dataset | MSCOCO | ImageNet | CIFAR100 | STL10 | Caltech101 | StanfordCars | Flowers102 | GTSRB | Food101 | EuroSAT | FGVC/Aircraft |
|---------------|--------|----------|---------|-------|------------|--------------|-----------|-------|---------|---------|----------------|
| MSCOCO | 0.0000 | 0.0242 | 0.0241 | 0.0253 | 0.0242 | 0.0352 | 0.0064 | 0.0025 | 0.0298 | 0.0052 | 0.0332 |
| ImageNet | 0.0242 | 0.0000 | 0.0437 | 0.0464 | 0.0456 | 0.0572 | 0.0298 | 0.0264 | 0.0510 | 0.0289 | 0.0555 |
| CIFAR100 | 0.0241 | 0.0437 | 0.0000 | 0.0471 | 0.0452 | 0.0577 | 0.0300 | 0.0262 | 0.0508 | 0.0288 | 0.0558 |
| STL10 | 0.0253 | 0.0464 | 0.0471 | 0.0000 | 0.0470 | 0.0588 | 0.0311 | 0.0276 | 0.0536 | 0.0301 | 0.0570 |
| Caltech101 | 0.0242 | 0.0456 | 0.0457 | 0.0476 | 0.0000 | 0.0574 | 0.0298 | 0.0263 | 0.0519 | 0.0290 | 0.0556 |
| StanfordCars | 0.0242 | 0.0456 | 0.0457 | 0.0476 | 0.0000 | 0.0574 | 0.0000 | 0.0401 | 0.0519 | 0.0290 | 0.0545 |
| Flowers102 | 0.0064 | 0.0298 | 0.0300 | 0.0311 | 0.0298 | 0.0405 | 0.0000 | 0.0087 | 0.0352 | 0.0113 | 0.0386 |
| GTSRB | 0.0025 | 0.0264 | 0.0262 | 0.0276 | 0.0263 | 0.0372 | 0.0087 | 0.0000 | 0.0318 | 0.0075 | 0.0352 |
| Food101 | 0.0298 | 0.0510 | 0.0508 | 0.0536 | 0.0519 | 0.0622 | 0.0352 | 0.0318 | 0.0000 | 0.0340 | 0.0602 |
| EuroSAT | 0.0052 | 0.0289 | 0.0288 | 0.0301 | 0.0290 | 0.0396 | 0.0113 | 0.0075 | 0.0340 | 0.0000 | 0.0379 |
| FGVC/Aircraft | 0.0332 | 0.0555 | 0.0558 | 0.0570 | 0.0556 | 0.0645 | 0.0386 | 0.0352 | 0.0602 | 0.0379 | 0.0000 |

Table 2: Measuring the (Zero-shot Top-5 / Few-shot Top-1) accuracy between parameters fine-tuned on each dataset against each other. We baseline against the pretrained initialization trained on WIT and the fine-tuned model on MSCOCO (which was then used as the starting point for each subsequently fine-tuned model).
| Model trained on | MSCOCO | ImageNet | CIFAR100 | STL10 | Caltech101 | StanfordCars | Flowers102 | GTSRB | Food101 | EuroSAT | FGVC/Aircraft |
|------------------|--------|----------|---------|-------|------------|--------------|-----------|-------|---------|---------|----------------|
| Pretrained init | 23.1/47.0 | 83.5/86.9 | 69.7/82.8 | 99.7/94.0 | 85.3/76.0 | 82.6/75.2 | 84.0/90.1 | 56.0/30.6 | 86.7/74.6 | 76.0/69.0 | 44.0/32.2 |
| MSCOCO | 93.7/91.2 | 8.6/10.9 | 11.3/10.8 | 95.3/55.2 | 8.1/43.4 | 2.7/7.2 | 7.0/55.1 | 17.2/16.6 | 16.4/12.3 | 54.3/52.4 | 5.4/7.8 |
| ImageNet | 9.8/20.8 | 9.1/89.9 | 14.0/11.3 | 95.5/59.4 | 7.8/9.7 | 3.1/7.2 | 7.1/57.8 | 14.9/12.8 | 14.4/10.9 | 57.4/56.4 | 4.9/8.0 |
| CIFAR100 | 4.9/19.8 | 8.3/82.7 | 7.1/7.4 | 99.7/91.6 | 3.8/4.8 | 2.9/5.3 | 4.9/27.2 | 10.8/11.3 | 6.1/4.2 | 50.5/51.1 | 5.0/5.7 |
| STL10 | 6.0/39 | 2.6/32 | 7.1/7.4 | 99.7/91.6 | 2.9/5.3 | 2.9/7.4 | 5.4/43.7 | 12.5/13.8 | 5.6/6.3 | 53.9/54.0 | 4.6/3.0 |
| Caltech101 | 7.2/8.2 | 1.8/4.8 | 8.3/9.2 | 86.8/42.9 | 98.5/98.4 | 2.9/7.4 | 6.3/47.8 | 10.8/15.8 | 6.7/5.4 | 53.9/54.0 | 4.8/12.6 |
| StanfordCars | 1.7/9.5 | 1.9/7.1 | 6.5/7.5 | 68.0/43.2 | 3.1/5.0 | 94.0/95.7 | 6.3/47.8 | 9.5/15.2 | 6.0/7.0 | 52.3/53.3 | 3.8/8.4 |
| Flowers102 | 1.4/6.3 | 3.7/7.1 | 5.9/4.4 | 72.9/40.7 | 2.1/2.5 | 2.5/5.3 | 7.0/43.6 | 99.5/97.8 | 6.5/6.3 | 44.4/53.2 | 4.8/8.8 |
| GTSRB | 23.4/5.1 | 1.8/6.0 | 14.0/11.0 | 72.9/40.7 | 7.0/11.1 | 2.5/5.3 | 7.0/43.6 | 99.5/97.8 | 6.5/6.3 | 44.4/53.2 | 4.8/8.8 |
| Food101 | 27.2/5.2 | 1.4/6.0 | 6.8/10.9 | 69.1/37.4 | 5.8/5.5 | 2.5/5.8 | 5.0/52.4 | 7.9/12.6 | 9.3/2.7 | 48.9/58.8 | 5.7/6.6 |
| EuroSAT | 35.0/3.2 | 6.3/1.9 | 6.8/5.8 | 67.2/28.6 | 9.7/16.8 | 2.7/6.9 | 6.3/8.5 | 11.7/9.9 | 4.3/2.7 | 97.3/95.8 | 5.3/2.4 |
| FGVC/Aircraft | 4.2/4.0 | 1.7/5.0 | 7.6/7.1 | 61.1/29.6 | 4.1/8.6 | 2.3/3.2 | 6.2/44.7 | 12.9/12.0 | 8.5/6.2 | 55.2/56.7 | 80.5/99.3 |

different subnetwork if its source data is unavailable. This would be needed if most of the network is frozen and no leftover nodes are available for a new task. Task order affects the subnetwork arrangement, and changes in unfrozen subnetwork values may render the frozen subnetwork inaccurate. Network capacity would also be fixed; once all the nodes are frozen, a subnetwork would need to be removed in order for a new task to be learnt. Interpolation between subnetworks may not be viable due to different shapes. Subnetworks could also be constructed as stitchable layers of regular shapes (e.g., a linear layer), such as in model stitching (Csiszárik et al., 2021; Bansal et al., 2021) or feature adapters (Gao et al., 2021; Chen et al., 2022). These layers would need to be compatible and conditioned on the previous layers. Networks can also be modularly generated from architectural components (Andreas et al., 2016a,b).

Continuous representations. Instead of manipulating discrete, composable modules in neural networks, the weights/parameters of the network can be modified. Combining representations is a common technique, aiding in leveraging transferable properties between tasks as well as conserving capacity. Regularization-based continual learning strategies, such as EWC (Kirkpatrick et al., 2017) and SI (Zenke et al., 2017), use regularization terms to update parameters towards the new task while retaining pertinent representations of the prior task.
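As a concrete illustration of such a regularization term, the sketch below shows a minimal EWC-style quadratic penalty in PyTorch. The helper names and the availability of a precomputed diagonal Fisher estimate are our illustrative assumptions, not the exact recipe of Kirkpatrick et al. (2017).

```python
import torch

def ewc_penalty(model, old_params, fisher_diag, lam=1000.0):
    # Quadratic penalty anchoring each parameter to its value after the
    # previous task, weighted by an estimate of its importance (the
    # diagonal of the Fisher information matrix).
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Usage (hypothetical training step):
#   total_loss = task_loss + ewc_penalty(model, old_params, fisher_diag)
```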
Model merging and averaging have also been used to improve generalizability, robustness, and adaptability in online learning settings (Singh & Jaggi, 2019; Matena & Raffel, 2021; Wortsman et al., 2022; Ilharco et al., 2022). Other than interpolatability, residing in a continuous space enables these representations to be dynamically generated. Meta learners (e.g., MAML (Finn et al., 2017b), hypernetworks (Ha et al., 2016)) query a few samples from the task to compute the updated base parameters. Large models can support similar few-shot capabilities with parameter-efficient fine-tuning, such as prompt tuning (Sanh et al., 2022; Wei et al., 2022a,b; Zhou et al., 2022).

Table 3: Empirical equivalence between CLIP and MAML(CLIP). Both networks have the same number of parameters and architecture, but vary by training optimization procedure. They can both be trained to attain comparable (Zero-shot Top-5 / Few-shot Top-1) accuracy.

| Dataset | ImageNet (N=1000) | CIFAR100 (N=100) | STL-10 (N=10) | Caltech101 (N=102) | Stanford Cars (N=196) | Flowers102 (N=102) | GTSRB (N=43) | Food101 (N=101) | EuroSAT (N=10) | FGVC Aircraft (N=100) |
|---------------|-------------------|------------------|--------------|--------------------|-----------------------|---------------------|-------------|----------------|----------------|----------------------|
| CLIP | 93.7 / 91.2 | 8.6 / 10.9 | 11.3 / 10.8 | 95.3 / 55.2 | 8.1 / 43.4 | 2.7 / 7.2 | 7.0 / 55.1 | 17.2 / 16.6 | 16.4 / 12.3 | 54.3 / 52.4 |
| MAML(CLIP) | 94.6 / 95.8 | 9.8 / 12.1 | 13.3 / 14.6 | 99.1 / 50.4 | 10.8 / 37.8 | 2.3 / 6.4 | 7.4 / 68.5 | 19.2 / 23.3 | 11.4 / 16.6 | 52.8 / 57.6 |

3 GROUNDING SUBNETWORK PROJECTION IN META LEARNERS

Problem Setup. From a task set \( T = \{T_t\}_{t \in T} \), a base learner receives \( T \) tasks sequentially. \( T_t = \{x_t, y_t\} \) denotes the dataset of the \( t \)-th task. In the online/continual learning setting, given a loss function \( L \), a base learner \( f(\theta_{base}; x) \) optimizes its parameters \( \theta_{base} \) such that it performs well on the \( t \)-th task while minimizing the performance drop on the previous \( (t - 1) \) tasks. We further add the requirement of retaining the zero/few-shot performance on the unseen tasks \( V_t = \{V_{t,v}\}_{v \in V} \) of the \( t \)-th seen task. Hence, the objective is:

\[ \theta^*_{base} := \arg\min_{\theta_{base}} \sum_{t=1}^{T} \Big[ L(f(\theta_{base}; x_t), y_t) + \sum_{v=1}^{V} L(f(\theta_{base}; x_v), y_v) \Big]. \]

We measure the drift between two parameters with the distance function \( \text{dist}(\theta_0, \theta_1) \). The experimental setup is described in Section 6.1.

Algorithm 1 base_params
```
1: procedure base_params(θ, T = {T_t}_{t ∈ T}, K, lr_base)
2:   for T_t in T do
3:     for X^K_t, Y^K_t in T_t do
4:       θ_{base,t} = θ − lr_base ∂L(θ; X^K_t, Y^K_t) / ∂θ
5:   return {θ_{base,t}}_{t ∈ T}
```

3.1 TASK GROUPINGS

The pre-trained model was trained on WebImageText (WIT) (Radford et al., 2021), a dataset specially gathered with 400M (image, text) pairs, in contrast to 100K in MSCOCO (Lin et al., 2014). From the pre-trained initialization, we train on the first task of MSCOCO for 50 epochs, until the accuracy is high on MSCOCO and low for all the unseen datasets that the pre-trained model performed well on.
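A minimal PyTorch-style sketch of Algorithm 1 (base_params): one inner gradient step per task maps the shared meta parameters to task-specific base parameters. The functional model call model_fn(theta, x) and the list-of-tensors representation are our assumptions for illustration, not the exact training code.

```python
import torch

def base_params(theta, tasks, lr_base, loss_fn, model_fn):
    # theta: list of parameter tensors (requires_grad=True).
    # tasks: iterable of (x_support, y_support) N-way-K-shot support sets.
    # Returns one list of base parameters per task (cf. Algorithm 1).
    base = []
    for x_support, y_support in tasks:
        loss = loss_fn(model_fn(theta, x_support), y_support)
        grads = torch.autograd.grad(loss, theta, create_graph=True)
        base.append([p - lr_base * g for p, g in zip(theta, grads)])
    return base
```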
We reuse 10 datasets used in CLIP's evaluation, including ImageNet (Deng et al., 2009), CIFAR100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011), Caltech-101 (Fei-Fei et al., 2004), Stanford Cars (Krause et al., 2013), Oxford Flowers 102 (Nilsback & Zisserman, 2008), GTSRB (Stallkamp et al., 2012), Food-101 (Bossard et al., 2014), EuroSAT (Helber et al., 2017), and FGVC Aircraft (Maji et al., 2013). While MSCOCO contains (image, caption) pairs, the other datasets contain (image, label) pairs. Hence, with the prompt templates provided by Radford et al. (2021) for each dataset (e.g., "a photo of a CLASS"), we can convert labels to captions. While the pre-trained initialization performs well across the 11 datasets, the MSCOCO-finetuned model loses many transferable representations from WIT, such that the average zero/few-shot accuracy is low. MSCOCO is on a much smaller scale than WIT, in terms of the number of images (100K vs 400M), the range of classes (e.g., ImageNet labels such as stingray, tarantula, and mousetrap are not found in MSCOCO), and the diversity of images (e.g., MSCOCO contains natural and day-to-day scenes, while WIT contains natural scenes, sketches, blurry images, low-res images, texts, and websites). The lifelong goal is to gradually increase the average zero/few-shot accuracy across all tasks. Given the range of datasets, we evaluate zero/few-shot transferability between them, such that learning one dataset will yield high zero/few-shot performance on another dataset. First, we fine-tuned CLIP on each dataset from a MSCOCO-finetuned initialization. In Table 1, we measured the cosine distance between each pair of models fine-tuned on two different datasets. This indicates the spatial distance in the parameter space: which parameters are closer to each other, and which parameters require further gradient updates from the initialization. We are able to identify 3 sets of datasets, grouped by distance: (i) \( \leq 0.1 \), (ii) \( 0.1 – 0.3 \), (iii) \( \geq 0.3 \). In Table 2, we evaluate the functional distance of each fine-tuned model by computing the zero/few-shot performance per model on each dataset.

Table 4: After drifting the meta parameters to each task's base parameters, the newly-computed base parameters for each task have minimal drift with respect to the original base parameters.

| Computed base parameters | Distance w.r.t. base parameters of task=1 as meta parameters | Distance w.r.t. base parameters of task=2 as meta parameters | Distance w.r.t. base parameters of task=3 as meta parameters | Distance w.r.t. base parameters of task=4 as meta parameters | Distance w.r.t. base parameters of task=5 as meta parameters |
|--------------------------|-------------------------------------------------------------|-------------------------------------------------------------|-------------------------------------------------------------|-------------------------------------------------------------|-------------------------------------------------------------|
| Task 1 | 0.1827 | 0.0283 | 0.1370 | 0.1810 | 0.1319 |
| Task 2 | 0.1426 | 0.0283 | 0.1370 | 0.1810 | 0.1385 |
| Task 3 | 0.1667 | 0.1368 | 0.1008 | 0.1798 | 0.1511 |
| Task 4 | 0.1439 | 0.1600 | 0.1341 | 0.0181 | 0.0942 |
| Task 5 | 0.1338 | 0.1336 | 0.1238 | 0.1164 | |
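The parameter-space distances in Table 1 can be reproduced by flattening all the weights of each fine-tuned model into a single vector; a minimal sketch, assuming two PyTorch state_dicts with identical keys:

```python
import torch

def flat_cosine_distance(state_a, state_b):
    # Flatten every tensor in each state_dict into one long vector and
    # return 1 - cosine similarity between the two models' parameters.
    va = torch.cat([p.flatten().float() for p in state_a.values()])
    vb = torch.cat([p.flatten().float() for p in state_b.values()])
    return (1.0 - torch.dot(va, vb) / (va.norm() * vb.norm())).item()
```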
Based on the relational analysis between seen and unseen tasks, given the distance between models fine-tuned on each dataset (the relative cosine distance in the parameter space) and the zero/few-shot performance of each fine-tuned model, some clear task groupings can be identified. The order for the seen (unseen) tasks is: MSCOCO (FGVC Aircraft, EuroSAT, Food101) → ImageNet (STL10, StanfordCars) → Caltech101 (CIFAR100, GTSRB, Flowers102).

### 3.2 Disentangling Model Representations as Base Parameters

Meta learners are trained to develop zero/few-shot capabilities. Given meta parameters, some meta learners map input distributions to base parameters. The base parameters are task-specific representations and are dynamically generated with respect to the task. In the case of gradient-based meta learners, the meta parameters and base parameters reside in the same parameter space. A gradient is computed with respect to the new task's input distribution and applied to the meta parameters, and this returns the base parameters. As a result, we can project the task-specific representations and subnetworks within the meta parameters to the base parameter space, and use a gradient-based meta learner to retain the same model architecture and output space as training a model without a meta-learning training procedure. We train CLIP with MAML's (Finn et al., 2017a) training procedure for 10,000 epochs, with a meta learning rate and base learning rate of 0.0005. To retain the same scale of model and data, we use the same CLIP architecture and capacity, and retain the same dataset size by training MAML(CLIP) on Split-MSCOCO (Del Chiaro et al., 2020) (which organizes labelled MSCOCO into 5 tasks: transport, animals, sports, food, interior). We train MAML(CLIP) until it attains similar zero/few-shot performance to CLIP (Table 3).

### 3.3 Drift in a Meta Parameter's Subspace

When updating the meta parameters $\theta$ on a new task, $\theta$ may drift by some distance to $\theta'$. Given that the base parameters $\{\theta_{base,t}\}_{t \in T}$ and $\theta$ reside in the same parameter space, we evaluate how far $\theta$ can drift while returning the same base parameters, or base parameters within a bounded error $\varepsilon$, i.e., $\text{dist}(\theta_{base,t}, \theta'_{base,t}) \leq \varepsilon$. In Table 5, we first measure the Euclidean distance between the original meta parameters and their MAML-computed base parameters. This informs the approximate radius of the parameter subspace. In Table 4, we test whether drifting the meta parameter to each task's base parameter ($\theta = \theta_{base,t}$) will still allow us to compute the same task base parameters. Relative to the subspace radius, we find that the base parameters can indeed be re-located if the meta parameter is drifted to the end-points of the subspace. Given that the base parameters can be located if the drift is within the bounds of the subspace, we next evaluate whether the base parameters can be located if the drift exceeds the bounds of the subspace. In Table 6, we evaluate $S = 1000$ random parameters, and interpolate between each random parameter $\theta_{rand,s}$ and the meta parameter $\theta$ to return an interpolated meta parameter $\theta_{int} = (1 - r)\theta + r\theta_{rand,s}$ with interpolation coefficient $r$. We find that for varying interpolation coefficients (and thus varying Euclidean distances), once the Euclidean distance increases substantially, the computed base parameters drift in a similar fashion from the original base parameters.
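A sketch of this drift probe, reusing base_params from the sketch above: interpolate between the meta parameters and a random distant parameter, then measure how far the recomputed base parameters move. The Euclidean distance on flattened parameters matches the kind of measurement reported in Tables 5 and 6; the function names are our own.

```python
import torch

def drift_probe(theta, theta_rand, r, tasks, lr_base, loss_fn, model_fn):
    # theta_int = (1 - r) * theta + r * theta_rand, then compare the base
    # parameters computed from theta_int against those computed from theta.
    theta_int = [(1 - r) * p + r * q for p, q in zip(theta, theta_rand)]
    base = base_params(theta, tasks, lr_base, loss_fn, model_fn)
    base_int = base_params(theta_int, tasks, lr_base, loss_fn, model_fn)
    drifts = []
    for b, bi in zip(base, base_int):
        diff = torch.cat([(p - q).flatten() for p, q in zip(b, bi)])
        drifts.append(diff.norm().item())
    return drifts  # one base-parameter drift value per task
```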
As a result, we are interested in maximizing the radius of the parameter subspace in which the meta parameter can drift, while returning the same base parameters (within bounded error).

Table 5: Measuring the Euclidean distance between the original meta parameters of MAML(CLIP) and each task's base parameters; this is indicative of the subspace radius.

| Computed base parameters | Distance w.r.t. meta parameters |
|--------------------------|--------------------------------|
| Task 1 | 0.388 |
| Task 2 | 0.411 |
| Task 3 | 0.340 |
| Task 4 | 0.420 |
| Task 5 | 0.310 |

Algorithm 2 adaptive_beta
1: procedure adaptive_beta(β_meta, dist_meta,v, ε, {dist_meta,s,r}_{s∈S, r∈I}, T, K, lr_base)
2:   if ε = None then
3:     {θ_base,t} ← base_params(θ, T, K, lr_base)
4:     dist_list = {}
5:     for θ' in {θ_base,t} do
6:       {θ'_base,t} ← base_params(θ', T, K, lr_base)
7:       dist_list ← (1/T) Σ_t dist(θ'_base,t, θ_base,t)
8:     ε = max(dist_list)
9:   dist_meta := arg max_{s,r} [dist_meta,s,r | dist_base,s,r ≤ ε]
10:  β_meta = max(β_meta, β_meta × dist_meta / (dist_meta − dist_meta,v))
11:  return β_meta

Algorithm 3 train_space
1: procedure train_space(T = {T_t}_{t∈T}, K = 5, epochs = 10,000, lr_base = 0.0005, lr_meta = 0.0005, β_meta = 0.5, β_base = {β_base,t = 0.5}_{t∈T}, S = 1,000, I = [0.0001, 0.001, 0.01, 0.1], M = False or {})
2:   θ ← θ_init
3:   if M ≠ False then
4:     M ← {X^K_t, Y^K_t}_{t∈T} ▷ optional: store memory
5:   for epoch in epochs do
6:     {θ_base,t} ← base_params(θ, T, K, lr_base)
7:     for T_t in T do
8:       for X_t, Y_t in T_t do
9:         L_{T_t} = L(θ − lr_base ∂L(θ; X_t, Y_t)/∂θ; X_t, Y_t)
10:    for s in S do
11:      θ_rand,s ← random initialization
12:      for r in I do
13:        θ_int = (1 − r)θ + r θ_rand,s
14:        dist_meta,s,r = dist(θ, θ_int)
15:        {θ'_base,t} ← base_params(θ_int, T, K, lr_base)
16:        dist_base,s,r = Σ_t dist(θ_base,t, θ'_base,t)
17:    L_meta = Σ_s Σ_r dist_meta,s,r
18:    L_base = Σ_s Σ_r dist_base,s,r
19:    θ := θ − lr_meta Σ_t ∂L_{T_t}/∂θ − β_meta ∂L_meta/∂θ − β_base ∂L_base/∂θ
20:  return θ, M

4 Subnetwork Projection (SNP): Expand Projected Subspace to Support Drift

Given a model architecture, we can alter the training procedure to one of gradient-based meta-learning and project the subnetworks onto a base learner's parameter space. In the standard implementation, we assume no memory (M = False). We cannot access subnetworks, but we can regulate the training of the meta parameters such that we maximize the radius of the parameter subspace in which the meta parameter can drift, while returning the same base parameters within bounded error (Algorithms 3, 4). Referring to Algorithm 3, in each epoch, after computing the support-set loss w.r.t. the computed base parameters, we compute a set of distance regularization terms. Our selected distance function dist is the cosine distance.
We sample interpolated meta parameters at varying distances from the current epoch's meta parameters, and compute the cumulative drift in the meta parameters and base parameters against randomly-sampled distant parameters. With these loss terms, we update the meta parameters.

Table 6: The closer the interpolated meta parameters (i.e., the smaller the interpolation coefficient and the Euclidean distance), and within certain limits of drift, the less the base parameters drift.

| Interpolation coefficient | Meta parameter drift | Base parameter drift (avg across tasks) |
|--------------------------|----------------------|----------------------------------------|
| 0.001 | 0.327 | 0.0859 |
| 0.1 | 33 | 33 |
| 1 | 330 | 330 |

Algorithm 4 expand_space
1: procedure expand_space(θ, V = {V_v}_{v∈V}, K = 5, epochs = 500, lr_base = 0.0005, lr_meta = 0.0005, β_meta = 0.5, β_base = {β_base,v = 0.5}_{v∈V}, β_int = {β_int,v = 1.0}_{v∈V}, M = False or {X^K_t, Y^K_t}_{t∈T}, ε = 0.001 or None)
2:   if M ≠ False then ▷ optional: access subnetworks
3:     for X^K_t, Y^K_t in M do
4:       θ_base,t := θ − lr_base ∂L(θ; X^K_t, Y^K_t)/∂θ
5:   for V_v in V do
6:     for epoch in epochs do
7:       {θ_base,v} ← base_params(θ, V, K, lr_base)
8:       if M ≠ False then
9:         if β_int,v > 0 then
10:          for θ_base,v in {θ_base,v} do
11:            g := arg min_{g∈T} dist(θ_base,v, θ_base,g)
12:      for X_v, Y_v in V_v do
13:        L_v = L(θ − lr_base ∂L(θ; X_v, Y_v)/∂θ; X_v, Y_v)
14:      dist_meta,v = dist(θ, θ − lr_meta Σ_v ∂L_v/∂θ)
15:      if M ≠ False then
16:        for X^K_t, Y^K_t in M do
17:          if β_base,t > 0 then
18:            dist_base,t = dist(θ_base,t, θ − lr_base ∂L(θ; X^K_t, Y^K_t)/∂θ)
19:      if M ≠ False then ▷ optional: interp./remove
20:        if β_int,v > 0 then
21:          X^K_g, Y^K_g ← M
22:          X^K_v, Y^K_v ← V_v
23:          dist_int = dist(θ − lr_base ∂L(θ; X^K_v, Y^K_v)/∂θ, θ − lr_base ∂L(θ; X^K_g, Y^K_g)/∂θ)
24:      β_meta ← adaptive_beta(β_meta, dist_meta,v, ε)
25:      L_meta = dist_meta,v; L_int = dist_int
26:      L_base = Σ_t dist_base,t
27:      θ := θ − lr_meta Σ_v ∂L_v/∂θ − β_meta ∂L_meta/∂θ − β_base ∂L_base/∂θ − β_int,g ∂L_int/∂θ
28:  if M ≠ False then
29:    M ← {X^K_v, Y^K_v}_{v∈V}
30:  return θ, M

In an online setting (Algorithm 4), we perform distance regularization on the meta parameters (but not the base parameters, as M = False). Given knowledge of the subspace radius from the training procedure, while we measure the drift of the meta parameters, we are informed of when the base parameter error will increase (e.g., when the drift exceeds the radius). As such, we make use of an adaptive regularization coefficient procedure (Algorithm 2): when the meta parameters are closer to the end of the supported radius, the distance regularization coefficient increases accordingly.

5 SNP++: MEMORY-BASED SUBNETWORK ACCESS AND MANIPULATION

To query a subnetwork, we need task-specific data, in line with prior subnetwork literature. Unlike replay-based methods, we do not store extensive replay buffers; instead, the memory buffer is one instance of an N-way-K-shot support set for computing base parameters. The use of this task-specific support set introduces various interesting properties for manipulating the model. As the support set varies, we can map each input distribution to a unique subnetwork. As such, we have a continuous space of subnetworks. In the standard case, we intend to add new subnetworks. First we use the previous training procedure to maximize the subspace radius. For each new task, we can fine-tune our meta parameters w.r.t. the new dataset, while using the memory buffer to track and regularize the drift of each individual base parameter, as sketched below.
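A minimal sketch of this drift-regularized meta update (cf. Algorithms 3 and 4), assuming lists of PyTorch tensors for the meta parameters and for each stored base parameter; the flattened cosine distance and the plain gradient step are illustrative simplifications of the procedures above, not the exact implementation.

```python
import torch

def cosine_dist(params_a, params_b):
    # Cosine distance between two parameter lists, flattened into vectors.
    va = torch.cat([p.flatten() for p in params_a])
    vb = torch.cat([p.flatten() for p in params_b])
    return 1.0 - torch.dot(va, vb) / (va.norm() * vb.norm())

def snp_update(theta, theta_prev, task_losses, old_base, new_base,
               lr_meta, beta_meta=0.5, beta_base=0.5):
    # Task losses plus penalties on meta-parameter drift (SNP) and on the
    # drift of each memory-buffer base parameter (SNP++). Pass empty lists
    # for old_base/new_base to recover plain SNP (no memory, M = False).
    loss = sum(task_losses) + beta_meta * cosine_dist(theta, theta_prev)
    for b_old, b_new in zip(old_base, new_base):
        loss = loss + beta_base * cosine_dist(b_old, b_new)
    grads = torch.autograd.grad(loss, theta)
    return [(p - lr_meta * g).detach().requires_grad_()
            for p, g in zip(theta, grads)]
```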
Unlike SNP, we regularize both the drift in the meta parameters as well as the drift of each base parameter.

Table 7: Moving from the pre-trained initialization to Tasks 1-3, we present the (Zero-shot Top-5 / Few-shot Top-1) accuracy across each task and baseline method. We also measure backward transfer (BWT) (Lopez-Paz & Ranzato, 2017), which is the influence that learning a new task has on the performance on a previous task. Positive backward transfer occurs when learning a new task increases the performance on a preceding task. Negative backward transfer occurs when learning a new task decreases the performance on a preceding task. Rather than computing a single BWT score, we compute the average positive BWT over tasks whose performance is greater than or equal to the compared value, and the average negative BWT over tasks whose performance is lower. This helps us measure positive transfer as well as drawdown. For Task 3, we evaluate each method against its own Task 2 performance; otherwise, each method is evaluated against the previous task's fine-tuning performance.

Extending further from subnetwork addition, we can also evaluate subnetwork removal, combining (or interpolating between) subnetworks, and even switching subnetworks to alternate subnetwork modes. For subnetwork removal, we can choose not to freeze/regularize a specific task's subnetwork (e.g., setting its regularization coefficient to less than 1.0 for partial removal, or even to 0.0 to ignore its regularization). This does not actively remove the subnetwork, but it also does not actively preserve it. A use case is if a particular subnetwork causes interference, or if the capacity is needed for another task. In these cases, a new task's base parameter can overwrite this base parameter. For interpolating between subnetworks, rather than adding a new subnetwork entirely, we can save capacity and allow one task's base parameters to be used in multiple tasks.

| Task 1: MSCOCO | Fine-tuning | Fine-tuning | EWC | BatchEnsemble | GPM | CLIP-Adapter | PAINT | SNP | SNP++ (Add.) | SNP++ (Inter.) |
|----------------|-------------|-------------|-----|---------------|-----|-------------|-------|-----|--------------|--------------|
| ImageNet (N=1000) | 96.4 / 97.4 | 96.4 / 97.4 | 74.7 / 78.6 | 80.5 / 81.7 | 88.2 / 87.1 | 90.9 / 90.3 | 88.2 / 87.1 | 90.9 / 90.3 | 88.2 / 87.1 |
| CIFAR100 (N=100) | 83.2 / 83.2 | 83.2 / 83.2 | 74.6 / 79.0 | 75.7 / 78.6 | 75.9 / 77.6 | 81.0 / 81.3 | 78.6 / 86.4 | 80.9 / 85.1 | 78.6 / 86.4 |
| STL10 (N=10) | 95.3 / 55.2 | 97.2 / 61.7 | 95.4 / 54.9 | 91.3 / 56.2 | 88.0 / 57.6 | 96.3 / 56.6 | 96.1 / 55.9 | 94.5 / 52.4 | 97.3 / 50.5 |
| Caltech101 (N=102) | 81.1 / 43.4 | 82.8 / 50.5 | 83.7 / 46.8 | 85.5 / 44.9 | 77.1 / 46.2 | 91.4 / 50.1 | 82.4 / 51.1 | 94.7 / 47.8 | 97.1 / 46.1 |
| StanfordCars (N=196) | 72.7 / 31.4 | 73.3 / 31.4 | 74.1 / 31.4 | 75.1 / 31.4 | 76.1 / 31.4 | 77.1 / 31.4 | 78.1 / 31.4 | 79.1 / 31.4 | 78.1 / 31.4 |
| Flowers102 (N=102) | 70.5 / 55.1 | 67.6 / 60.8 | 65.6 / 56.3 | 68.5 / 59.0 | 67.5 / 54.8 | 71.5 / 52.2 | 69.5 / 59.4 | 74.6 / 63.7 | 76.6 / 61.4 |
| GTSRB (N=43) | 17.2 / 16.6 | 15.6 / 17.2 | 16.7 / 16.7 | 16.2 / 17.1 | 16.7 / 17.3 | 16.9 / 17.2 | 15.1 / 17.2 | 16.1 / 16.9 | 16.6 / 16.5 |
| Food101 (N=101) | 54.3 / 52.4 | 51.4 / 60.1 | 53.2 / 54.8 | 51.8 / 57.3 | 53.5 / 56.9 | 48.1 / 52.8 | 51.1 / 60.5 | 55.7 / 59.1 | 57.4 / 56.9 |
| EuroSAT (N=10) | 54.3 / 52.4 | 51.4 / 60.1 | 53.2 / 54.8 | 51.8 / 57.3 | 53.5 / 56.9 | 48.1 / 52.8 | 51.1 / 60.5 | 55.7 / 59.1 | 57.4 / 56.9 |
| FGVC-Aircraft (N=100) | 54.7 / 8 | 56.6 / 10.6 | 55.7 / 8.4 | 56.9 / 9.3 | 57.9 / 7.4 | 55.7 / 9.3 | 60.0 / 10.4 | 63.5 / 8.8 | 65.7 / 6.5 |
| Avg | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 | 23.0 / 22.0 |
| Pos BWT | 0.0 / 0.0 | 13.6 / 6.4 | 10.7 / 3.6 | 16.7 / 3.9 | 21.7 / 5.6 | 10.1 / 3.5 | 13.1 / 4.8 | 11.2 / 5.9 | 10.7 / 5.5 |
| Neg BWT | 0.0 / 0.0 | -10.0 / -13.5 | -4.0 / -2.4 | -2.1 / -2.0 | -3.3 / -1.3 | -5.3 / -2.3 | -2.6 / -1.1 | -2.0 / -1.9 | -1.2 / -2.9 |

| Task 2: ImageNet | Fine-tuning | Fine-tuning | EWC | BatchEnsemble | GPM | CLIP-Adapter | PAINT | SNP | SNP++ (Add.) | SNP++ (Inter.) |
|------------------|-------------|-------------|-----|---------------|-----|-------------|-------|-----|--------------|--------------|
| ImageNet (N=1000) | 15.7 / 11.7 | 80.9 / 28.8 | 65.7 / 24.9 | 71.1 / 24.6 | 67.1 / 25.8 | 76.7 / 25.4 | 77.0 / 25.0 | 76.6 / 27.3 | 77.3 / 27.0 |
| CIFAR100 (N=100) | 6.9 / 12.7 | 12.1 / 12.0 | 10.3 / 9.8 | 10.1 / 8.8 | 7.8 / 12.2 | 10.1 / 9.2 | 14.2 / 8.5 | 13.8 / 10.2 | 13.9 / 10.0 |
| STL10 (N=10) | 91.5 / 45.3 | 95.2 / 62.9 | 73.1 / 38.7 | 83.6 / 47.3 | 88.2 / 47.8 | 91.4 / 53.9 | 91.5 / 44.7 | 90.6 / 46.4 | 91.6 / 45.7 |
| Caltech101 (N=102) | 96.4 / 95.7 | 88.2 / 85.0 | 85.0 / 82.6 | 88.2 / 85.0 | 88.8 / 84.2 | 88.4 / 83.2 | 88.4 / 83.2 | 89.1 / 84.9 | 90.1 / 84.9 |
| StanfordCars (N=196) | 29.6 / 16.6 | 31.6 / 16.6 | 31.3 / 16.6 | 22.6 / 16.6 | 31.5 / 16.6 | 29.9 / 16.6 | 36.6 / 16.6 | 36.6 / 16.6 | 36.6 / 16.6 |
| Flowers102 (N=102) | 4.7 / 47.8 | 5.8 / 61.6 | 5.9 / 43.6 | 6.6 / 48.0 | 5.1 / 49.2 | 5.6 / 55.5 | 6.6 / 47.5 | 5.9 / 50.1 | 6.0 / 49.3 |
| GTSRB (N=43) | 10.5 / 7.9 | 14.7 / 16.9 | 11.6 / 10.6 | 15.5 / 10.6 | 11.7 / 10.7 | 12.4 / 10.6 | 14.4 / 11.8 | 13.9 / 12.2 | 14.1 / 11.4 |
| Food101 (N=101) | 8.2 / 8.3 | 8.2 / 8.3 | 6.7 / 6.7 | 6.7 / 6.7 | 6.7 / 6.7 | 6.7 / 6.7 | 6.7 / 6.7 | 6.7 / 6.7 | 6.7 / 6.7 |
| EuroSAT (N=10) | 48.8 / 46.4 | 53.9 / 62.6 | 41.9 / 56.5 | 51.5 / 52.1 | 49.8 / 48.5 | 49.5 / 56.5 | 50.8 / 48.4 | 54.4 / 55.7 | 55.0 / 54.9 |
| FGVC-Aircraft (N=100) | 5.4 / 6.6 | 5.4 / 9.3 | 6.0 / 6.0 | 5.3 / 7.0 | 5.4 / 7.0 | 5.4 / 7.0 | 5.4 / 7.0 | 5.8 / 7.8 | 6.1 / 7.8 |
| Avg | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 | 8.7 / 8.7 |
| Pos BWT | 87.9 / 23.3 | 27.6 / 8.0 | 38.6 / 23.6 | 39.1 / 20.1 | 27.1 / 20.0 | 26.5 / 8.2 | 74.2 / 25.1 | 79.7 / 43.3 | 80.4 / 43.7 |
| Neg BWT | -12.0 / -30.6 | -3.1 / -4.2 | -7.3 / -7.0 | -2.5 / -4.7 | -4.0 / -6.2 | -2.9 / -2.8 | -2.7 / -7.1 | -3.0 / -3.9 | -4.5 / -5.3 |

Table 8: Variations in hyperparameters and subnetwork manipulation strategies with SNP(++).

| Method | Avg accuracy after Task 1 | Avg accuracy after Task 2 | Avg accuracy after Task 3 |
|---------------------------|---------------------------|---------------------------|---------------------------|
| Rehearsal-free ablations | Interpolation | Mode switching | |
| SNP (β_base=0.1) | 33.9 / 35.8 | 26.1 / 38.8 | 39.4 / 33.6 |
| SNP (β_base=0.5) | 35.5 / 36.8 | 34.7 / 38.1 | 37.2 / 37.2 |
| SNP (β_base=1.0) | 36.5 / 38.1 | 36.3 / 41.0 | 36.5 / 34.9 |
| SNP++ (Add.) (β_base=0.1) | 36.5 / 35.4 | 37.0 / 36.6 | 38.6 / 35.8 |
| SNP++ (Add.) (β_base=0.5) | 35.9 / 34.7 | 37.7 / 36.5 | 34.1 / 34.9 |
| SNP++ (Add.) (β_base=1.0) | 37.0 / 38.3 | 37.1 / 36.8 | 34.7 / 36.2 |
| SNP++ (Add.) (β_base=1.0) | 29.8 / 36.9 | 33.4 / 32.9 | 39.3 / 37.2 |

Addition / Removal of subnetworks

We can first evaluate which existing base parameter is closest to the new task, and use this as the target base parameter. Then we can update the meta parameters such that, while the drift of the meta parameters and of the other, non-target base parameters is minimized, the target base parameter is updated towards the new task while still performing well on its prior tasks. For mode switching, the parameter space has many functionally-diverse modes that we may wish to use to replace an existing subnetwork in-place. For example, we could replace a task's base parameter with an adversarially-robust parameter (e.g., from adversarial training), a backdoor-robust parameter (e.g., from backdoor adversarial training), or domain-robust parameters, etc.
Rather than using the task's original training set to locate this mode, an alternative approach would actively sample the parameter space, and update the meta parameters while regularizing the drift of the replaced mode's base parameter, such that the new base parameter is computed with respect to the specific task. While it is possible to actively sample the base parameters iteratively to identify the ideal base parameter mode, this poses a risk that the target mode may cause the meta parameter to drift beyond the subspace radius. Thus, for our evaluation of the identification of a sharpness-aware mode (for low-loss basins, using SAM (Foret et al., 2021)), we actively sample meta parameters gradually moving from within the radius to outside of it, and for each sampled meta parameter we compute the base parameter and evaluate whether it satisfies the mode's evaluation condition (e.g., a flat basin).

6 EXPERIMENTS

6.1 METHODOLOGY

Model. We evaluate with CLIP (Radford et al., 2021), the standard vision-language model, specifically the pre-trained initialization of the ResNet-50 variant. For training/fine-tuning on a new task, we retain the Adam optimizer, decoupled weight decay regularization, temperature clipping, and a batch size of 32. From the pre-trained initialization, we train CLIP on MSCOCO for 50 epochs, until both loss convergence and verification that zero/few-shot performance across the datasets is weakened. We fine-tune for 10 epochs and also validate loss convergence.

Adaptation baselines. Fine-tuning (Single Task Learning) is a baseline in online/continual learning where the model sequentially learns each incoming task without any forgetting mitigation. Joint Training (Multi Task Learning) is a baseline where the model trains on all tasks jointly. We do not include the base task (MSCOCO), and evaluate when there are at least 2 tasks (Task 3). We evaluate against 5 baseline adaptation methods, including 3 general-purpose continual learning strategies and 2 large-model-specific adaptation strategies (that have also been evaluated on CLIP). Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) uses weight regularization to retain weights from previous tasks. The regularization strength for the weight penalty λ is 1,000. Gradient Projection Memory (GPM) (Saha et al., 2021) learns new tasks by performing gradient steps in the direction orthogonal to the gradient subspaces that are important to past tasks. We use a 0.01 learning rate. BatchEnsemble (Wen et al., 2020) uses a base network (slow weights) and stores separate parameters (fast weights) to compute the parameters per ensemble member, so N ensembles do not require N full sets of parameters. Each ensemble member is responsible for one task. We retain the -0.5 random sign initialization for fast weights and a 0.5 fast-weights learning rate multiplier. CLIP-Adapter (Gao et al., 2021) fine-tunes with feature adapters, specifically an additional bottleneck layer to learn new features and perform residual-style feature blending with the original pre-trained features. In line with the implementation for CLIP, we fine-tune the visual adapter. PAINT (Ilharco et al., 2022) is another vision-language model adaptation technique. It is a patching method that interpolates between parameters before fine-tuning and parameters after fine-tuning on a task to be patched.
We implement sequential patching, where we iteratively repeat the patching procedure on each new task, and pick the mixing coefficient that optimizes the average accuracy on the held-out validation sets from the supported and patching tasks. All baselines are trained with the CLIP model.

Zero-shot. Unlike visual systems trained on a fixed set of discrete labels, Radford et al. (2021) popularized a paradigm of learning to align images with texts in an open-vocabulary setting. For zero-shot classification, class labels are converted to sentences using prompt templates, and the model computes text embeddings for them. The model also computes the image embedding, and an image-text (cosine) similarity score is computed between the image embedding and each class's text embeddings. The per-class similarities, scaled by a temperature parameter, are normalized into a probability distribution via softmax; the class with the highest similarity to the image is the prediction. We perform zero-shot evaluation with the altered CLIP-Adapter model and with the task-indexed BatchEnsemble model. For the other methods, the model architecture and parameters are available for directly applying this zero-shot evaluation procedure.

**Few-shot.** For a given task, an N-way-K-shot support set (N classes, K samples per class) is provided for inference. Evaluation is performed on the query set. The meta learner computes the base parameters with respect to the support set. Specifically for gradient-based meta learners, including MAML and SNP(++), we compute the gradient of the support set with respect to the model parameters, update the model parameters, then evaluate on the query set. For standard CLIP and the other methods, we use nearest-class-mean classification: we first compute the mean image features per class in the support set, then measure the (cosine) distance between them and the image features of a given query-set image; the class with the nearest mean image features is the prediction.

### 6.2 Maintaining Zero/Few-Shot Capabilities

We compare our proposed method to baselines in Table 7, and to different configurations of our method in Table 8. Transferability between tasks plays an important role. From Task 1 to 3, we find that positive backward transfer exists across all baselines, and that some datasets have improved zero/few-shot performance with task shift. Furthermore, we find that the gradual removal of subnetworks with SNP++ may worsen performance. The removal of subnetworks is motivated by alleviating interference between the task-specific representations of two tasks; in this case, it appears that the attempted removal overwrites transferable representations. Further sub-procedures that identify the optimal subnetworks to remove based on a transfer-interference trade-off could improve the utility of subnetwork removal, especially in a setting with many tasks. Adaptation techniques specialized for large models (CLIP in particular) outperform general-purpose continual learning methods. For large models, regularization-based methods that do not require task indexing or separate context vectors can perform competitively with non-regularization-based methods. Our proposed adaptation methods, SNP and SNP++, outperform existing baselines. They consistently retain low negative backward transfer, fulfilling the objective of minimizing the loss of zero/few-shot performance with task shift.
They perform comparably in maximizing positive backward transfer. In terms of balancing between positive and negative backward transfer, SNP and SNP++ strike the best balance, attaining the highest average accuracy. We find that our proposed method works better with the memory buffer that regularizes the base parameter drift. Though we do not store trajectories or large replay buffers (only one support-set instance), pure-regularization SNP also performs stably. Different hyperparameters of $\beta_{\text{base}}$ and $\beta_{\text{meta}}$ tend to retain similar performance, and no major loss in accuracy is observed. We do note that setting $\beta_{\text{meta}}$ too low can worsen performance, particularly in comparison to non-SNP baselines. This may occur when the drift of the meta parameters is under-regularized, and regularizing base parameter drift alone is insufficient, acting only as a second-order regularizer of the meta parameter drift. We find that combining subnetworks and interpolating between them underperforms SNP and additive SNP++. Subnetwork addition/removal manipulates the number of base parameters, but results in first-order interpolation between the unchanged meta parameters and the new meta parameters with the modified subnetwork set. Thus, interpolating between subnetworks results in second-order interpolation, and the error with respect to each task accumulates when the meta parameters interpolate. Given a large number of tasks and lower model capacity, second-order interpolation offers an efficient subnetwork manipulation.

### 7 Conclusion

By projecting a model's subnetworks onto the same equidimensional parameter space as the (meta) parameters, we can edit the representations encoded in the network, including adding, removing, combining, and switching subnetworks. We apply this paradigm to achieve superior online/continual learning performance in retaining seen and zero/few-shot accuracy on prior and subsequent tasks. Not only does our method scale to large models, it also lays the foundation for further network-editing applications, such as subnetwork removal for privacy (e.g., machine unlearning), or subnetwork addition for distributional robustness (e.g., adding distributionally-varied samples for fairness or adversarial robustness).

REFERENCES

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016a.

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering, 2016b. URL https://arxiv.org/abs/1601.01705

Yamini Bansal, Preetum Nakkiran, and Boaz Barak. Revisiting model stitching to compare neural representations. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=ak06J5jNR4

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision, 2014.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. URL https://arxiv.org/abs/2005.14165 Shoufa Chen, Chongjian GE, Zhan Tong, Jianglu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=ATiz_CDA66 Riccardo Del Chiaro, Bartlomiej Twardowski, Andrew D. Bagdanov, and Joost van de Weijer. Ratt: Recurrent attention to transient tasks for continual image captioning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546. Adam Coates, Andrew Ng, and Honglak Lee. An Analysis of Single Layer Networks in Unsupervised Feature Learning. In AISTATS, 2011. https://cs.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf Adrián Csiszárík, Péter Kőrösi-Szabó, Akos K. Matszagosz, Gergely Papp, and Dániel Varga. Similarity and matching of neural network representations, 2021. URL https://arxiv.org/abs/2110.14633 Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248–255. Ieee, 2009. Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Pattern Recognition Workshop, 2004. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks, 2017a. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1126–1135. PMLR, 06–11 Aug 2017b. URL http://proceedings.mlr.press/v70/finn17a.html Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=6Tm1poslrM
2fSyBPBfBs
Since it is possible to encounter a bad $x_{out}$ for which $\min\{\|s\| : s \in \partial_{\delta} \varphi(x_{out})\}$ is large, can you explain in detail how the output of Algorithm 2 is chosen in the experiments?
Bilevel Optimization without Lower-Level Strong Convexity from the Hyper-Objective Perspective

Anonymous authors
Paper under double-blind review

Abstract

Bilevel optimization reveals the inner structure of otherwise oblique optimization problems, such as hyperparameter tuning, neural architecture search, and meta-learning. A common goal in bilevel optimization is to find stationary points of the hyper-objective function. Although this hyper-objective approach is widely used, its theoretical properties have not been thoroughly investigated in cases where the lower-level functions lack strong convexity. This work takes a step forward when the typical lower-level strong convexity assumption is absent. Our hardness results show that bilevel optimization with general convex lower-level functions is intractable. We then identify several regularity conditions of the lower-level problems that can provably confer tractability. Under these conditions, we propose the Inexact Gradient-Free Method (IGFM), which uses the Switching Gradient Method (SGM) as an efficient sub-routine, to find an approximate stationary point of the hyper-objective in polynomial time.

1 Introduction

The goal of bilevel optimization (BLO) is to minimize the upper-level (UL) function \( f(x, y) \) under the constraint that \( y \) minimizes the lower-level (LL) function \( g(x, y) \) on a closed convex set \( Y \subseteq \mathbb{R}^{d_y} \). Mathematically, it can be formulated as:

\[ \min_{x \in \mathbb{R}^{d_x},\, y \in Y^*(x)} f(x, y), \quad Y^*(x) = \arg \min_{y \in Y} g(x, y). \] (1)

BLO in this form has received increasing attention due to its wide applications in many machine learning problems, including hyperparameter tuning (Franceschi et al., 2018; Pedregosa, 2016), neural architecture search (Liu et al., 2019; Wang et al., 2022b; Zoph & Le, 2016; Zhang et al., 2021), meta-learning (Franceschi et al., 2018; Hospedales et al., 2021; Ravi & Larochelle, 2017; Pham et al., 2021), out-of-distribution learning (Zhou et al., 2022), adversarial training (Goodfellow et al., 2020; Sinha et al., 2018; Lin et al., 2020a,b), reinforcement learning (Konda & Tsitsiklis, 1999; Hong et al., 2023), and causal learning (Jiang & Veitch, 2022; Arjovsky et al., 2019). The hyper-objective approaches (Dempe, 2002; Dempe & Zemkoho, 2020; Liu et al., 2020, 2021) reformulate Problem (1) as

\[ \min_{x \in \mathbb{R}^{d_x}} \varphi(x), \quad \text{where } \varphi(x) = \min_{y \in Y^*(x)} f(x, y) \text{ is called the hyper-objective.} \] (2)

This transforms the problem into the composition of a simple BLO (Sabach & Shtern, 2017) w.r.t. the LL variable \( y \) and an unconstrained single-level optimization w.r.t. the UL variable \( x \). This reformulation naturally leads to two foundational problems. The first is P1: find an optimal LL variable \( \hat{y} \in Y^*(\hat{x}) \) such that \( \varphi(\hat{x}) = f(\hat{x}, \hat{y}) \) for a given \( \hat{x} \). The second is P2: find a UL variable \( \hat{x} \) that is a stationary point of \( \varphi(x) \). Both problems are easy to solve when the LL function is strongly convex. Lower-level strong convexity (LLSC) ensures that \( Y^*(x) \) is a singleton, and therefore simplifies Equation (2) into \( \varphi(x) = f(x, y^*(x)) \), where the LL optimal solution \( y^*(x) = \arg\min_{y \in Y} g(x, y) \) can be found via gradient descent on \( g \).
If we further assume \( Y = \mathbb{R}^{d_y} \), then the implicit function theorem indicates:

\[ \nabla \varphi(x) = \nabla_x f(x, y^*(x)) - \nabla^2_{xy} g(x, y^*(x)) [\nabla^2_{yy} g(x, y^*(x))]^{-1} \nabla_y f(x, y^*(x)). \] (3)

Then one can apply gradient steps with \( \nabla \varphi(x) \) to find a UL stationary point. This forms the basis of the classical hyper-objective approaches for BLO with LLSC (Ji et al., 2021). However, these methods heavily rely on the LLSC condition, which may not hold in many applications. This paper investigates BLO with only LL convexity, but without LLSC. Adding a regularization term to the LL function is a natural idea to ensure LLSC (Rajeswaran et al., 2019), but we show in Proposition 4.1 that an arbitrarily small regularization may lead to a large deviation in the hyper-objective. Furthermore, we construct hard instances to illustrate the intractability of BLO without LLSC, both for finding an LL optimal solution and for finding a UL stationary point. Firstly, we prove a lower bound in Proposition 4.2 to show that \( \varphi(x) \) is not computable in finitely many iterations for general convex functions. Secondly, we give a pair of \( f(x, y) \) and \( g(x, y) \) in Example 4.1 such that the resulting hyper-objective \( \varphi(x) \) is discontinuous and thus intractable to optimize. The constructions of these hard instances rely on the fact that a general convex LL function can be arbitrarily "flat". To avoid the intractability caused by this undesirable "flatness", we introduce two sufficient conditions that can provably confer tractability to BLO with only LL convexity: the gradient dominance condition (Assumption 5.1) and the weak sharp minimum condition (Assumption 5.2). Under these conditions, we propose novel algorithms to find an LL optimal solution and a UL stationary point, with non-asymptotic convergence guarantees:

**Finding an LL Optimal Solution.** We show that both conditions fall into a general class of Hölderian error bound conditions (Proposition G.1), under which we propose the Switching Gradient Method (SGM, Algorithm 1) to find an LL optimal solution in polynomial time (Theorem 6.1).

**Finding a UL Stationary Point.** We prove in Proposition 5.1 that both conditions imply the Lipschitz continuity of the solution mapping \( Y^*(x) \), which is shown to be both sufficient and necessary for the Lipschitz continuity of \( \varphi(x) \) in Proposition 4.3. Under the Lipschitz continuity of \( \varphi(x) \), we then propose the Inexact Gradient-Free Method (IGFM, Algorithm 2), which can provably converge to a Goldstein stationary point (Zhang et al., 2020) of the hyper-objective by incorporating SGM as an efficient sub-routine.

We compare the intractability and tractability results under different assumptions on the LL function in Table 1 and summarize our contributions as follows:

1. We formulate LL optimality and UL stationarity as valid criteria for BLO without LLSC (Section 3), which are necessary for an optimistic optimal solution (Dempe et al., 2006).

2. We provide hardness results to show that BLO without LLSC is generally intractable. Our analysis highlights the importance of sharpness in LL functions (Section 4).

3. We prove that when the LL function satisfies either the gradient dominance condition or the weak sharp minimum condition, the hyper-objective \( \varphi(x) \) is Lipschitz and thus Clarke differentiable (Section 5).
4. We propose novel polynomial time algorithms for BLO with LL convexity under either the gradient dominance or the weak sharp minimum condition (Section 6).

5. We conduct numerical experiments on adversarial training and hyperparameter tuning that showcase the superiority of our methods (Section 7).

## 2 RELATED WORKS

**BLO with LLSC.** Approximate implicit differentiation (AID) (Domke, 2012; Ghadimi & Wang, 2018; Pedregosa, 2016; Franceschi et al., 2018; Grazzi et al., 2020; Ji et al., 2021) and iterative differentiation (ITD) (Gould et al., 2016; Franceschi et al., 2017; Shaban et al., 2019; Bolte et al.,

| Assumption on LL function | LL Optimality | UL Stationary | Reference |
|--------------------------|--------------|---------------|-----------|
| Strongly convex | Tractable | Tractable | Known result |
| Convex with dominant gradients | Tractable | Tractable | Proved by this work |
| Convex with weak sharp minimum | Tractable | Tractable | Proved by this work |
| Only convex | Intractable | Intractable | Proved by this work |

Table 1: An overview of the theoretical results in this paper. We show that BLO without LLSC is generally intractable, but becomes tractable when the LL function satisfies either the gradient dominance or the weak sharp minimum condition.

are two representative families of methods with non-asymptotic convergence to a UL stationary point for BLO with LLSC. Due to their popularity, many improvements to AID and ITD have also been proposed (Chen et al., 2022; Hong et al., 2023; Yang et al., 2021; Ji & Liang, 2021; Ji et al., 2022; Dagréou et al., 2022).

**BLO without LLSC.** In the absence of LLSC, Arbel & Mairal (2022) showed that one can extend AID by replacing the inverse in Equation (3) with the Moore-Penrose inverse under a Morse-Bott condition on the manifold \( \{ y \in \mathbb{R}^{d_y} : \nabla_y f(x, y) = 0 \} \). Liu et al. (2020; 2021) extended ITD by proposing various methods to update the LL variable. However, all the methods mentioned above are limited to asymptotic convergence to an LL optimal solution and lack analysis for finding a UL stationary point. Due to the challenge of directly optimizing the hyper-objective, some concurrent works (Liu et al., 2022; Sow et al., 2022) reformulate Problem (1) via the value-function approach and show non-asymptotic convergence to the KKT points of this equivalent problem. However, since classical constraint qualifications provably fail for the reformulated problem (Ye & Zhu, 1995), the KKT condition is not even a necessary condition for a local minimum (Example A.1). In contrast, a UL stationary point is always a necessary condition. We leave a detailed comparison of our hyper-objective approach and the value-function approach to Appendix A.

3 PRELIMINARIES

3.1 NOTATIONS AND BACKGROUNDS

Basic Notation. Throughout this paper, we denote the LL solution mapping as \( Y^*(x) = \arg\min_{y \in Y} g(x, y) \), the LL value function as \( g^*(x) = \min_{y \in Y} g(x, y) \), and the hyper-objective as \( \varphi(x) = \min_{y \in Y^*(x)} f(x, y) \). If \( \varphi(x) \) has a finite minimum, we denote \( \varphi^* = \inf_{x \in \mathbb{R}^{d_x}} \varphi(x) \). We use \( \| \cdot \| \) to denote the \( \ell_2 \)-norm of a vector, and \( z[j] \) to denote the \( j \)-th coordinate of the vector \( z \). We use \( B_\delta(z) = \{ z' : \| z' - z \| \leq \delta \} \) to denote the \( \ell_2 \)-ball centered at \( z \) with radius \( \delta \).
We let \( \sigma_{\max}(A) \) denote the largest singular value of a matrix \( A \), and \( \sigma_{\min}^+(A) \) its smallest non-zero singular value.

**Constrained Optimization.** To tackle the possible constraint in \( y \), we introduce the definitions of the projection and the generalized gradient (Nesterov, 2018) as follows.

Definition 3.1 (Projection). We define the projection onto a set \( Y \) by \( P_Y(\cdot) := \arg\min_{y \in Y} \| y - \cdot \| \).

Definition 3.2 (Generalized Gradient). For an \( L \)-gradient Lipschitz function \( g(x, y) \) with \( y \in Y \), we define the generalized gradient with respect to \( y \) by \( G_\eta(y; x) := (y - P_Y(y - \eta \nabla_y g(x, y))) / \eta \) with some \( 0 < \eta \leq 1/L \). Note that the generalized gradient reduces to \( \nabla_y g(x, y) \) when \( Y = \mathbb{R}^{d_y} \).

**Set-Valued Analysis.** A classic notion of distance in set-valued analysis is the Hausdorff distance (Rockafellar & Wets, 2009), formally defined as follows.

Definition 3.3 (Hausdorff Distance). The Hausdorff distance between two sets \( S_1, S_2 \) is defined as
\[ \text{dist}(S_1, S_2) = \max \left\{ \sup_{x_1 \in S_1} \inf_{x_2 \in S_2} \| x_1 - x_2 \|, \ \sup_{x_2 \in S_2} \inf_{x_1 \in S_1} \| x_1 - x_2 \| \right\}. \]
This allows us to define the Lipschitz continuity of set-valued mappings as follows.

Definition 3.4. We call a set-valued mapping \( S(x) : \mathbb{R}^{d_1} \rightarrow \mathbb{R}^{d_2} \) locally Lipschitz if for any \( x \in \mathbb{R}^{d_1} \), there exist \( \delta > 0 \) and \( L > 0 \) such that for any \( x' \in \mathbb{R}^{d_1} \) satisfying \( \|x' - x\| \leq \delta \), we have \( \text{dist}(S(x), S(x')) \leq L\|x' - x\| \). We call \( S(x) \) Lipschitz if we can let \( \delta \rightarrow \infty \).

Note that the above definition generalizes the Lipschitz continuity of a single-valued mapping.

**Nonsmooth Analysis.** The following Clarke subdifferential (Clarke, 1990) generalizes both the gradients of differentiable functions and the subgradients of convex functions.

Definition 3.5 (Clarke Subdifferential). The Clarke subdifferential of a locally Lipschitz function \( h(x) : \mathbb{R}^d \rightarrow \mathbb{R} \) at a point \( x \in \mathbb{R}^d \) is defined by
\[ \partial h(x) := \text{Conv} \left\{ s \in \mathbb{R}^d : \exists x_k \rightarrow x, \nabla h(x_k) \rightarrow s, \text{ s.t. } \nabla h(x_k) \text{ exists for all } k \right\}. \]
It can be proved that finding a point with a small Clarke subdifferential is generally intractable for a nonsmooth nonconvex function (Zhang et al., 2020). So we need to consider the following relaxed definition of stationarity for non-asymptotic analysis in nonsmooth nonconvex optimization (Zhang et al., 2020; Tian et al., 2022; Davis et al., 2022; Jordan et al., 2023; Kornowski & Shamir, 2021; Lin et al., 2022; Cutkosky et al., 2023; Kornowski & Shamir, 2023).

Definition 3.6 (Approximate Goldstein Stationary Point). Given a locally Lipschitz function \( h(x) : \mathbb{R}^d \rightarrow \mathbb{R} \), we call \( x \in \mathbb{R}^d \) a \((\delta, \varepsilon)\)-Goldstein stationary point if \( \min \{\|s\| : s \in \partial_\delta h(x)\} \leq \varepsilon \), where \( \partial_\delta h(x) := \text{Conv} \left\{ \bigcup_{x' \in B_\delta(x)} \partial h(x') \right\} \) is the Goldstein subdifferential (Goldstein, 1977).

3.2 THE OPTIMALITY CONDITIONS

This section introduces the optimality conditions for BLO without LLSC used in this paper.
Firstly, we recall the definition of the optimistic optimal solution (Dempe et al., 2006), which is a standard optimality notion for the hyper-objective reformulation.

Definition 3.7. A pair of points \((x^*, y^*)\) is called a locally optimistic optimal solution to Problem 1 if \( y^* \in Y^*(x^*) \) and there exists \( \delta > 0 \) such that we have \( \varphi(x^*) \leq \varphi(x) \) and \( f(x^*, y^*) \leq f(x, y) \) for all \((x, y) \in B_\delta(x^*, y^*)\). It is called a globally optimistic optimal solution if we can let \( \delta \rightarrow \infty \).

A globally optimistic optimal solution is an exact solution to Problem 1, but its computation is NP-hard since \( \varphi(x) \) is generally nonconvex (Danilova et al., 2020). A common relaxation is to find a locally optimistic optimal solution, for which we can derive the following necessary conditions.

Proposition 3.1. Suppose \( f(x, \cdot) \) and \( g(x, \cdot) \) are convex, and \( \varphi(x) \) is locally Lipschitz. Then for any locally optimistic optimal solution \((x^*, y^*)\), we have \( 0 \in \partial \varphi(x^*) \), \( f(x^*, y^*) = \varphi(x^*) \) and \( g(x^*, y^*) = g^*(x^*) \).

This motivates us to use the following criteria for non-asymptotic analysis:

Definition 3.8 (UL Stationary). Suppose \( \varphi(x) \) is locally Lipschitz. We call \( \hat{x} \) a \((\delta, \varepsilon)\)-UL stationary point if it is a \((\delta, \varepsilon)\)-Goldstein stationary point of \( \varphi(x) \).

Definition 3.9 (LL Optimality). Fix an \( x \). Suppose \( f(x, \cdot) \) and \( g(x, \cdot) \) are convex. We call \( \hat{y} \) a \((\zeta_f, \zeta_g)\)-LL optimal solution if we have \( |f(x, \hat{y}) - \varphi(x)| \leq \zeta_f \) and \( g(x, \hat{y}) - g^*(x) \leq \zeta_g \).

The main focus of this paper is to discuss when and how one can design a polynomial time algorithm to achieve the above goals for any given positive precisions \( \delta, \varepsilon, \zeta_f, \zeta_g \).

Remark 3.1. In Definition 3.8, we assume that \( \varphi(x) \) is locally Lipschitz, which is one of the mildest conditions to ensure Clarke differentiability. However, it may not hold for BLO without LLSC, and we will give the sufficient and necessary condition for it later in Proposition 4.3. Definition 3.8 adopts Goldstein stationary points since \( \varphi(x) \) can be nonsmooth and nonconvex, so that traditional stationary points may be intractable, as we will show later in Example 5.1.

4 HARDNESS RESULTS FOR INTRACTABILITY

In this section, we provide various hardness results to show the challenges of BLO without LLSC. We first explain why one cannot manually regularize the LL function to ensure the LLSC condition. Subsequently, we demonstrate that both the task of finding an LL optimal solution and that of finding a UL stationary point can be intractable for BLO without LLSC.

4.1 Can Regularization Help?

One natural way to tackle BLO without LLSC is to add some small quadratic terms and then apply an algorithm designed under LLSC (Rajeswaran et al., 2019). However, we show that the regularization transforms $Y^*(x)$ from a set into a singleton, thus breaking the original problem structure.
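To see how even a vanishing regularization can change the problem, consider a minimal toy instance of our own (it is not the construction used in Proposition 4.1 below, whose hyper-objective is quadratic):
$$ Y = [-1, 1], \quad g(x, y) = 0, \quad f(x, y) = xy, \quad \hat{y} = 1, \quad Y^*_\lambda(x) := \arg\min_{y \in Y} \left\{ g(x, y) + \lambda \|y - \hat{y}\|^2 \right\}. $$
Then \( Y^*(x) = [-1, 1] \) and \( \varphi(x) = -|x| \), while for every \( \lambda > 0 \) we get \( Y^*_\lambda(x) = \{1\} \) and hence \( \varphi_\lambda(x) = x \). The deviation \( \varphi_\lambda(x) - \varphi(x) = x + |x| \) is unbounded in \( x \) no matter how small \( \lambda \) is; the same \( f \), \( g \), and \( Y \) reappear in Example 5.1 below.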
**Proposition 4.1.** Given a pivot $\hat{y}$, there exists a BLO instance, where both $f(x, y)$ and $g(x, y)$ are convex in $y$, and the resulting hyper-objective $\varphi(x)$ is a quadratic function, but for any $\lambda > 0$ the regularized hyper-objective
$$\varphi_\lambda(x) = \min_{y \in Y^*_\lambda(x)} f(x, y), \quad Y^*_\lambda(x) = \arg\min_{y \in Y} \left\{ g(x, y) + \lambda \|y - \hat{y}\|^2 \right\}$$
is a linear function with $|\inf_{x \in \mathbb{R}^{d_x}} \varphi_\lambda(x) - \inf_{x \in \mathbb{R}^{d_x}} \varphi(x)| = \infty$.

This proposition indicates that even if the regularization is arbitrarily small, the hyper-objectives before and after regularization can be completely different. Consequently, BLO without LLSC should be treated as a distinct research topic from BLO with LLSC.

4.2 Can we Find an LL Optimal Solution?

The goal of finding an LL optimal solution for a given $x \in \mathbb{R}^{d_x}$ is to solve the following problem:
$$\min_{y \in Y^*(x)} f(x, y), \quad Y^*(x) = \arg\min_{y \in Y} g(x, y). \quad (4)$$
This problem is usually called simple BLO (Beck & Sabach, 2014; Sabach & Shtern, 2017; Kaushik & Yousefian, 2021) since it involves only one variable $y$. However, it is not a “simple” problem, as the following result shows its intractability for general convex objectives.

**Proposition 4.2.** Fix an $x$. For any $K \in \mathbb{N}^+$, there exists $d_y \in \mathbb{N}^+$, such that for any $y_0 \in \mathbb{R}^{d_y}$, there exist a 1-Lipschitz linear function $f(x, \cdot)$ and a 1-gradient Lipschitz convex function $g(x, \cdot)$ such that for any first-order algorithm $A$ which initializes from $y_0 \in Y$ with $\text{dist}(y_0, \arg\min_{y \in Y^*(x)} f(x, y)) \leq \sqrt{2}$ and generates a sequence of test points $\{y_k\}_{k=0}^K$ with
$$y_k \in y_0 + \text{Span}\{\nabla_y f(x, y_0), \nabla_y g(x, y_0), \cdots, \nabla_y f(x, y_{k-1}), \nabla_y g(x, y_{k-1})\}, \quad k \geq 1,$$
it holds that $|f(x, y_k) - \varphi(x)| \geq 1$.

The key idea in the proof is to construct the LL function using the worst-case convex zero-chain (Nesterov, 2018), such that any first-order algorithm requires a large number of steps to approach the vicinity of the LL solution mapping $Y^*(x)$. The proof is provided in Appendix D, where we also prove a similar lower bound for Lipschitz LL objectives.

4.3 Can we Find a UL Stationary Point?

Besides the difficulty of finding an LL optimal solution, the goal of finding a UL stationary point is also challenging. Below, we show that the hyper-objective $\varphi(x)$ can be discontinuous without LLSC. Since continuity is one of the basic assumptions for almost all numerical optimization schemes (Nocedal & Wright, 1999), our hard instance indicates that $\varphi(x)$ may be intrinsically intractable to optimize for BLO without LLSC.

**Example 4.1.** Consider a BLO instance given by
$$\min_{x \in \mathbb{R}, y \in Y^*(x)} x^2 + y, \quad Y^*(x) = \arg\min_{y \in [-1, 1]} -xy.$$
The resulting hyper-objective $\varphi(x) = x^2 + \text{sign}(x)$ is discontinuous at $x = 0$.

In the above example, the discontinuity of $\varphi(x)$ comes from the discontinuity of the solution mapping, which satisfies $Y^*(x) = \{\text{sign}(x)\}$ for $x \neq 0$. Below, we prove that this statement and its reverse hold in general.

**Proposition 4.3.** Suppose the solution mapping $Y^*(x)$ is non-empty and compact for any $x \in \mathbb{R}^{d_x}$.

a. If $f(x, y)$ and $Y^*(x)$ are locally Lipschitz, then $\varphi(x)$ is locally Lipschitz.
b. Conversely, if $\varphi(x)$ is locally Lipschitz for any locally Lipschitz function $f(x,y)$, then $Y^*(x)$ is locally Lipschitz.

c. If $f(x,y)$ is $C_f$-Lipschitz and $Y^*(x)$ is $\kappa$-Lipschitz, then $\varphi(x)$ is $C_\varphi$-Lipschitz with coefficient $C_\varphi = (\kappa + 1)C_f$.

d. Conversely, if $\varphi(x)$ is $C_\varphi$-Lipschitz for any $C_f$-Lipschitz function $f(x,y)$, then $Y^*(x)$ is $\kappa$-Lipschitz with coefficient $\kappa = C_\varphi/C_f$.

Local Lipschitz continuity ensures that UL stationary points (Definition 3.8) are well-defined, while global Lipschitz continuity enables uniform complexity bounds for non-asymptotic analysis (as we will use in Section 6.2). According to the above theorem, ensuring the continuity of $Y^*(x)$ is the key to obtaining the desired continuity of $\varphi(x)$. This motivates us to focus on well-behaved LL functions that confer continuity on $Y^*(x)$.

5 SUFFICIENT CONDITIONS FOR TRACTABILITY

5.1 REGULARITY CONDITIONS FOR CONTINUITY

Since the constructions of the hard instances in the previous section all rely on very flat LL functions, our results underscore that sharpness of the LL function is essential to ensure the tractability of BLO. This observation inspires us to focus on more restricted function classes that possess sharpness, to circumvent the ill-conditioned nature of BLO without LLSC. Below, we introduce two conditions that correspond to different degrees of sharpness.

Assumption 5.1 (Gradient Dominance). Suppose $g(x,y)$ is $L$-gradient Lipschitz jointly in $(x,y)$, and there exists $\alpha > 0$ such that for any $x \in \mathbb{R}^{d_x}$, $y \in Y$ we have $\|G_{1/L}(y; x)\| \geq \alpha \, \text{dist}(y, Y^*(x))$.

Assumption 5.2 (Weak Sharp Minimum). Suppose $g(x,y)$ is $L$-Lipschitz in $x$, and there exists $\alpha > 0$ such that for any $x \in \mathbb{R}^{d_x}$, $y \in Y$ we have $g(x,y) - g^*(x) \geq 2\alpha \, \text{dist}(y, Y^*(x))$.

Both conditions are widely used in convex optimization (Burke & Ferris, 1993; Drusvyatskiy & Lewis, 2015). They are milder than LLSC, as they allow $Y^*(x)$ to be non-singleton. Despite being more relaxed, we demonstrate below that either of them leads to the continuity of $Y^*(x)$ and thus of $\varphi(x)$. The continuity of $\varphi(x)$ is crucial for designing algorithms to optimize it.

Proposition 5.1. Under Assumption 5.1 or 5.2, $Y^*(x)$ is $(L/\alpha)$-Lipschitz. Furthermore, if $f(x,y)$ is $C_f$-Lipschitz, then $\varphi(x)$ is $(L/\alpha + 1)C_f$-Lipschitz.

Therefore, the introduced conditions rule out discontinuous instances such as Example 4.1. It is worth noting that these conditions fundamentally differ from LLSC, as $\varphi(x)$ can be nonsmooth under these conditions, as exemplified below. The potential nonsmoothness of $\varphi(x)$ further justifies the use of Goldstein stationarity in Definition 3.8.

Example 5.1. Let $f(x,y) = xy$, $g(x,y) = 0$ and $Y = [-1, 1]$. We obtain a BLO instance satisfying both Assumption 5.1 and 5.2, but the resulting $\varphi(x) = -|x|$ is nonsmooth and nonconvex.

5.2 HOW TO VERIFY THE CONDITIONS?

One may wonder how to verify the introduced conditions in applications. This is non-trivial since the value of $\text{dist}(y, Y^*(x))$ is unknown. An easy case is Assumption 5.1 with $Y = \mathbb{R}^{d_y}$, which reduces to the Polyak–Łojasiewicz condition (Polyak, 1963): $\|\nabla_y g(x,y)\|^2 \geq 2\alpha(g(x,y) - g^*(x))$ by Theorem 2 in Karimi et al. (2016). This inequality allows us to identify the following examples that fall into Assumption 5.1.
Firstly, we can show that Assumption 5.1 strictly covers the LLSC condition.

Example 5.2. If $g$ is $L$-gradient Lipschitz and $\alpha$-strongly convex, then it satisfies Assumption 5.1.

Secondly, the following example, which both AID and ITD fail to optimize, satisfies Assumption 5.1.

Example 5.3. Consider the hard BLO instance proposed by Liu et al. (2020):
$$\min_{x \in \mathbb{R}, y \in Y^*(x)} (x - y_{[2]})^2 + (y_{[1]} - 1)^2, \quad Y^*(x) = \arg \min_{y \in \mathbb{R}^2} y_{[1]}^2 - 2xy_{[1]}.$$
The LL function satisfies Assumption 5.1 with $L = 1$ and $\alpha = 1/4$.

Thirdly, the BLO with least squares loss studied by Bishop et al. (2020) also satisfies Assumption 5.1. We leave more details of this model and its application to adversarial training to Section 7.1.

**Example 5.4.** Consider the BLO with least squares loss:
$$\min_{x \in \mathbb{R}^{d_x}, y \in Y^*(x)} \frac{1}{2n} \|Ax - y\|^2_2, \quad Y^*(x) = \arg \min_{y \in \mathbb{R}^{d_y}} \frac{1}{2n} \|Ax - y\|^2_M + \frac{\lambda}{2n} \|y - b\|^2_M,$$
where $A \in \mathbb{R}^{n \times d_x}$, $b \in \mathbb{R}^n$ represent the features and labels of the $n$ samples in the dataset, $\lambda > 0$, and $M$ is a positive semi-definite matrix that induces the semi-norm $\|z\|_M = \sqrt{z^\top M z}$. The LL function satisfies Assumption 5.1 with $L = (\lambda + 1)\sigma_{\max}(M)$ and $\alpha = (\lambda + 1)\sigma_{\min}^+(M)$.

### 6 THE PROPOSED METHODS

In this section, we propose novel polynomial time algorithms for BLO under Assumptions 5.1 and 5.2. In Section 6.1, we borrow ideas from switching gradient methods to overcome the difficulty of multiple LL minima. In Section 6.2, we propose a method motivated by zeroth-order optimization that can provably converge to a UL stationary point.

#### 6.1 FINDING AN LL OPTIMAL SOLUTION VIA SWITCHING GRADIENT METHOD

**Algorithm 1 SGM ($x, y_0, K_0, K, \tau, \theta$)**

1: $\mathcal{I} = \emptyset$, $\hat{y}_0 = y_0$
2: for $k = 0, 1, \cdots, K_0 - 1$
3: \hspace{1em} $\hat{y}_{k+1} = P_Y(\hat{y}_k - \tau \partial_y g(x, \hat{y}_k))$
4: end for
5: $\hat{g}^*(x) = g(x, \hat{y}_{K_0})$
6: for $k = 0, 1, \cdots, K - 1$
7: \hspace{1em} if $g(x, y_k) - \hat{g}^*(x) \leq 2\theta$
8: \hspace{2em} $y_{k+1} = P_Y(y_k - \tau \partial_y f(x, y_k))$
9: \hspace{2em} $\mathcal{I} = \mathcal{I} \cup \{k\}$
10: else
11: \hspace{2em} $y_{k+1} = P_Y(y_k - \tau \partial_y g(x, y_k))$
12: end for
13: $y_{\text{out}} = \frac{1}{|\mathcal{I}|} \sum_{k \in \mathcal{I}} y_k$
14: return $y_{\text{out}}$

In Equation 4, the LL constraint $y \in Y^*(x)$ is equivalent to the inequality constraint $g(x, y) \leq g^*(x)$. Based on this observation, we generalize Polyak’s Switching Gradient Method (Polyak, 1967) for functionally constrained problems to Algorithm 1, which applies when the following assumptions hold.

**Assumption 6.1.** Suppose that a. both $f(x, y)$ and $g(x, y)$ are convex in $y$; b. $Y$ is compact with diameter $R$; c. $f(x, y)$ is $C_f$-Lipschitz on $\mathbb{R}^{d_x} \times Y$; d. $g(x, \cdot)$ is $C_g$-Lipschitz on $Y$ for any $x \in \mathbb{R}^{d_x}$; e. either Assumption 5.1 or 5.2 holds for $g(x, y)$.

Under the above assumptions, we can prove the following result.

**Theorem 6.1.** Fix an $x$. Under Assumption 6.1, Algorithm 1 with appropriate parameters can output a point $y_{\text{out}}$ satisfying $|f(x, y_{\text{out}}) - \varphi(x)| \leq \zeta$ and $g(x, y_{\text{out}}) - g^*(x) \leq \zeta$ with $O(\text{poly}(1/\zeta))$ first-order oracle calls from $g$.
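A minimal NumPy sketch of Algorithm 1 may help fix ideas. The callables `f_subgrad`, `g_subgrad`, `g_val`, and `project` are our placeholders for the subgradient, evaluation, and projection oracles, and the parameter choices are left to Theorem 6.1 and Appendix G:

```python
import numpy as np

def sgm(x, y0, K0, K, tau, theta, f_subgrad, g_subgrad, g_val, project):
    """Sketch of Algorithm 1 (SGM); oracle callables are placeholders."""
    # Phase 1: estimate g*(x) by K0 projected subgradient steps on g(x, .).
    y_hat = np.copy(y0)
    for _ in range(K0):
        y_hat = project(y_hat - tau * g_subgrad(x, y_hat))
    g_star_hat = g_val(x, y_hat)

    # Phase 2: switch between an f-step (when y_k is nearly LL-optimal, i.e.
    # g(x, y_k) - g*(x) <= 2*theta) and a g-step that restores LL optimality.
    y = np.copy(y0)
    avg, count = np.zeros_like(y0, dtype=float), 0
    for _ in range(K):
        if g_val(x, y) - g_star_hat <= 2 * theta:
            avg, count = avg + y, count + 1          # index k joins the set I
            y = project(y - tau * f_subgrad(x, y))   # decrease the UL objective f
        else:
            y = project(y - tau * g_subgrad(x, y))   # decrease the LL objective g
    return avg / max(count, 1)                       # y_out: average of y_k over I
```

For instance, when \( Y \) is a box, `project` can simply be `lambda y: np.clip(y, lo, hi)`.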
The corresponding proof and the specific parameters of the algorithm can be found in Appendix G.

Algorithm 2 IGFM \((x_0, y_0, \eta, T, \delta, K_0, K, \tau, \theta)\)

1: Require: Sub-routine \(A\) can estimate \(\tilde{\varphi}(x) \approx \varphi(x)\) for any \(x \in \mathbb{R}^{d_x}\)
2: for \(t = 0, 1, \cdots, T - 1\)
3: Sample \(u_t \in \mathbb{R}^{d_x}\) uniformly from the unit sphere in \(\mathbb{R}^{d_x}\).
4: Estimate \(\tilde{\varphi}(x_t + \delta u_t)\) and \(\tilde{\varphi}(x_t - \delta u_t)\) by sub-routine \(A\).
5: \(\hat{\nabla}_t = \frac{d_x}{2\delta} (\tilde{\varphi}(x_t + \delta u_t) - \tilde{\varphi}(x_t - \delta u_t)) u_t\)
6: \(x_{t+1} = x_t - \eta \hat{\nabla}_t\)
7: end for
8: return \(x_{out}\) uniformly chosen from \(\{x_t\}_{t=0}^{T-1}\)

6.2 Finding a UL Stationary Point via Zeroth-Order Method

Without LLSC, the hyper-gradient \(\nabla \varphi(x)\) may not have an explicit form such as Equation (3). To tackle this challenge, we propose the Inexact Gradient-Free Method (IGFM) in Algorithm 2. The algorithm is motivated by recent advances in nonsmooth nonconvex zeroth-order optimization (Lin et al., 2022). Our zeroth-order oracle \(\tilde{\varphi}(x) \approx \varphi(x)\) is “inexact” since it is an approximation obtained from a sub-routine \(A\). Below, we show that when \(A\) guarantees sufficient approximation precision, the IGFM can provably find a Goldstein stationary point of a Lipschitz hyper-objective \(\varphi(x)\).

Assumption 6.2. Suppose that a. \(\varphi(x)\) is \(C_\varphi\)-Lipschitz; b. \(A\) ensures \(|\tilde{\varphi}(x) - \varphi(x)| \leq O(\delta^2 / (d_x C_\varphi))\) for any \(x \in \mathbb{R}^{d_x}\).

Theorem 6.2. Given any \(\varepsilon \lesssim C_f\), let \(\Delta = \varphi(x_0) - \varphi^*\). Under Assumption 6.2, set
\[
T = O \left( d_x^{3/2} \left( \frac{C_\varphi^4}{\varepsilon^4} + \frac{\Delta C_\varphi^3}{\delta^4} \right) \right), \quad \eta = \Theta \left( \sqrt{\frac{\delta (\Delta + \delta C_\varphi)}{d_x^{3/2} C_\varphi^3 T}} \right).
\]
Then Algorithm 2 can output a point \(x_{out}\) that satisfies \(\mathbb{E} \min \{ \|s\| : s \in \partial_\delta \varphi(x_{out}) \} \leq \varepsilon\).

Now it remains to verify Assumption 6.2. Assumption 6.2a can be satisfied via Proposition 4.3, while Assumption 6.2b can be satisfied via Theorem 6.1. Therefore, we have the following result.

Corollary 6.1. Suppose Assumption 6.1 holds. Set \(A\) as Algorithm 1. Then Algorithm 2 with appropriate parameters can output a \((\delta, \varepsilon)\)-Goldstein stationary point of \(\varphi(x)\) in expectation within \(O(\text{poly}(d_x, 1/\varepsilon, 1/\delta))\) zeroth-order and first-order oracle calls from \(f\) and \(g\).

To the best of our knowledge, this is the first theoretical analysis that shows non-asymptotic convergence to a UL stationary point for BLO without LLSC.
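Under the same caveats as before, a sketch of Algorithm 2 is given below; `phi_tilde` is our placeholder for the inexact oracle produced by the sub-routine \(A\) (e.g., running SGM at the query point and evaluating \(f\) there):

```python
import numpy as np

def igfm(x0, eta, T, delta, phi_tilde, rng=np.random.default_rng(0)):
    """Sketch of Algorithm 2 (IGFM); phi_tilde(x) approximates the hyper-objective."""
    d = x0.size
    xs, x = [], np.copy(x0)
    for _ in range(T):
        xs.append(np.copy(x))                     # record the iterate x_t
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                    # uniform direction on the unit sphere
        # Two-point estimate of the gradient of the delta-smoothed hyper-objective.
        grad_hat = d / (2.0 * delta) * (phi_tilde(x + delta * u)
                                        - phi_tilde(x - delta * u)) * u
        x = x - eta * grad_hat                    # gradient-free UL update
    return xs[rng.integers(T)]                    # x_out drawn uniformly from {x_t}
```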
7 Numerical Experiments

Table 2: MSE (mean ± std) achieved by different algorithms on the “abalone” dataset in adversarial training.

| Method | MSE |
|--------------|--------------|
| AID | 1.781 ± 0.418|
| ITD | 0.982 ± 0.015|
| BGS | 0.995 ± 0.259|
| BDA | 0.976 ± 0.014|
| BOME | 0.999 ± 0.140|
| IA-GM | 0.992 ± 0.013|
| IGFM (Ours) | 0.936 ± 0.015|

Table 3: Test accuracy (%) achieved by different algorithms on the MNIST dataset under different corruption rates \(p\) in hyperparameter tuning.

| Method | \(p = 0.5\) | \(p = 0.3\) | \(p = 0.1\) |
|--------------|-------------|-------------|-------------|
| AID | 75.8 | 87.5 | 91.3 |
| ITD | 75.8 | 87.5 | 91.3 |
| BGS | 75.8 | 87.5 | 91.3 |
| BDA | 81.2 | 89.3 | 91.5 |
| BOME | 86.7 | 88.9 | 89.3 |
| IA-GM | 86.9 | 90.3 | 90.5 |
| IGFM (Ours) | 88.4 | 91.0 | 91.8 |

In this section, we compare IGFM with different baselines, including AID with conjugate gradient (Maclaurin et al., 2015), ITD (Ji et al., 2021), BGS (Arbel & Mairal, 2022), BDA (Liu et al., 2020), BOME (Liu et al., 2022), and IA-GM (Liu et al., 2021), in the following two applications of BLO without LLSC.

### 7.1 Adversarial Training

Brückner & Scheffer (2011) proposed modeling adversarial training via BLO. In this model, the learner aims at finding the optimal parameter \( x \), subject to the data \( y \) being modified by an adversarial data provider. Following Bishop et al. (2020) and Wang et al. (2021; 2022a), we use the least squares loss for both \( f \) and \( g \), as in Example 5.4. In the LL loss, we use a diagonal matrix \( M \) to assign different weights to each sample, and a ridge term \( \|y - b\|_M^2 \) to penalize the data provider for manipulating the original labels \( b \). We set half the diagonal elements of \( M \) evenly in \([\sigma_{\min}, \sigma_{\max}]\) and set the rest to zero. We let \( \lambda = 1 \), \( \sigma_{\max} = 1 \) and \( \sigma_{\min} = 10^{-9} \).

For BDA, we choose \( s_u = s_l = 1 \), \( \alpha_k = \mu/(k + 1) \) and tune \( \mu \) from \(\{0.1, 0.5, 0.9\}\) as in Liu et al. (2020). For BOME, we choose the default option for \( \phi_k \) and tune \( \eta \) from \(\{0.9, 0.5, 0.1\}\) as in Liu et al. (2022). For IGFM, we choose \( \delta = 10^{-3} \) and tune \( \theta \) from \(\{10^{-1}, 10^{-2}, 10^{-3}\}\). For all algorithms, we tune the learning rates in \(\{10^2, 10^1, 10^0, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}\). We run all the algorithms for 500 UL iterations, with 10 LL iterations per UL iteration.

Table 2 compares the mean squared error (MSE), measured by the value of \( \varphi(x) \), achieved by the algorithms on the “abalone” dataset from LIBSVM (Chang & Lin, 2011). AID has poor performance because it requires taking the inverse of \( \nabla_y^2 g(x, y) \), which is ill-conditioned in this experiment. Among all the algorithms, IGFM achieves the lowest mean value of MSE, and its variance is also maintained at a relatively low level.

### 7.2 Hyperparameter Tuning

We consider tuning the optimal \( \ell_2 \) regularization for logistic regression to avoid overfitting a noisy training set \( D^{tr} \), based on the performance on a clean validation set \( D^{val} \). We let the UL variable \( x \) be the log-transformed regularization coefficient to avoid the constraint \( x \geq 0 \) (Pedregosa, 2016; Bertrand et al., 2020), and the LL variable \( y \) be the weights of the model. The problem can be formulated as BLO with:
\[
f(x, y) = \frac{1}{|D^{val}|} \sum_{(a_i, b_i) \in D^{val}} \ell(\langle a_i, y \rangle, b_i),
\]
\[
g(x, y) = \frac{1}{|D^{tr}|} \sum_{(a_i, b_i) \in D^{tr}} \ell(\langle a_i, y \rangle, b_i) + \exp(x)\|y\|^2,
\]
where \((a_i, b_i)\) is the \(i\)-th feature-label pair in the dataset, and \( \ell(\cdot, \cdot) \) is the cross-entropy loss. We use the MNIST dataset (LeCun, 1998) in this experiment. We use 40,000 images for \( D^{tr} \) and 20,000 images for \( D^{val} \). We corrupt \( D^{tr} \) by assigning random labels with probability \( p \) (Liu et al., 2022).
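For concreteness, these two objectives can be written in a few lines; the sketch below uses binary labels in \(\{-1, +1\}\) with the logistic loss for simplicity, whereas the actual experiment is multi-class:

```python
import numpy as np

def loss(logit, label):
    # Logistic (cross-entropy) loss for a single example with label in {-1, +1}.
    return np.log1p(np.exp(-label * logit))

def f_val(y, D_val):
    # UL objective: average validation loss of the LL weights y (independent of x).
    return np.mean([loss(a @ y, b) for a, b in D_val])

def g_val(x, y, D_tr):
    # LL objective: training loss plus exp(x) * ||y||^2; parameterizing the
    # coefficient as exp(x) keeps it positive while x stays unconstrained.
    return np.mean([loss(a @ y, b) for a, b in D_tr]) + np.exp(x) * (y @ y)
```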
We follow the same hyperparameter selection strategy as Section 7.1, and run all the algorithms with 100 UL iterations. Table 3 reports the accuracy evaluated on the testing set with 10,000 images under different levels of \( p \). It can be seen that IGFM achieves the highest accuracy among all algorithms. Note that AID / ITD / BGS have similar performances since AID and ITD are proven to be consistent under LLSC (Ji et al., 2021) and BGS is a combination of them. ### 8 Conclusions and Discussions This paper gives a comprehensive study of BLO without the typical LLSC assumption. We provide hardness results to show the intractability of this problem and introduce several key regularity conditions that can confer tractability. Novel algorithms with non-asymptotic convergence are proposed as well. Experiments on real-world datasets support our theoretical investigations. Although this paper focuses primarily on the theoretical level, we expect our theory can shed light on efficient algorithm design for BLO applications in practice. We also hope our work can be a good starting point for non-asymptotic analysis for more challenging BLO problems, such as BLO with nonconvex LL functions or BLO with intertwined inequality constraints \( h(x, y) \leq 0 \). REFERENCES Michael Arbel and Julien Mairal. Non-convex bilevel games with critical point selection maps. In NeurIPS, 2022. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. Amir Beck and Shoham Sabach. A first order method for finding minimal norm-like solutions of convex optimization problems. Mathematical Programming, 147(1):25–46, 2014. Quentin Bertrand, Quentin Klopfenstein, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, and Joseph Salmon. Implicit differentiation of lasso-type models for hyperparameter optimization. In ICML, 2020. Nicholas Bishop, Long Tran-Thanh, and Enrico Gerding. Optimal learning from verified training data. In NeurIPS, 2020. Jérôme Bolte, Tam Le, Edouard Pauwels, and Tony Silveti-Falls. Nonsmooth implicit differentiation for machine-learning and optimization. In NeurIPS, 2021. Michael Brückner and Tobias Scheffer. Stackelberg games for adversarial prediction problems. In SIGKDD, 2011. Sébastien Bubeck et al. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015. James V. Burke and Michael C. Ferris. Weak sharp minima in mathematical programming. SIAM Journal on Control and Optimization, 31(5):1340–1359, 1993. Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM transactions on intelligent systems and technology, 2(3):1–27, 2011. Tianyi Chen, Yuejiao Sun, and Wotao Yin. A single-timescale stochastic bilevel optimization method. In AISTATS, 2022. Frank H. Clarke. Optimization and nonsmooth analysis. SIAM, 1990. Christian Clason. Nonsmooth analysis and optimization. arXiv preprint arXiv:1708.04180, 2017. Ashok Cutkosky, Harsh Mehta, and Francesco Orabona. Optimal stochastic non-smooth non-convex optimization through online-to-non-convex conversion. In ICML, 2023. Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, and Thomas Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. In NeurIPS, 2022. Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, and Innokenti Shibaev. Recent theoretical advances in non-convex optimization. 
arXiv preprint arXiv:2012.06188, 2020. Damek Davis, Dmitriy Drusvyatskiy, Yin Tat Lee, Swati Padmanabhan, and Guanghao Ye. A gradient sampling method with complexity guarantees for lipschitz functions in high and low dimensions. In NeurIPS, 2022. Stephan Dempe. Foundations of bilevel programming. Springer Science & Business Media, 2002. Stephan Dempe and Alain Zemkoho. Bilevel optimization. Springer optimization and its applications, 161, 2020. Stephan Dempe, Vyatcheslav V. Kalashnikov, and Nataliya Kalashnykova. Optimality conditions for bilevel programming problems. Optimization with Multivalued Mappings: Theory, Applications, and Algorithms, pp. 3–28, 2006. Justin Domke. Generic methods for optimization-based modeling. In AISTATS, 2012.
sOXKeeVxqW
However, the paper falls short in providing a comprehensive explanation regarding the rationale behind their selection and the potential implications if alternative encoders were to be used. Elaborating on the specific reasons for choosing these models and discussing the potential consequences of substituting them with other models is essential to enhance the transparency and completeness of the methodology.
MoleSG: A Multi-Modality Molecular Pre-training Framework by Joint Non-overlapping Masked Reconstruction of SMILES and Graph

Anonymous authors Paper under double-blind review

Abstract

Self-supervised pre-training plays an important role in molecular representation learning because labeled molecular data are usually limited in many tasks, such as chemical property prediction and virtual screening. However, most existing molecular pre-training methods focus on a single modality of molecular data, and the complementary information of two important modalities, SMILES and graph, is not fully explored. In this study, we propose a straightforward yet effective multi-modality pre-training framework for Molecular SMILES and Graph (MoleSG). Specifically, the SMILES sequence data and graph data are first tokenized so that they can be processed by a unified transformer-based backbone network, which is trained with a masked reconstruction strategy. In addition, we introduce a specialized non-overlapping masking strategy to encourage fine-grained interaction between these two modalities. Experimental results show that our framework achieves state-of-the-art performance in a series of molecular property prediction tasks, and a detailed ablation study demonstrates the efficacy of the multi-modality structure and the masking strategy.

1 Introduction

Efficient molecular representation learning is foundational to drug discovery (David et al., 2020; Huang & Von Lilienfeld, 2016). With the advancement of deep learning, data-driven molecular representation learning has found applications in various domains, such as chemical property prediction (Duvenaud et al., 2015), virtual screening (Stumpfe & Bajorath, 2020), molecular design (Magar et al., 2021), and more. However, since most molecular label data need to be obtained through labor-intensive and costly wet experiments (Brown et al., 2019), there is a lack of sufficient labeled molecular data, which hinders the development of deep learning methods and can lead to issues like overfitting and poor generalization (Rong et al., 2020). Self-supervised learning, which involves pre-training on unlabeled data and fine-tuning with labeled data on downstream tasks, holds substantial research value in addressing these challenges. It has shown significant promise in enhancing the performance of molecular representation learning on many downstream tasks (Xie et al., 2022).

Molecules can be described using various modalities, such as fingerprints, sequences, graphs, and more (Xia et al., 2023). Currently, molecular pre-training predominantly focuses on a single modality (Xia et al., 2023), with only a little attention given to methods jointly dealing with multiple modalities (Liu et al., 2021; Zhu et al., 2021). This paper addresses the problem of jointly pre-training on two molecular modalities: the Simplified Molecular-Input Line-Entry System (SMILES) (Weininger, 1988) and the molecular graph. As depicted in Figure 1, the same molecule can be represented using both a SMILES sequence and a graph, with each modality having its unique advantages and disadvantages. SMILES is a compact implicit representation of the molecule that excludes single-bond representation, making it well-suited for rapid compound retrieval and identification (Quirós et al., 2018).
Additionally, the SMILES sequence, being a text string, can be processed with transformer-based networks well-developed in the Natural Language Processing (NLP) field for feature extraction, in which the self-attention mechanism weights and combines information from any position in the input sequence, thereby facilitating the capture of global contextual information (Chithrananda et al., 2020; Wang et al., 2019). However, SMILES representations only capture the relationships between atoms and bonds. They often struggle to capture the complex structural and topological information of molecules, such as the number and positions of rings, the length of side chains, and other intricate details that can be crucial in drug efficacy prediction (Lim et al., 2021; Zhang et al., 2022).

Figure 1: Comparison of two molecular representation modalities, SMILES and graph. (a) Illustration of the topological differences between the two modalities. SMILES represents topology implicitly, while graph displays explicit topology. (b) Difference in attention mechanisms used for feature processing in the two modalities. A global attention mechanism is usually used for SMILES, while a local attention mechanism can be easily implemented for graph.

Graph representations offer explicit portrayals of atoms, bonds, and their interconnections, showcasing the topological structure of molecules (Xiong et al., 2019). They provide detailed chemical information about molecules, including attributes for each atom, such as element type, charge state, and stereochemistry, and attributes for each bond, like bond type and bond length (Hall et al., 1991). However, Graph Neural Networks (GNNs), commonly used to extract features from graphs, primarily rely on message-passing layers to gather information from neighboring nodes, emphasizing the capture of local contextual information. This can lead to a disadvantage in capturing global context due to information decay when delivering messages between non-adjacent nodes (Zhou et al., 2020). As a result, for the same molecule, SMILES and graph encode molecular features from different perspectives, offering complementary information. The rational combination of these two modalities holds promise for enhancing molecular representation performance.

There are several existing works on multi-modality molecular pre-training (Liu et al., 2021; Zhu et al., 2021; Liu et al., 2022). For example, GraphMVP (Liu et al., 2021) focuses on joint pre-training with 2D graphs and 3D graphs. However, these two modalities exhibit high similarity. Additionally, that study only proved that 3D geometry complements 2D topology in downstream tasks, without proving that 2D topology complements 3D geometry. DVMP (Zhu et al., 2021) extracts features from the SMILES and the graph of the same molecule for contrastive learning. All these existing methods lack fine-grained cross-modality interactions, and there is no existing work that effectively explores the complementary information between SMILES and graph. The challenge of combining these two markedly different modalities more efficiently lies in how to promote fine-grained information exchange, e.g., at the atom level, rather than only performing contrastive learning at the entire-molecule level.

In this paper, we propose MoleSG, a simple yet effective framework for exploring the complementary information between SMILES and graph in molecular pre-training.
Specifically, recognizing that both words in SMILES sequences and graph nodes can be treated as transformer tokens (Hu et al., 2023; Huang et al., 2022), we first introduce a transformer-based unified backbone network for jointly processing embeddings from both modalities to facilitate interactions between them. Our framework consists of two independent encoders that separately convert the masked SMILES and the masked graph of an input molecule into token embeddings. The embeddings from the two modalities are concatenated and inputted into a standard transformer for joint processing, and the output is used to reconstruct the original SMILES and graph by two modality-specific decoders. Our framework is trained by reconstruction losses. Furthermore, to enhance cross-modality interaction, we introduce a dedicated non-overlapping masking strategy, in which we establish the positional correspondence between the SMILES sequence and the graph of a molecule to ensure that regions masked in SMILES and graph do not overlap. Intuitively, the information used for reconstructing the masked tokens can come from the context within the same modality, as well as from the tokens of corresponding structures in the other modality. Therefore, our non-overlapping masking strategy masks information within one modality to encourage the model to learn information from the other modality, thereby strengthening interactions between the two modalities.

To evaluate the effectiveness of MoleSG, we conduct experiments on 14 downstream tasks related to molecular property prediction, and MoleSG achieves state-of-the-art (SOTA) performance in all tasks. We also compare it with the same network pre-trained on a single modality, and the experimental results show that multi-modality training learns richer molecular representation knowledge. Our contributions are as follows: (1) We propose MoleSG, a novel molecular pre-training framework that utilizes the complementary information of SMILES and graph representations, resulting in improved performance; (2) We introduce an innovative non-overlapping masking strategy and a unified network for handling two distinct modalities, allowing for fine-grained interaction between SMILES and graph representations and achieving better representation learning; (3) MoleSG achieves SOTA performance in a series of molecular property prediction tasks, and a detailed ablation study demonstrates the efficacy of the multi-modality structure and the masking strategy.

2 RELATED WORK

Molecular single-modality self-supervised learning: Molecular single-modality self-supervised learning can be broadly categorized into contrastive and generative approaches. Most contrastive methods work on the graph modality by bringing augmented graphs from the same molecule closer while pushing those from different molecules farther apart, and they focus on global molecular information. For instance, MolCLR (Wang et al., 2022) employs diverse graph augmentation techniques for contrastive learning pre-training. FraSICL (Zhang et al., 2023) divides the same molecule into different fragment pairs based on semantics, enabling contrastive learning. KANO (Fang et al., 2023) incorporates an additional knowledge-graph-based augmentation to improve the performance of contrastive learning. Generative approaches primarily predict masked molecular components using an encoder-decoder pattern, with an emphasis on learning information at the local level.
For example, GROVER (Rong et al., 2020) is designed for the 2D graph modality and encompasses masked generative self-supervised tasks at the node and edge levels. Uni-mol (Zhou et al., 2023) focuses on the 3D graph modality and achieves effective 3D spatial representation learning through 3D position recovery and masked atom prediction tasks on a large dataset. Both SMILES-BERT (Wang et al., 2019) and ChemBERTa (Chithrananda et al., 2020) are designed for the SMILES modality and utilize a “cloze-style” generative pre-training approach. Molecular multi-modality self-supervised learning: GraphMVP (Liu et al., 2021) leverages correspondences and consistencies between 2D graph and 3D graph to perform both contrastive and generative self-supervised learning and inject 3D information into 2D molecular graph encoders. MoleculeSTM (Liu et al., 2022) focuses on molecular graphs and text descriptions, using a contrastive learning strategy to learn the consistency between the chemical structure of molecules and their textual descriptions. DVMP (Zhu et al., 2021) addresses both SMILES and graph modalities, employing a contrastive learning approach to learn SMILES information encoded by transformer and graph information encoded by GNN from the same molecule. DVMP focuses on the same two modalities as we do but it neglects interactions between fine-grained information across different modalities. 3 METHOD In this section, we will begin with providing an overview of our pre-training framework. Next, we will detail our data preprocessing procedures and introduce our innovative non-overlapping masking alignment strategy, which aims to encourage interaction between the two modalities. Following that, we will describe our network containing specialized encoders, backbone, and specialized decoders. 3.1 OVERVIEW OF MOLESG As shown in Figure 2, MoleSG learns features jointly from SMILES and graph by performing masked reconstruction on both modalities with a unified feature extraction backbone network. Concretely, for a given molecule, we first convert its SMILES sequence into tokens and calculate features for nodes and edges in the graph. Then, we randomly mask some node features in the graph and then mask a portion of SMILES tokens corresponding to the remaining unmasked atoms in the graph, so that we can perform non-overlapping masking to facilitate the interaction of information between the two modalities. Figure 2: Overview of MoleSG. The SMILES sequence and the graph of a molecule are first randomly masked using the non-overlapping masking strategy. Then they are individually encoded by independent encoders, and the SMILES embeddings and the graph embeddings are concatenated and inputted into a transformer backbone for joint processing. Finally, processed features belonging to each modality are decoded into token ids and graph nodes for the reconstruction proxy task. During pre-training, we employ a symmetric joint encoder-decoder framework to perform further feature extraction. The framework consists of two independent branches for the two modalities and a shared backbone for feature fusion. The independent encoder branches encode the data of two different modalities into a unified form i.e. embedding, which is suitable for understanding by a transformer backbone (Hu et al., 2023; Huang et al., 2022). The shared transformer backbone can learn the dependencies between atoms within and across the modalities and output features for the subsequent independent decoders. 
Finally, the SMILES decoder and the graph decoder reconstruct the original SMILES sequence and graph based on the output of the backbone. Different from prior works (Liu et al., 2021; Zhu et al., 2021; Zhang et al., 2023), the core of MoleSG lies in the specially designed masking strategy and the unified network capable of handling data of different modalities. We will introduce the details of our masking strategy in Section 3.2, followed by a comprehensive presentation of our network architecture in Sections 3.3–3.5.

Figure 3: Non-overlapping masking strategy. (a) Non-overlapping masking strategy: masks in the SMILES sequence and the graph for the same molecule do not overlap. (b) Non-overlapping masking strategy pipeline: first, we establish a correspondence between atom indexes in both modalities. Then, random masking is applied to the graph, followed by mapping the masked atoms from the graph to the SMILES sequence. Finally, random masking on the SMILES sequence is applied to the remaining unmasked atoms of the graph.

### 3.2 Non-overlapping Masking Strategy

The non-overlapping masking strategy we propose is illustrated in Figure 3. It can be divided into two steps: first performing atom index alignment between the two modalities, and then performing non-overlapping masking.

**Step 1: Atom index alignment.** Initially, for a given input molecule, we define its molecular graph as $G = (V, E)$, where $V$ and $E$ represent the sets of atoms and edges, respectively. Following the method of CoMPT (Chen et al., 2021), we precompute the node features $V_{feature} = \{v_{f0}, v_{f1}, ..., v_{f(m-1)}\}$, where $m$ is the number of atoms, and then represent the SMILES sequence as a set of tokens $S_1 = \{s_0, s_1, ..., s_{n-1}\}$, where $n$ is the total number of tokens. The SMILES tokens can be categorized into three classes: (1) atoms, including single-character atoms like C and N, multi-character atoms like Ca and Au, and ions like [Cl-] and [Fe+3]; (2) chemical bonds, represented by symbols like ‘#’ and ‘=’; (3) other symbols, such as the numbers ‘1’ and ‘2’ indicating the positions of atoms in a ring, and the parentheses ‘(’ and ‘)’ denoting side chains. Given that single bonds are often omitted in SMILES, achieving a one-to-one correspondence between the two modalities for chemical bonds is not practical. Therefore, in this paper, we focus on aligning the atom indexes. Specifically, we gather the tokens representing the atoms and assign indexes to them to establish a consistent correspondence between the atoms in the graph $G_1$ and those in the filtered SMILES token set $S_2$.

**Step 2: Masking strategy.** We randomly mask atomic features on the graph via $M_G : G_1 \mapsto G_2$, where $G_2$ is the masked graph, and the set of masked atom indexes on $G_2$ is defined as $I_G$. Following that, we randomly mask atomic tokens on the SMILES sequence via $M_S : S_2 \mapsto S_3$, where $S_3$ is the preliminarily masked SMILES sequence, and the set of masked atom indexes on $S_3$ is denoted as $I_S$. To encourage better interaction between the two modalities, we set the overlap ratio between masked atoms in both modalities to 0, forcing one modality to learn the “correct answer” from the other modality. Specifically, based on the one-to-one correspondence of atom indexes, we localize the positions of the masked atoms onto the SMILES sequence. Through the operation $P : I_S \leftarrow I_S \setminus (I_G \cap I_S)$, which maps $S_3 \mapsto S_4$, where $S_4$ is the final masked SMILES sequence, we avoid masking atoms on the SMILES sequence that are already masked on the graph.
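A minimal sketch of this two-step masking is given below. It is our own illustration: the helper names and the exact sampling details are assumptions, and the mask ratios follow Section 4.1.

```python
import random

def non_overlapping_mask(num_atoms, atom_token_pos,
                         graph_ratio=0.25, smiles_ratio=0.15, seed=0):
    """Return (I_G, masked SMILES token positions) with I_G and I_S disjoint.

    atom_token_pos: list mapping atom index -> position of that atom's token
    in the SMILES token sequence (the alignment built in Step 1).
    """
    rng = random.Random(seed)
    atoms = list(range(num_atoms))

    # Step 2a: randomly mask node features on the graph (index set I_G).
    i_g = set(rng.sample(atoms, k=int(graph_ratio * num_atoms)))

    # Step 2b: mask SMILES atom tokens only among atoms NOT masked on the
    # graph, enforcing I_S ∩ I_G = ∅, so the "correct answer" for every
    # masked token stays visible in the other modality.
    candidates = [a for a in atoms if a not in i_g]
    k = min(len(candidates), int(smiles_ratio * num_atoms))
    i_s = set(rng.sample(candidates, k=k))

    return i_g, {atom_token_pos[a] for a in i_s}
```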
### 3.3 Encoder

To facilitate the interaction of fine-grained features across the modalities, we use two independent encoders to convert the data of the two entirely different modalities into embeddings of the same dimension, which can then be further processed by a transformer. For the SMILES sequence, we adopt the method used in RoBERTa (Liu et al., 2019b). We first convert the masked SMILES sequence into a sequence of token ids following ChemBERTa (Chithrananda et al., 2020), and we expand its vocabulary by conducting a comprehensive analysis of all tokens in our dataset, as detailed in Appendix E. Then, we calculate the corresponding embeddings $F_S \in \mathbb{R}^{N_S \times d}$ with a vanilla transformer, where $N_S$ represents the number of SMILES tokens and $d$ is the feature dimension. For the graph, we precompute the same node features and edge features as CoMPT (Chen et al., 2021) does. After that, a portion of the node features are randomly masked, and then we feed them into the graph encoder. Our graph encoder is the same as that used in CoMPT (Chen et al., 2021), which consists of several message-passing layers. After repeated message passing in the graph encoder, we finally obtain token embeddings $F_G \in \mathbb{R}^{N_G \times d}$ for the nodes, where $N_G$ is the number of atoms and $d$ is the feature dimension.

### 3.4 Unified Backbone

Given that the two modalities are represented as embeddings of the same dimension, we can easily use a simple unified network to learn fine-grained features in both modalities. We first add trainable parameters to $F_S \in \mathbb{R}^{N_S \times d}$ and $F_G \in \mathbb{R}^{N_G \times d}$ and then concatenate them. The concatenated embeddings $F_{S,G} \in \mathbb{R}^{(N_S+N_G) \times d}$ are then fed into the backbone. Here, we use the transformer encoder employed in RoBERTa (Liu et al., 2019b) as the backbone network, whose multi-head self-attention mechanism facilitates information interaction between token embeddings both within the same modality and across different modalities.

### 3.5 Decoder

After feature extraction in the backbone, we split the output features $F'_{S,G} \in \mathbb{R}^{(N_S+N_G) \times d}$ into features $F'_S \in \mathbb{R}^{N_S \times d}$ for SMILES and features $F'_G \in \mathbb{R}^{N_G \times d}$ for the graph. $F'_S$ and $F'_G$ are the features for the individual modality-specific mask reconstruction tasks. Specifically, $F'_S$ is fed into the LM head of RoBERTa (Liu et al., 2019b) to predict the masked token ids, while $F'_G$ is inputted into a lightweight network, GIN (Xu et al., 2018), after re-masking (Hou et al., 2022) to reconstruct the masked node features. We calculate the cross-entropy loss $L_{EN}$ (Liu et al., 2019b) for SMILES reconstruction and the SCE loss $L_{SCE}$ (Hou et al., 2022) for graph reconstruction. Finally, the overall loss for the entire task is: $L_{Total} = L_{EN} + L_{SCE}$.
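The joint forward pass described in Sections 3.3–3.5 can be sketched as follows. The module and parameter names are ours, the two encoders are abstracted away as inputs, and we render the “trainable parameters” added to each modality as per-modality embedding vectors, which is an assumption:

```python
import torch
import torch.nn as nn

class JointBackbone(nn.Module):
    """Sketch of the unified backbone: concatenate, attend jointly, split."""
    def __init__(self, d=256, n_layers=6, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Trainable vectors added to F_S and F_G before concatenation.
        self.mod_embed = nn.Parameter(torch.zeros(2, d))

    def forward(self, f_s, f_g):
        # f_s: (B, N_S, d) masked-SMILES token embeddings from the SMILES encoder;
        # f_g: (B, N_G, d) node embeddings from the graph encoder.
        n_s = f_s.size(1)
        tokens = torch.cat([f_s + self.mod_embed[0], f_g + self.mod_embed[1]], dim=1)
        out = self.backbone(tokens)        # self-attention within and across modalities
        return out[:, :n_s], out[:, n_s:]  # F'_S and F'_G for the two decoders
```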
### 3.6 Fine-tuning

We conduct fine-tuning on 14 downstream molecular property prediction tasks. Since previous works only utilize a single modality in the downstream tasks, we also take a single modality as input to achieve a fair comparison. Moreover, as a single-modality input has a distribution inconsistent with that of two modalities, the backbone that takes two modalities as input during pre-training may suffer a performance decrease during fine-tuning. Therefore, we discard the backbone during fine-tuning and inference. In other words, we retain only a single modality-specific encoder during fine-tuning and inference. Our experiment in Section 4.3.3 also verifies this choice.

## 4 Experiments

### 4.1 Implementation Details

**Datasets setup:** During the pre-training stage, we sample 250,000 unlabeled molecules from ZINC15 (Sterling & Irwin, 2015), which is a comprehensive collection of chemical compounds for drug discovery and computational chemistry research. During the fine-tuning stage, we utilize 14 benchmark datasets from MoleculeNet (Wu et al., 2018), covering molecular data from various domains, including pharmaceuticals, biology, chemistry, and physics. These downstream datasets include 678 binary classification tasks and 19 regression tasks. For more detailed information about the benchmark datasets, please refer to Appendix A. We partition each benchmark dataset into train, validation, and test sets in an 8:1:1 ratio. For all datasets except QM9, we employ scaffold splitting and report the mean and standard deviation of results from three random seeds for each benchmark. Scaffold splitting is a more challenging and realistic data partitioning method (Ramsundar et al., 2019). For the QM9 dataset, we follow the approach used in most prior work (Wang et al., 2022; Fang et al., 2023) and use random splitting.

**Pre-training:** We train MoleSG for 90k iterations using the AdamW optimizer with a base learning rate of 1e-3. We set the masking ratio at 25% for the graph and 15% for SMILES. The details of the mask-ratio experiments for the two modalities are given in Appendix C.

**Downstream:** We set a maximum of 150 training epochs, with early stopping applied when the best value on the validation set has not improved for more than 20 epochs. We use the AdamW optimizer with a base learning rate of 1e-3 and a warmup factor of 0.1 for the first 30 epochs.

**Competitors:** We compare MoleSG with both supervised (training from scratch) baselines and pre-trained baselines. Supervised methods include MPNN (Gilmer et al., 2017), DMPNN (Yang et al., 2019), CMPNN (Song et al., 2020), and CoMPT (Chen et al., 2021). Pre-training methods include N-gram (Liu et al., 2019a), PretrainGNN (Hu et al., 2019), MGSSL (Zhang et al., 2021), GROVER (Rong et al., 2020), GraphMVP (Liu et al., 2021), MolCLR (Wang et al., 2022), GEM (Fang et al., 2022), DVMP (Zhu et al., 2021), KANO (Fang et al., 2023), and Uni-mol (Zhou et al., 2023). The specific configurations for these competitors can be found in Appendix B. Additionally, for a fair comparison, we implement new versions of MolCLR and DVMP by replacing their original encoders with the same networks we use, denoted as MolCLR$_{CoMPT}$ and DVMP$_{MoleSG}$. We also utilize our non-overlapping masking strategy in DVMP$_{MoleSG}$.

### 4.2 Results of Molecular Property Prediction

Table 1 presents the test results on the classification tasks. It can be observed that MoleSG consistently outperforms the other methods across all eight datasets, demonstrating its effectiveness. It is worth noting that although the ToxCast benchmark, with 617 binary classification tasks, is challenging, our method still performs better than the current SOTA method KANO.

Table 1: Performance of different models on eight classification benchmarks in physiology and biophysics. The mean and standard deviation of ROC-AUC (%) from three independent runs are reported. (Higher values indicate better performance.)
| Category | Physiology | Biophysics | |----------|------------|------------| | Dataset | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | MUV | HIV | | Molecules| 2039 | 7831 | 8575 | 1427 | 1478 | 1513 | 93807 | 41127 | | Tasks | 1 | 12 | 617 | 27 | 2 | 1 | 17 | 1 | | MPNN | 91.3±4.1 | 80.8±2.4 | 69.1±3.0 | 59.5±3.0 | 87.9±5.4 | 81.5±1.0 | 75.7±1.3 | 77.0±1.4 | | DMPNN | 91.9±3.0 | 75.9±0.7 | 63.7±0.2 | 57.0±0.7 | 90.6±0.6 | 85.2±0.6 | 78.6±1.4 | 77.1±0.5 | | CMPNN | 92.7±1.7 | 80.1±1.6 | 70.8±1.3 | 61.6±0.3 | 89.8±0.8 | 86.7±0.2 | 79.0±2.0 | 78.2±2.2 | | CoMPT | 96.1±0.4 | 84.5±0.7 | 72.2±0.8 | 66.1±0.9 | 97.3±2.5 | 94.1±3.6 | 82.6±1.6 | 86.4±1.2 | | N-Gram | 91.2±0.3 | 76.9±2.7 | - | 63.2±0.5 | 87.5±2.7 | 79.1±1.3 | 76.9±0.7 | 78.7±0.4 | | PretrainGNN | 70.8±1.5 | 78.7±0.4 | 65.7±0.6 | 62.7±0.8 | 72.6±1.5 | 84.5±0.7 | 81.3±2.1 | 79.9±0.7 | | MGSSL | 70.5±1.1 | 76.4±0.4 | 64.1±0.7 | 61.8±0.8 | 80.7±2.1 | 79.7±0.8 | 78.7±1.5 | 79.5±1.1 | | GEM | 88.8±0.4 | 78.1±0.4 | 68.6±0.2 | 63.2±1.5 | 90.3±0.7 | 87.9±1.1 | 75.3±1.5 | 81.3±0.3 | | GROVER | 86.8±2.2 | 80.3±2.0 | 56.8±3.4 | 61.2±2.5 | 70.3±13.7 | 82.4±3.6 | 67.3±1.8 | 68.2±1.1 | | GraphMVP | 72.4±1.6 | 75.9±0.5 | 63.1±0.4 | 63.9±1.2 | 79.1±2.8 | 81.2±0.9 | 77.7±0.6 | 77.0±1.2 | | Uni-mol | 72.9±0.6 | 79.6±0.5 | 69.6±0.1 | 65.9±1.3 | 91.9±1.8 | 85.7±0.2 | 82.1±1.3 | 80.8±0.3 | | DVMP | 77.8±0.3 | 79.1±0.4 | - | 69.8±0.6 | 95.6±0.7 | 89.4±0.8 | - | 81.4±0.4 | | DVMPMoleSG | 80.9±2.1 | 84.4±1.2 | 73.3±0.9 | 66.9±1.2 | 98.4±2.0 | 93.5±2.8 | 80.9±2.1 | 87.6±1.8 | | MolCLR | 73.3±1.0 | 74.1±5.3 | 65.9±2.1 | 61.2±3.6 | 89.8±2.7 | 82.8±0.7 | 78.9±2.3 | 77.4±0.6 | | MolCLRCoMPT | 97.2±0.2 | 82.4±1.8 | 72.7±0.5 | 57.1±8.7 | 77.0±14.5 | 85.5±0.9 | 75.8±15.0 | 81.8±2.2 | | KANO | 96.0±1.6 | 83.7±1.3 | 73.2±1.6 | 65.2±0.8 | 94.4±0.3 | 93.1±2.1 | 83.7±2.3 | 85.1±2.2 | | MoleSG | 97.9±0.3 | 85.0±1.2 | 74.2±0.5 | 70.0±0.2 | 99.1±0.9 | 95.1±2.1 | 85.1±0.8 | 87.7±1.9 | Table 2: Performance of different models on six regression benchmarks in physical chemistry and quantum mechanics. The mean and standard deviation of root mean square error (RMSE) (for ESOL, FreeSolv, and Lipophilicity) or mean absolute error (MAE) (for QM7, QM8, and QM9) from three independent runs are reported. (Lower values indicate better performance.) 
| Category | Physical chemistry | Quantum mechanics |
|----------|--------------------|-------------------|
| Dataset | ESOL | FreeSolv | Lipophilicity | QM7 | QM8 | QM9 |
| Molecules| 1128 | 642 | 4200 | 6830 | 21786 | 133885 |
| Tasks | 1 | 1 | 1 | 1 | 12 | 3 |
| MPNN | 1.167±0.043 | 1.621±0.952 | 0.672±0.051 | 111.4±0.9 | 0.0148±0.001 | 0.00522±0.00003 |
| DMPNN | 1.050±0.008 | 1.673±0.082 | 0.683±0.016 | 103.5±8.6 | 0.0156±0.001 | 0.00514±0.00001 |
| CMPNN | 0.798±0.112 | 1.570±0.442 | 0.614±0.029 | 75.1±3.1 | 0.0153±0.002 | 0.00405±0.00002 |
| CoMPT | 0.643±0.051 | 0.970±0.207 | 0.572±0.058 | 32.7±7.4 | 0.0120±0.001 | 0.00353±0.00067 |
| N-Gram | 1.100±0.030 | 2.510±0.191 | 0.880±0.121 | 125.6±1.5 | 0.0320±0.003 | 0.00964±0.00031 |
| PretrainGNN | 1.100±0.006 | 2.764±0.002 | 0.739±0.003 | 113.2±0.6 | 0.0215±0.001 | 0.00992±0.00004 |
| GEM | 0.813±0.028 | 1.748±0.114 | 0.674±0.022 | 60.0±2.7 | 0.0163±0.001 | 0.00562±0.00007 |
| GROVER | 1.423±0.288 | 2.947±0.615 | 0.823±0.010 | 91.3±1.9 | 0.0182±0.001 | 0.00719±0.00208 |
| Uni-mol | 0.788±0.029 | 1.480±0.048 | 0.603±0.010 | 41.8±0.2 | 0.0156±0.000 | - |
| DVMP | 0.817±0.024 | 1.952±0.061 | 0.653±0.002 | 74.4±1.2 | 0.0171±0.004 | - |
| DVMPMoleSG | 0.669±0.114 | 0.942±0.110 | 0.594±0.018 | 30.2±3.0 | 0.0123±0.001 | 0.00323±0.00006 |
| MolCLR | 1.113±0.023 | 2.301±0.247 | 0.789±0.009 | 90.9±1.7 | 0.0185±0.013 | 0.00480±0.00003 |
| MolCLRCoMPT | 0.849±0.062 | 1.135±0.163 | 0.657±0.012 | 32.7±2.8 | 0.0141±0.001 | 0.00350±0.00000 |
| KANO | 0.670±0.019 | 1.142±0.258 | 0.566±0.007 | 56.4±2.8 | 0.0123±0.000 | 0.00320±0.00001 |
| MoleSG | 0.599±0.067 | 0.932±0.131 | 0.545±0.014 | 29.6±2.9 | 0.0117±0.001 | 0.00313±0.00006 |

Table 3: Comparison of our approach with two single-modality pre-training approaches on classification tasks. The mean and standard deviation of ROC-AUC (%) over three independent runs are reported. (Higher values indicate better performance.)

| | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | MUV | HIV |
|----------------|--------|--------|---------|--------|---------|--------|--------|--------|
| SMILES scratch | 63.6±4.3 | 75.5±0.5 | 64.2±2.5 | 54.0±2.4 | 88.1±6.3 | 79.2±6.6 | 63.6±4.3 | 72.7±3.5 |
| SMILES pre-train | 61.5±4.9 | 77.6±2.5 | 66.8±0.9 | 55.0±3.1 | 93.3±2.8 | 83.8±0.9 | 61.5±4.9 | 75.1±2.5 |
| Ours SMILES | **65.3±3.1** | **77.9±2.5** | **67.0±0.9** | **59.6±3.8** | **94.3±2.0** | **85.3±1.1** | **65.3±3.1** | **77.3±0.7** |

| | BBBP | Tox21 | ToxCast |
|-----------------|----------|----------|----------|
| Graph scratch | 96.1±0.4 | 84.5±0.7 | 72.2±0.8 |
| Graph pre-train | 96.8±1.8 | 84.2±0.1 | 72.6±1.0 |
| Ours graph | **97.9±0.3** | **85.0±1.2** | **74.2±0.5** |

Table 4: Comparison of our approach with two single-modality pre-training approaches on regression tasks. The mean and standard deviation of RMSE or MAE over three independent runs are reported. (Lower values indicate better performance.)
| | ESOL | FreeSolv | Lipophilicity | QM7 | QM8 | QM9 |
|----------------|------------|------------|---------------|---------|---------|---------|
| SMILES scratch | 0.946±0.226 | 2.581±0.286 | 1.028±0.030 | 160.2±6.8 | 0.0146±0.001 | 0.01017±0.00045 |
| SMILES pre-train | 1.030±0.336 | 1.942±0.450 | 1.034±0.015 | 159.3±5.7 | 0.0141±0.001 | 0.01080±0.00010 |
| Ours SMILES | **0.873±0.172** | **1.889±0.590** | **0.964±0.036** | **155.7±3.9** | **0.0139±0.001** | **0.00973±0.00059** |

| | ESOL | FreeSolv | Lipophilicity |
|----------------|---------------|-----------------|------------|
| Graph scratch | 0.643±0.051 | 0.970±0.207 | 0.572±0.058 |
| Graph pre-train | 0.635±0.104 | 0.939±0.225 | 0.585±0.031 |
| Ours graph | **0.599±0.067** | **0.932±0.131** | **0.545±0.014** |

of the two modalities in MoleSG contributes to outstanding results, surpassing methods injecting additional 3D information.

Table 2 shows the test results on regression tasks. We can observe that MoleSG achieves the best scores among both supervised and self-supervised pre-training models, with a relative improvement of 14.4% over KANO across all six regression tasks. MoleSG greatly benefits tasks with limited label information, achieving an 18.4% improvement over KANO on the small dataset FreeSolv, which contains only 642 labeled molecules. Moreover, it is worth noting that our method still outperforms MolCLRCoMPT, a version of the typical contrastive learning method MolCLR with the same encoder as ours, verifying the superiority of our method. We also compare with another contrastive learning competitor, DVMPMoleSG, which utilizes the same encoders as ours. In addition, both MolCLRCoMPT and DVMPMoleSG outperform their original counterparts MolCLR and DVMP in most tasks, demonstrating the effectiveness of the corresponding strategies proposed in this paper.

### 4.3 Ablation Experiments

#### 4.3.1 Single-modality vs. Multi-modality

To further reveal the superiority of our method, we compare our multi-modality pre-training with single-modality pre-training. The results are shown in Table 3 and Table 4. Our method achieves the best performance on all downstream tasks. Moreover, it is worth noting that single-modality pre-training may cause performance degradation. However, by fully leveraging the complementary information among different modalities, our method improves performance on all downstream tasks, showing more potential for practical applications. We present visualization results of our method's feature extraction capability in Appendix D.

#### 4.3.2 Overlap vs. Non-overlap

To validate whether our non-overlapping masking strategy benefits pre-training, we conduct experiments with different overlap ratios on all downstream tasks. We define the overlap ratio as a metric measuring the proportion of jointly masked atoms in both modality inputs. We conduct experiments at overlap ratios of 0%, 25%, 50%, 75%, and 100% across all benchmarks, where our non-overlapping masking strategy is equivalent to setting the overlap ratio to 0. The experimental results shown in Figure 4 indicate that performance on downstream tasks is best when the overlap ratio is 0.
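For intuition, here is a minimal Python sketch of how masks at a given overlap ratio could be sampled. The function name, the 25% default mask ratio, and the atom-index representation are illustrative assumptions, not details taken from our implementation.

```python
import random

def sample_modality_masks(num_atoms, mask_ratio=0.25, overlap_ratio=0.0):
    # Draw the atom indices to mask in the SMILES input and in the graph input.
    # overlap_ratio = 0 gives disjoint masks (the non-overlapping strategy);
    # overlap_ratio = 1 masks exactly the same atoms in both modalities.
    # Assumes mask_ratio <= 0.5 so that two disjoint masks fit.
    n_mask = max(1, int(mask_ratio * num_atoms))
    smiles_mask = random.sample(range(num_atoms), n_mask)
    n_shared = round(overlap_ratio * n_mask)
    shared = random.sample(smiles_mask, n_shared)
    unmasked = [i for i in range(num_atoms) if i not in smiles_mask]
    graph_mask = shared + random.sample(unmasked, n_mask - n_shared)
    return smiles_mask, graph_mask
```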
#### 4.3.3 With vs. Without Backbone

As analyzed above, fine-tuning both the encoder and the backbone may cause suboptimal performance due to inconsistent distributions. Therefore, we conduct an experiment to validate this. Specifically, Section 4.3.1 has shown that the graph encoder has better performance than the SMILES encoder. Therefore, we only consider two combinations in this section: the former is fine-tuning a single graph encoder, and the latter is fine-tuning both the graph encoder and the backbone. We perform experiments on all benchmarks, and the results are shown in Table 5 and Table 6. The results show that using only the graph encoder achieves higher performance in all tasks.

#### Table 5: Comparison of results on classification tasks with and without the backbone network. The mean and standard deviation of ROC-AUC (%) from three independent runs are reported.

| | BBBP | Tox21 | ToxCast | SIDER | ClinTox | BACE | MUV | HIV |
|------------------|----------|----------|----------|----------|----------|----------|----------|----------|
| Graph encoder+backbone | 97.2±0.6 | 84.8±1.8 | 73.6±0.9 | 65.6±0.4 | 98.8±0.6 | 89.7±5.2 | 81.9±1.9 | 85.8±1.4 |
| Graph encoder | **97.9±0.3** | **85.0±1.2** | **74.2±0.5** | **70.0±0.2** | **99.1±0.9** | **95.1±2.1** | **85.1±0.8** | **87.7±1.9** |

#### Table 6: Comparison of results on regression tasks with and without the backbone network. The mean and standard deviation of RMSE (or MAE) from three independent runs are reported.

| | ESOL | FreeSolv | Lipophilicity | QM7 | QM8 | QM9 |
|------------------|----------|----------|--------------|----------|----------|----------|
| Graph encoder+backbone | 0.661±0.011 | 0.988±0.250 | 0.560±0.017 | 31.9±3.8 | 0.0119±0.001 | 0.00353±0.00015 |
| Graph encoder | **0.599±0.067** | **0.932±0.131** | **0.545±0.014** | **29.6±2.9** | **0.0117±0.001** | **0.00313±0.00006** |

## 5 CONCLUSION

In this study, we address the challenges of learning fine-grained information from two complementary modalities: SMILES and graph. To better capture rich molecular features from the interaction between these two modalities, we design a simple and efficient multi-modality pre-training framework called MoleSG, which utilizes a unified feature processing network to fuse both modalities. In addition, we propose a non-overlapping masking strategy to facilitate information exchange between the two modalities. Extensive experiments on 14 downstream tasks show that our method achieves new SOTA performance. Our non-overlapping masking strategy has the potential to be used in other masked reconstruction-based multi-modality pre-training studies.

REFERENCES

Lorenz C Blum and Jean-Louis Reymond. 970 million druglike small molecules for virtual screening in the chemical universe database gdb-13. *Journal of the American Chemical Society*, 131(25):8732–8733, 2009.

Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. Guacamol: benchmarking models for de novo molecular design. *Journal of chemical information and modeling*, 59(3):1096–1108, 2019.

Jianwen Chen, Shuangjia Zheng, Ying Song, Jiahua Rao, and Yuedong Yang. Learning attributed graph representations with communicative message passing transformer. *arXiv preprint arXiv:2107.08773*, 2021.

Seyone Chithrananda, Gabriel Grand, and Bharath Ramsundar. Chemberta: large-scale self-supervised pretraining for molecular property prediction. *arXiv preprint arXiv:2010.09885*, 2020.

Laurianne David, Amol Thakkar, Rocío Mercado, and Ola Engkvist. Molecular representations in ai-driven drug discovery: a review and practical guide. *Journal of Cheminformatics*, 12(1):1–22, 2020.

John S Delaney. Esol: estimating aqueous solubility directly from molecular structure. *Journal of chemical information and computer sciences*, 44(3):1000–1005, 2004.
David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. *Advances in neural information processing systems*, 28, 2015. Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. *Nature Machine Intelligence*, 4(2):127–134, 2022. Yin Fang, Qiang Zhang, Ningyu Zhang, Zhuo Chen, Xiang Zhuang, Xin Shao, Xiaohui Fan, and Huajun Chen. Knowledge graph-enhanced molecular contrastive learning with functional prompt. *Nature Machine Intelligence*, pp. 1–12, 2023. Anna Gaulton, Louisa J Bellis, A Patricia Bento, Jon Chambers, Mark Davies, Anne Hersey, Yvonne Light, Shaun McGlinchey, David Michalovich, Bissan Al-Lazikani, et al. Chembl: a large-scale bioactivity database for drug discovery. *Nucleic acids research*, 40(D1):D1100–D1107, 2012. Kaitlyn M Gayvert, Neel S Madhukar, and Olivier Elemento. A data-driven approach to predicting successes and failures of clinical trials. *Cell chemical biology*, 23(10):1294–1301, 2016. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *International conference on machine learning*, pp. 1263–1272. PMLR, 2017. Lowell H Hall, Brian Mohney, and Lemont B Kier. The electrotopological state: structure information at the atomic level for molecular graphs. *Journal of chemical information and computer sciences*, 31(1):76–82, 1991. Thomas Hartung. Toxicology for the twenty-first century. *Nature*, 460(7252):208–212, 2009. Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. Graphmae: Self-supervised masked graph autoencoders. In *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, pp. 594–604, 2022. Fan Hu, Yishen Hu, Weihong Zhang, Huazhen Huang, Yi Pan, and Peng Yin. A multimodal protein representation framework for quantifying transferability across biochemical downstream tasks. *Advanced Science*, pp. 2301223, 2023. Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. *arXiv preprint arXiv:1905.12265*, 2019.
P895PSh41Z
* Although it is repeatedly mentioned that one of the primary motivations for avoiding model-based algorithms is that they struggle with stochastic environments, all the experiments are conducted on deterministic tasks.
Relaxed State-Adversarial Offline Reinforcement Learning: A Leap Towards Robust Model-Free Policies from Historical Data

Anonymous authors
Paper under double-blind review

Abstract

Offline reinforcement learning (RL) targets the development of top-tier policies from historical data, eliminating the need for environmental interactions. While many prior studies have focused on model-based RL strategies, we present Relaxed State-Adversarial Offline RL (RAORL), an innovative model-free offline RL solution. RAORL sidesteps model uncertainty issues by framing the problem within a state-adversarial context, eliminating the need for explicit environmental modeling. Our method guarantees the policy's robustness and its capability to adapt to varying transition dynamics. Anchored in robust theoretical foundations, RAORL promises performance guarantees and presents a conservative value function that reflects average-case outcomes over an uncertainty set. Empirical evaluations on established offline RL benchmarks indicate that RAORL not only meets but frequently surpasses the performance of state-of-the-art methods.

1 Introduction

Reinforcement Learning (RL) is foundational for tackling sequential decision-making tasks. While online RL flourishes in simulations via direct environment engagement (Mnih et al., 2015; Silver et al., 2017), its transition to real-world scenarios often encounters logistical and financial problems during data collection. This becomes notably evident in critical areas such as healthcare and robotics. Conversely, offline RL presents a viable solution, using existing datasets to train policies without ongoing environmental interaction (Levine et al., 2020; Lange et al., 2012).

In RL, environmental interactions are vital for policies to investigate diverse states and assess action outcomes. However, in offline RL, where data is pre-gathered, there is a glaring obstacle: the dataset might not wholly represent the environment's intricacies. This leads to a potential discrepancy between the dataset's transition probabilities, $P_B(s'|s,a)$, and the true probabilities, $P(s'|s,a)$. Such deviations parallel the issues in robust RL, where there can be differences between simulated and real-world transition probabilities (Fujimoto et al., 2019). Given these parallels, we employ robust RL techniques to mitigate offline RL's intrinsic challenges.

Robust RL approaches address deviations in state transition probabilities by finding a policy that performs best in the worst-case environment over a set of possible MDPs. It is natural to merge robust RL with model-based RL to tackle offline RL issues. Specifically, model-based RL methods learn an environment model from the dataset, allowing policy interaction during training. Despite recent advancements in model-based offline RL incorporating pessimistic dynamics models to handle model uncertainties (Rigter et al., 2022), these methods still face challenges when simulating stochastic environments (Antonoglou et al., 2022; Ozair et al., 2021). They can also introduce model errors and demand intricate hyperparameter tuning, raising questions about the reliability of synthetic samples (Van Hasselt et al., 2019; Lu et al., 2022; Yu et al., 2021). Conversely, applying online robust RL strategies to offline situations using model-free methods is complex. State perturbations might lead to out-of-distribution observations, resulting in exaggerated value function overestimations (Yang et al., 2022).
This raises a critical question: Can robust RL principles be effectively embedded within offline RL using a model-free approach? We present the Relaxed State-Adversarial Offline Reinforcement Learning (RAORL) algorithm, a novel approach to model-free offline RL, to answer this question. RAORL formulates the policy-learning challenge as a state-adversarial optimization problem (Lien et al., 2023), underpinned by a reward correction term that portrays an average-case scenario across an uncertainty set. Utilizing this relaxed state-adversarial optimization paradigm allows us to adeptly tackle the robust policy challenge, ensuring an offline, tractable optimization without online interactions. Accordingly, RAORL stands out for its ability to: 1) measure and bridge the performance gap in real-world applications without online engagement, 2) reduce dependency on precise transition model learning, and 3) integrate seamlessly with established model-free offline RL methods such as TD3+BC (Fujimoto & Gu, 2021) and ReBrac (Tarasov et al., 2023).

Through the D4RL benchmark (Fu et al., 2020), our evaluations validate RAORL's efficacy. It consistently surpasses baseline methods in various continuous-control tasks. With its empirical effectiveness and theoretical foundation, RAORL emerges as a top contender for risk-sensitive applications. We will release our code to the public upon the paper's acceptance.

2 RELATED WORK

2.1 MODEL-BASED OFFLINE REINFORCEMENT LEARNING

Model-based RL methods learn an environment model and subsequently generate synthetic data to optimize a policy. When training on synthetic data, they strive to enhance generalization (Ball et al., 2021; Wang et al., 2021). Since synthetic data may not be trustworthy, model-based methods typically employ uncertainty measures to regulate their models (Yang et al., 2021; Yu et al., 2020). For instance, MOReL (Kidambi et al., 2020) employs an ensemble of dynamics models to measure model uncertainty, yet the reliability of these estimates remains questionable. Meanwhile, COMBO (Yu et al., 2021), a method akin to CQL (Kumar et al., 2020), learns a Gaussian distribution over upcoming states and rewards via maximum log-likelihood. While most model-based offline RL approaches leverage maximum likelihood estimates (Argenson & Dulac-Arnold, 2021; Matsushima et al., 2021), alternative strategies exist. They focus on model learning tailored for offline policy optimization, emphasizing accuracy under policy-induced state-action distributions (Lee et al., 2020; Rajeswaran et al., 2020; Hishinuma & Senda, 2021).

A study closely aligned with ours is that of Rigter et al. (2022), which delves into the maximin formulation of offline RL. While their methodology is model-based, it unavoidably inherits the limitations of such a formulation. In particular, model-based methods grapple with challenges in modeling stochastic environments, as emphasized by Antonoglou et al. (2022) and Ozair et al. (2021). The potential for additional model errors, intricate hyperparameter tuning, and concerns about synthetic sample authenticity further complicate policy training (Van Hasselt et al., 2019; Lu et al., 2022; Yu et al., 2021). In contrast, our approach generates pessimistic synthetic transitions without relying on environment models. This model-free perspective offers a distinctive avenue for offline RL, sidestepping the inherent challenges and complexities associated with model-based methods.
2.2 MODEL-FREE OFFLINE REINFORCEMENT LEARNING

Model-free offline RL is unique because policies do not interact with environments during training. This domain has spawned several approaches, including policy constraint methods, importance sampling, regularization, and uncertainty estimation. Specifically, policy constraint techniques ensure that the learned policy closely aligns with the behavior policy underlying the dataset. They fall into two groups, direct (Fujimoto et al., 2019; Kostrikov et al., 2021; Wu et al., 2020) and implicit (Kumar et al., 2019; Fujimoto & Gu, 2021; Wang et al., 2020), depending on whether they use a model to represent the behavior policy. Importance sampling methods in offline RL (Nachum et al., 2019; Zhang et al., 2020) re-weight the state-action distribution in the offline dataset. Regularization techniques (Kumar et al., 2020; Yu et al., 2021; Singh et al., 2020) refine the learned value function by introducing penalty terms. Lastly, uncertainty-based methods (Agarwal et al., 2020) balance conservative and off-policy RL techniques based on the model's confidence level. Most prevailing strategies focus on identifying out-of-distribution actions (Yang et al., 2022). However, these models tend to be overly conservative, resulting in a pronounced gap in the generalization capability of RL. In contrast, our proposed RAORL methodology emphasizes model-free training under transition uncertainty. This approach integrates seamlessly into existing methods and paves the way for superior generalization capabilities.

2.3 ROBUST REINFORCEMENT LEARNING

Robust MDP techniques aim to optimize rewards, especially under worst-case conditions where testing environments differ from training ones (Nilim & El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013). As dimensionality rises, the intricacy of robust MDPs intensifies due to the expanding search space. To address this issue, Tamar et al. (2014) pioneered a dynamic programming approximation, advancing the scalability of the robust MDP model. This was further enhanced by Roy et al. (2017) for nonlinear predictions, ensuring convergence to a local minimum. Later, Wang & Zou (2021) and Badrinath & Kalathil (2021) investigated convergence rates when integrating function approximations under specific conditions. Derman et al. (2021) showed that regularized MDPs, designed to manage uncertain rewards, fall within the domain of robust MDPs; their focus on regularized MDPs was motivated by the lower computational demand compared to conventional robust MDP methods. Additionally, Clement & Kroer (2021) crafted efficient gradient-descent updates to tackle distributionally robust MDPs, improving convergence speed. However, despite these advancements, current environment models remain restrictive for real-world applications.

Our methodology resembles relaxed state-adversarial policy optimization (RAPPO) (Lien et al., 2023), a robust RL method for online scenarios. We adapt RAPPO to offline contexts and present a novel formulation to account for deviations between offline datasets and their actual environments.
3 PRELIMINARIES

3.1 NOMINAL MDPs AND OFFLINE DATASET MDPs

A nominal Markov Decision Process (MDP) is defined by the tuple \( M = (S, A, P_0, R, \rho_0, \gamma) \), where \( S, A \) represent the state and action spaces, the reward function \( R(s, a) \) lies within the interval \([-R_{\text{max}}, R_{\text{max}}]\), \( P_0(s'|s, a) \) denotes the transition function, \( \rho_0 \) is the initial state distribution, and \( \gamma \in (0, 1) \) is the discount factor. We consider Markovian policies, \( \pi \in \Pi \), which map each state to a distribution over actions. The value function, \( V^{\pi}_M(s) = \mathbb{E}_{a_t \sim \pi, s_t \sim P_0} \left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid s_0 = s \right] \), represents the expected discounted return, and the expected return of a policy starting from the initial state distribution can be written as \( J_{\rho_0}(\pi, P_0) = \sum_{s \in S} \rho_0(s)V^{\pi}_M(s) \). In addition, the state-action value function is defined as \( Q^{\pi}_M(s, a) = R(s, a) + \gamma \sum_{s'} P_0(s'|s, a) \, \mathbb{E}_{a' \sim \pi} \left[ Q^{\pi}_M(s', a') \right] \).

Within offline RL, the objective centers around optimizing the policy via a static dataset \( B = \{(s_i, a_i, r_i, s'_i)\}_{i=1}^{|B|} \), which is sourced from a nominal MDP. Given the nominal MDP \( M \) and initial values \( Q(s, a) \), we define the MDP induced by the offline dataset, denoted as \( M_B = (S \cup \{s_{\text{term}}\}, A, P_B, R, \rho_0^B, \gamma) \). This MDP retains the original state and action spaces of \( M \) but includes an extra terminal state, \( s_{\text{term}} \). In this context, the transition probabilities for \( M_B \) are given by \( P_B(s'|s, a) = \frac{N(s, a, s')}{\sum_{s''} N(s, a, s'')} \), where \( N(s, a, s') \) is the number of occurrences of the tuple \((s, a, s')\) within \( B \). If a particular \((s, a)\) is absent from the dataset, i.e., \( N(s, a, s') = 0 \) for all \( s' \), then \( P_B(s_{\text{term}}|s, a) = 1 \). In this case, \( r(s, a, s_{\text{term}}) \) is set to the preliminary value \( Q(s, a) \) (Fujimoto et al., 2019).

3.2 ROBUST REINFORCEMENT LEARNING

Robust RL addresses the challenges faced in traditional RL when the environment is uncertain. Unlike standard RL, robust RL aims to ensure good performance even in the worst-case scenario. This is achieved by learning robust policies resilient to variations in the environment's dynamics. The fundamental concept behind robust RL is the uncertainty set \( \mathcal{U} \), which encompasses all possible transition dynamics the agent might encounter. By optimizing the worst-case performance over \( \mathcal{U} \), robust RL ensures that the policy will perform adequately even if the environment behaves adversarially within the bounds defined by \( \mathcal{U} \). Mathematically, the optimization problem for robust RL can be defined as \( \pi^\star = \arg \max_{\pi} \min_{P \in \mathcal{U}} J_{\rho}(\pi, P) \), where \( \pi^\star \) is the robust optimal policy that maximizes the minimum value over all possible environments in \( \mathcal{U} \).

4 METHOD

The challenges of offline RL arise because pre-gathered datasets may not fully capture all environmental dynamics. Consequently, there exists a misalignment between the transition probabilities observed in the dataset, \( P_B(s'|s, a) \), and the true transition probabilities, \( P(s'|s, a) \). This situation bears resemblance to the dilemmas faced in robust RL, wherein transition probabilities diverge between simulated and real-world settings.
Given these analogous challenges, we advocate for the incorporation of robust RL methodologies to address the issues in offline RL.

Figure 1: The diagram showcases the link between the risk-aware uncertainty set, \( U_r \), which envelopes the transition kernel \( P_B \) of the offline dataset, and the comprehensive uncertainty set \( U \) encompassing all viable transition kernels. Clearly, \( U_r \) is a subset of \( U \).

The subsequent sections navigate the challenges of offline RL. Initially, we define the average deviation between every feasible real-world transition kernel \( P \in U \) and the transition kernel \( P_B \) induced by the offline dataset, and then derive a performance lower bound for policies learned on the offline dataset in Section 4.1. Following this, Section 4.2 demonstrates how adopting a risk-aware policy, one that optimizes for the average scenario within an uncertainty set \( U_r \) determined from \( P_B \), can improve this performance lower bound, although the predefined set may not perfectly match the real-world uncertainties. The relations between \( U \), \( U_r \), and \( P_B \) are illustrated in Figure 1. In Section 4.3, we detail the use of the relaxed state-adversarial method to further raise the performance lower bound. This method aids in delineating the uncertainty set and optimizes the policy's average performance in a model-free context.

4.1 OFFLINE DATASET DEVIATION

To capture the uncertainty gap between reality and offline datasets, we consider the expectation of the offline dataset deviation in the following definition.

**Definition 1** (Expectation of Offline Dataset Deviation). Given an offline dataset transition kernel \( P_B \), we introduce a universal uncertainty set \( U \) to account for all feasible transition kernels in the real environment. This is mathematically captured by:
\[
\mathbb{E}_{P_0 \sim U} [\mathbb{E}_{s,a} D_{TV}(P_0, P_B)] \leq \beta,
\]
where \( \beta \geq 0 \) and \( D_{TV} \) denotes the total variation distance.

Consider the offline dataset MDP \( M_B \) defined as \( (S \cup \{ s_{term} \}, A, P_B, R, \rho^B_0, \gamma) \) from which any policy \( \pi \) is derived. Let \( V \) represent the set of unknown state-action pairs, such that \( (s, a) \in V \) if and only if \( (s, a) \) is not present in the offline dataset. The term \( T^\pi_V \) denotes the time taken to encounter these unknown state-action pairs. With these definitions in place, we can present the offline dataset reality gap as follows:

**Lemma 1** (Reality Gap: Performance Gap between Offline Dataset and Reality (the Universal Uncertainty Set)). The value of any policy \( \pi \) learned from \( P_B \), measured on the universal uncertainty set \( U \) and on the induced offline dataset transition kernel \( P_B \), satisfies:
\[
|J_{\rho^B_0}(\pi, P_B) - \mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi, P_0)]| \leq \frac{2R_{max}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[D_{TV}(\rho_0, \rho^B_0)] + \frac{2\gamma R_{max}}{(1 - \gamma)^2} \beta + \frac{2R_{max}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^\pi_V}]]. \quad (1)
\]

The detailed proof can be found in Appendix A.1. Using Lemma 1, we can establish a lower bound on the performance of policies learned from the offline dataset in relation to the optimal policy \( \pi^* \):

**Theorem 1** (Offline Dataset Performance Lower Bound).
For any $\epsilon_\pi$ sub-optimal policy $\pi$, we have:

$$\mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi^*, P_0)] - \mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi, P_0)] \leq \epsilon_\pi + \frac{4R_{\text{max}}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[D_{\text{TV}}(\rho_0, \rho_0^B)] + \frac{4\gamma R_{\text{max}}}{(1 - \gamma)^2} \beta + \frac{2R_{\text{max}}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^\pi_V}]] + \frac{2R_{\text{max}}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^{\pi^*}_V}]]. \quad (2)$$

The proof is in Appendix A.1. Theorem 1 highlights several challenges inherent in offline RL algorithms. First, the optimization error term, $\epsilon_\pi$, can be minimized by allocating more computational resources. Second, the distribution shift term $\beta$ represents the uncertainty in real-world dynamics. The other two terms are indicative of the offline dataset's comprehensiveness. Following these insights, we discuss how a risk-aware policy can improve the performance lower bound.

4.2 Risk-Aware Policy

Theorem 1 points out the challenges of applying offline dataset insights to real-world dynamics. Contrary to robust RL strategies, which primarily target the worst-case scenario, our approach focuses on a risk-aware policy concerning the average case. This policy operates over a designated risk-aware uncertainty set $U_r$ that satisfies $\mathbb{E}_{P \sim U_r}[\mathbb{E}_{s,a} D_{\text{TV}}(P, P_B)] \leq \beta_r$. It can be formulated as $\pi_r = \arg\max_\pi \mathbb{E}_{P \sim U_r} J_{\rho}(\pi, P)$. The subsequent Theorem 2 proves that leveraging a risk-aware policy improves the policy's performance lower bound.

Theorem 2 (Risk-Aware Policy Performance Lower Bound). For an $\epsilon_{\pi_r}$ sub-optimal risk-aware policy, we have:

$$\mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi^*, P_0)] - \mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi_r, P_0)] \leq \epsilon_{\pi_r} + \frac{4R_{\text{max}}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[D_{\text{TV}}(\rho_0, \rho_0^B)] + \frac{4\gamma R_{\text{max}}}{(1 - \gamma)^2} \left(\beta - \frac{1}{2} p_r \beta_r\right) + \frac{2R_{\text{max}}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^{\pi_r}_V}]] + \frac{2R_{\text{max}}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^{\pi^*}_V}]]. \quad (3)$$

The detailed proof is available in Appendix A.2. In Theorem 2, the component $p_r \beta_r$ signifies the reduced uncertainty associated with the risk-aware policy $\pi_r$. In essence, $p_r \beta_r$ captures the fraction of the total uncertainty $\beta$ addressed by this risk-aware policy. As a result, the term $(\beta - \frac{1}{2} p_r \beta_r)$ reflects the remaining uncertainty after implementing the risk-aware policy. Even though model-based strategies are prevalent in robust offline RL, mainly due to concerns related to out-of-distribution samples, we elaborate on a distinct model-free, risk-aware policy tailored for offline datasets in the subsequent sections.

4.3 Model-Free Risk-Aware Policy Implementation

To obtain a more resilient model, we use the relaxed state-adversarial approach to account for uncertainties and potential adversarial situations in an offline dataset.
Our decision to employ the surrogate perturbation method is primarily driven by two reasons: (1) it facilitates the generation of adversarial examples without necessitating an auxiliary estimated model, and (2) it is inherently suited to stochastic environments, a setting where model-based methods fall short (Antonoglou et al., 2022; Ozair et al., 2021). In essence, state-adversarial perturbation shifts current states towards neighboring states with minimal values. This shift is characterized by a state-adversarial transition kernel that bridges the standard MDP with the adversarial MDP. For clarity, let us define the $\sigma$-neighborhood of any state $s \in S$ as $N_\sigma(s) = \{s' \mid d(s, s') \leq \sigma\}$, where $d(s, s')$ is a distance metric. In our work, we employ the $L_\infty$-norm, denoted as $\| \cdot \|_\infty$.

Definition 2 (Matrix of State Perturbations for Offline Dataset). Consider an MDP characterized by the transition kernel $P_B$ derived from an offline dataset, a given policy $\pi$, and a perturbation measure $\sigma \geq 0$. For every state pair $i, j \in S$, we define the matrix of state perturbations $Z^\pi_\sigma$ corresponding to $\pi$ as:
$$Z^\pi_\sigma(i, j) = \begin{cases} 1, & \text{if } j = \arg\min_{s \in N_\sigma(i)} V^\pi(s \mid P_B), \\ 0, & \text{otherwise}. \end{cases} \quad (4)$$

The matrix identifies, for each state \( i \), the neighboring state \( j \) with the lowest value \( V^\pi \), highlighting the least favorable outcome of every state. As noted by Lien et al. (2023), the arg min in Equation 4 can be efficiently approximated using the fast gradient sign method (FGSM) (Goodfellow et al., 2015) in continuous state domains. Given a value function \( V \) characterized by parameters \( \phi \), a state \( s \), and a perturbation magnitude \( \epsilon \), FGSM identifies the disturbed state \( \Gamma(s) = s - \epsilon \times \text{sign}(\nabla_s V(\phi, s)) \) with the lowest value. Here, \( \|s - \Gamma(s)\|_\infty \leq \epsilon \), and the gradient at \( s \) is derived via back-propagation. Subsequently, the state value is iteratively updated using \( V(s) = r(s, a) + \gamma V(\Gamma(s')) \). This approach eliminates the need to adjust the environment, in contrast with model-based algorithms.

**Definition 3** (Offline Dataset's State-Adversarial MDP). Given a policy \( \pi \), its associated state-adversarial MDP is characterized by the tuple \((S, A, P^\pi_\sigma, R, \mu, \gamma)\). The state-adversarial transition kernel for the offline dataset, \( P^\pi_\sigma \), is expressed as
\[
P^\pi_\sigma(\cdot \mid s, a) = [Z^\pi_\sigma]^\top P_B(\cdot \mid s, a), \quad \forall (s, a) \in S \times A. \quad (5)
\]
This transition kernel is biased towards the worst-case outcomes identified by the state perturbation matrix. Using the state-adversarial MDP \( P^\pi_\epsilon \) usually helps improve worst-case performance (Kuang et al., 2022). However, setting too high a value for \( \epsilon \) can result in overly cautious strategies (Lien et al., 2023). This emphasizes the importance of considering a spectrum of perturbation levels through the subsequent definition of the uncertainty set.

**Definition 4** (Offline Dataset's Uncertainty Set). Given a perturbation radius \( \epsilon > 0 \), the uncertainty set of \( P_B \) is defined as
\[
U^\pi_\epsilon := \{ P^\pi_\sigma : P^\pi_\sigma = [Z^\pi_\sigma]^\top P_B \text{ and } \sigma \leq \epsilon \}.
\]
This uncertainty set captures all potential transition kernels under state-adversarial perturbations within the \( \epsilon \) radius. The aim is to design a policy that remains robust against average-case scenarios within this set, which can be represented using the following relaxed state-adversarial transition kernel.

**Relaxed State-Adversarial Transition Kernel for Offline Dataset.** For given parameters \( \epsilon > 0 \) and \( \alpha \in [0, 1] \), we define the \( \alpha \)-relaxed state-adversarial transition kernel as a weighted combination of the nominal and state-adversarial transition kernels:
\[
P^{\pi,\alpha}_\epsilon(\cdot | s, a) = \alpha P_B(\cdot | s, a) + (1 - \alpha) P^\pi_\epsilon(\cdot | s, a).
\]
Such a kernel achieves a deliberate balance, rendering the policy both suitable for real-world applications and resilient to unexpected disturbances. Subsequently, we demonstrate that \( \alpha \) can be effectively interpreted as optimizing average-case scenarios within a relaxed state-adversarial transition kernel (Lien et al., 2023).

**Lemma 2** (Relaxation parameter \( \alpha \) as a prior distribution \( D \) over the uncertainty set \( U^\pi_\epsilon \)). For any distribution \( D \) over the state-adversarial uncertainty set \( U^\pi_\epsilon \), there must exist an \( \alpha \in [0, 1] \) such that
\[
\mathbb{E}_{P \sim D}[J(\pi | P)] = J(\pi | P^{\pi,\alpha}_\epsilon).
\]
The parameter \( \alpha \) is significant: its variations encapsulate distinct prior assumptions. Specifically, modulating \( \alpha \) allows us to represent a spectrum of distributions \( D \) and adapt policy training to multiple environments. Through the optimization of a relaxed state-adversarial policy, the performance lower bound is improved, as outlined in the subsequent theorem.

**Theorem 3** (Relaxed State-Adversarial Policy Performance Lower Bound). For an \( \epsilon_{\pi_{RA}} \) sub-optimal relaxed state-adversarial policy, we have
\[
\mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi^*, P_0)] - \mathbb{E}_{P_0 \sim U}[J_{\rho_0}(\pi_{RA}, P_0)] \leq \epsilon_{\pi_{RA}} + \frac{4R_{\max}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[D_{TV}(\rho_0, \rho^B_0)] + \frac{4\gamma R_{\max}}{(1 - \gamma)^2} \left(\beta - \frac{1}{2} p_{RA}(1 - \alpha)\right) + \frac{2R_{\max}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^{\pi_{RA}}_V}]] + \frac{2R_{\max}}{1 - \gamma} \mathbb{E}_{P_0 \sim U}[\mathbb{E}[\gamma^{T^{\pi^*}_V}]].
\]
The proof is in Appendix A.3. Within the framework of the state-adversarial uncertainty set, the term \( p_{RA}(1 - \alpha) \) signifies the reduction in uncertainty achieved by the risk-aware policy \( \pi_{RA} \). More explicitly, \( p_{RA}(1 - \alpha) \) captures the portion of the overarching uncertainty, \( \beta \), that is addressed by the risk-aware policy. As a result, the residual uncertainty, expressed as \( (\beta - \frac{1}{2} p_{RA}(1 - \alpha)) \), provides a measure of the uncertainty that remains even after the risk-aware policy's intervention.
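As a concrete illustration of how Definition 2's perturbation and the \( \alpha \)-relaxed target can be realized in practice, the sketch below pairs a one-step FGSM attack on the learned value function with the mixed Bellman target used in Algorithm 1 of Section 4.4. The names `q_fn` and `policy`, the batching conventions, and the treatment of termination flags are illustrative assumptions rather than the paper's actual ReBrac-based implementation.

```python
import torch

def fgsm_adversarial_state(value_fn, s_next, epsilon):
    # One FGSM step (Goodfellow et al., 2015): move each next state against the
    # gradient of the value function, staying inside an L-infinity ball of
    # radius epsilon.  This approximates argmin_{s in N_eps(s')} V^pi(s)
    # from Definition 2.
    s = s_next.clone().detach().requires_grad_(True)
    value_fn(s).sum().backward()
    return (s - epsilon * s.grad.sign()).detach()

def relaxed_bellman_target(q_fn, policy, r, s_next, done, gamma, alpha, epsilon):
    # V^pi(s) is approximated by Q(s, pi(s)); it is used both to craft the
    # adversarial neighbor and to form the alpha-relaxed target of Algorithm 1:
    #   r + gamma * (alpha * Q(s', pi(s')) + (1 - alpha) * Q(adv(s'), pi(s'))).
    s_adv = fgsm_adversarial_state(lambda s: q_fn(s, policy(s)), s_next, epsilon)
    with torch.no_grad():
        a_next = policy(s_next)
        q_mix = alpha * q_fn(s_next, a_next) + (1.0 - alpha) * q_fn(s_adv, a_next)
        return r + gamma * (1.0 - done) * q_mix  # `done` assumed a 0/1 float tensor
```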
4.4 IMPLEMENTATION DETAILS

**Algorithm 1** Relaxed State-Adversarial Offline Reinforcement Learning (RAORL)

**Require:** Offline dataset \(\{s_i, a_i, r_i, s'_i, d_i\}_{i=1}^N\), objective function \(J\), step size parameter \(\eta\), number of iterations \(T\), number of update samples \(T_{upd}\), uncertainty set radius \(\epsilon\)
1: Initialize policy \(\pi_{\theta_0}\), value function \(Q_{\phi_0}\)
2: for \(t = 0, \cdots, T - 1\) do
3: Sample a batch \(\{s_i, a_i, r_i, s'_i\}_{i=1}^{T_{upd}}\) from the offline dataset
4: Compute the corresponding state-adversarial transitions of the batch, \(\{s_i, a_i, r_i, \arg\min_{\tilde{s} \in \mathcal{N}_\epsilon(s'_i)} V^\pi(\tilde{s}|P_B)\}_{i=1}^{T_{upd}}\), by Equation 5
5: Compute the average-scenario Bellman target \(J(\pi_{\theta_t}|P_{\theta_{t-1}}, \alpha)\) by Lemma 2
6: Update value function \(Q_{\phi_t}\)
7: \[
Q = \arg\min_Q \mathbb{E}_{(s,a,r,s') \sim D} \left[ \left( r(s,a) + \gamma \left( \alpha Q_{\phi_t}(s', \pi(s')) + (1-\alpha)Q_{\phi_t}(\mathrm{adv}(s'), \pi(s')) \right) - Q_{\phi_t}(s,a) \right)^2 \right]
\]
8: Update policy \(\pi_{\theta_t}\)
9: \[
\pi = \arg\max_\pi \mathbb{E}_{(s,a) \sim D} \left[ \lambda Q(s, \pi(s)) - (\pi(s) - a)^2 \right]
\]
10: end for

Algorithm 1 details the presented method. During each iteration \(t\), the update of the policy \(\pi_{\theta_t}\) can be achieved using any off-the-shelf RL algorithm (e.g., TD3 (Fujimoto et al., 2018)) to optimize the average-case return \(J(\pi_{\theta_t}|P_{\theta_{t-1}}, \alpha)\). We employ ReBrac (Tarasov et al., 2023) as our base algorithm, retaining its default hyper-parameters. For the relaxed state-adversarial component, we pick \(\epsilon\) from the set \(\{0.03, 0.05, 0.08, 0.1\}\) multiplied by the state differences, i.e., the absolute disparities between consecutive states. In addition, we determine \(\alpha\) by choosing from the set \(\{0.7, 0.8, 0.9\}\).

5 RESULTS AND EVALUATION

We conducted several experiments to evaluate the effectiveness of RAORL. Our objectives are three-fold: 1) **Performance Evaluation:** comparing the proficiency of RAORL with prevailing state-of-the-art benchmarks, including model-based approaches, RAMBO (Rigter et al., 2022) and COMBO (Yu et al., 2021), and model-free approaches, S4RL (Sinha et al., 2022), ReBrac (Tarasov et al., 2023), ATAC (Cheng et al., 2022), IQL (Kostrikov et al., 2022), TD3+BC (Fujimoto & Gu, 2021), and CQL (Kumar et al., 2020); 2) **Ablation Study:** understanding the impact of adversarial training on the algorithm's effectiveness; and 3) **Robustness Analysis:** assessing the algorithm's stability under adversarial conditions. The evaluation spanned multiple environments:

**MuJoCo.** We conducted experiments on three distinct robotic environments (HalfCheetah, Hopper, Walker2D), each with three specific datasets (Medium, Medium-Replay, Medium-Expert).

**AntMaze.** In this environment, the agent operates a robot with the objective of reaching a designated goal. Unlike MuJoCo, the reward system in AntMaze is sparse, rewarding the agent only upon successful goal attainment. The maze has three configurations (Umaze, Medium, Large), and the datasets vary (Fixed, Play, Diverse) based on the diversity of the starting points and goal locations used during data collection.

**Adroit.** This environment pertains to the control of a sophisticated 24-DoF simulated robotic hand. The tasks include hammering a nail, unlocking a door, spinning a pen, and grasping or relocating a ball.
For each task, there are two distinct dataset types (cloned and expert). The datasets are primarily human demonstrations focusing on tasks that demand precision in robotic manipulation.

5.1 PERFORMANCE EVALUATION

The experimental results outlined in Table 1 underscore the efficacy of our RAORL approach. While many previous methods demonstrated strong performance on the MuJoCo datasets (a relatively simple set of environments), RAORL secured a marginally higher average reward than the baseline techniques. As the environmental difficulty increased, leading methods such as RAMBO (Rigter et al., 2022),

Table 1: The performance of RAORL was benchmarked against baseline models, with results averaged across four random seeds. Following Fu et al. (2020), the scores in this table have been normalized using \((S_o - S_r)/(S_e - S_r)\), where \(S_o\), \(S_r\), and \(S_e\) denote the rewards achieved by the offline policy, random policy, and expert policy, respectively. Note that the baseline results were copied from the papers of S4RL, RAMBO, ReBrac, and ATAC.

ATAC (Cheng et al., 2022), and TD3+BC (Fujimoto & Gu, 2021), which once dominated certain MuJoCo datasets, encountered a notable decline in performance. It is worth noting that S4RL (Sinha et al., 2022) employs a comparable adversarial state training approach, propelling states towards their worst nearby states following transitions. However, such a direct application can yield excessively conservative outcomes in practice, especially in real-world scenarios (Lien et al., 2023). This necessitates the adoption of a more tempered version of the state-adversarial technique. As illustrated in Table 1, RAORL demonstrated a marked superiority in complex environments like Adroit and AntMaze, noted as some of the most demanding in the D4RL benchmarks (Fu et al., 2020). Furthermore, while S4RL primarily offered an empirical analysis of state-adversarial methods, our research extends this by providing a theoretical foundation for the performance lower bound, thereby reinforcing the validity and effectiveness of employing state adversaries.

5.2 Ablation Study

Given that RAORL is built upon ReBrac, we assessed the advantages of introducing the relaxed state-adversarial approach to offline RL problems. In Table 1, ReBrac serves as RAORL minus the relaxed state adversary. RAORL consistently outperformed or matched ReBrac across various environments, with the only exception being the door-cloned dataset, where no approach surpassed a score of 10. Given the extreme difficulty of this particular environment, the differences in scores between methods became less consequential. Excluding this special environment, RAORL demonstrated marked improvements over ReBrac, especially on datasets like pen-cloned, AntMaze Medium-Play, AntMaze Large-Play, AntMaze Medium-Diverse, and AntMaze Large-Diverse. Given the elevated challenge these datasets present compared to others, we deduce that incorporating relaxed state adversaries indeed enhances offline RL performance. We also conducted experiments to determine whether RAORL could enhance another foundational algorithm, namely Implicit Q-Learning (IQL), to further substantiate its applicability and effectiveness. Table 2 (left) shows the results of IQL with and without the integration of the relaxed adversarial state technique. These experiments, run on three different seeds, show RAORL's notable improvements in performance.
Table 2: (Left) Experiments on IQL with and without our relaxed state-adversarial technique. (Right) Robustness evaluation on hopper-medium-expert over 4 seeds.

| | IQL | IQL+RA |
|------------------|---------|----------|
| halfcheetah-medium-expert | 86.7 | 93.3 ± 1.5 |
| hopper-medium-expert | 109.6 | 112 ± 1.9 |
| walker2d-medium-expert | 91.5 | 112.4 ± 0.6 |

| Attack magnitude | RAORL | RORL-10 | RORL-2 |
|--------|-------|---------|--------|
| 0.025 | 72.1 | 75.8 | 48.1 |
| 0.05 | 61.2 | 65.0 | 33.2 |
| 0.075 | 53.1 | 53.5 | 32.6 |
| 0.1 | 41.7 | 44.3 | 29.1 |

Figure 2: The blue and red solid lines depict the average performances of RAORL and ReBrac, respectively, in the presence of state perturbations. The vertical axis represents the normalized score, while the horizontal axis indicates the perturbation magnitude. The shaded areas illustrate half a standard deviation of the results, given that the experiments were conducted using four different seeds.

5.3 ROBUSTNESS ANALYSIS

In offline RL, a common and practical challenge arises when data collected from one system (machine A) is used to train an agent that will be deployed on a different but similar system (machine B). Even minor differences between these two machines can lead to distinct Markov decision processes (MDPs), posing a significant challenge in terms of MDP generalization. This situation underscores the importance of developing RL agents that can generalize effectively across varying MDPs. In essence, the agent must be capable of adapting to the nuances and potential discrepancies between the training environment (machine A) and the deployment environment (machine B). Therefore, we evaluated the robustness of policies trained using RAORL and ReBrac against adversarial perturbations of the transition states. Specifically, during the evaluation, agents encountered different levels of adversarial perturbation based on their value functions. The perturbations were designed to transition the agent to states that minimize the expected return of actions taken from those states. Figure 2 provides a side-by-side comparison under different perturbation levels for the Medium-Expert datasets from the Hopper, HalfCheetah, and Walker2d environments. The results highlight RAORL's superior resilience over the baseline, emphasizing the benefits of using relaxed state adversaries in offline RL contexts. Moreover, we compare RAORL with RORL (Yang et al., 2022) in the robustness experiments because RORL is a model-free state-adversarial method that achieves state-of-the-art performance in the MuJoCo environments. As RORL is an ensemble-based model, we use RORL-n to indicate the use of n ensembled models. Table 2 (right) demonstrates that our method achieves results comparable to RORL-10, which involves an ensemble size five times larger than ours. Note that RAORL notably outperformed RORL-2, where the two models have the same number of critics.
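A minimal sketch of this evaluation protocol is given below, assuming the classic Gym step API and treating `env`, `policy`, and `value_fn` as placeholders for a D4RL environment, a trained deterministic policy, and its learned value function; the attack is one FGSM step per observation, mirroring the training-time perturbation.

```python
import numpy as np
import torch

def evaluate_under_state_attack(env, policy, value_fn, magnitude, episodes=10):
    # Roll out the policy while an adversary nudges every observation toward a
    # value-minimizing neighbor within an L-infinity ball of radius `magnitude`.
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            s = torch.tensor(obs, dtype=torch.float32, requires_grad=True)
            value_fn(s).sum().backward()
            attacked = (s - magnitude * s.grad.sign()).detach().numpy()
            obs, reward, done, _ = env.step(policy(attacked))
            total += reward
        returns.append(total)
    return float(np.mean(returns))
```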
6 CONCLUSIONS

We have introduced RAORL, an innovative model-free strategy for offline RL that integrates state-adversarial perturbations, fostering robust policy development based on pre-collected datasets. Theoretically, RAORL offers a performance lower bound, showcasing resilience to discrepancies between the datasets and actual environments. Impressively, RAORL can effortlessly merge with existing model-free offline RL methods, further elevating policy performance. Empirical evaluations on widely recognized continuous control benchmarks underline its performance. In our studies, RAORL frequently outperformed leading methods, especially in complex tasks such as Adroit and AntMaze, demonstrating its effectiveness in offline RL applications.

REFERENCES

Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In *International Conference on Machine Learning*, pp. 104–114. PMLR, 2020.

Ioannis Antonoglou, Julian Schrittwieser, Sherjil Ozair, Thomas K Hubert, and David Silver. Planning in stochastic environments with a learned model. In *International Conference on Learning Representations*, 2022.

Arthur Argenson and Gabriel Dulac-Arnold. Model-based offline planning. In *International Conference on Learning Representations*, 2021.

Kishan Panaganti Badrinath and Dileep Kalathil. Robust reinforcement learning using least squares policy iteration with provable performance guarantees. In *International Conference on Machine Learning*, pp. 511–520. PMLR, 2021.

Philip J Ball, Cong Lu, Jack Parker-Holder, and Stephen Roberts. Augmented world models facilitate zero-shot dynamics generalization from a single offline environment. In *International Conference on Machine Learning*, pp. 619–629. PMLR, 2021.

Ching-An Cheng, Tengyang Xie, Nan Jiang, and Alekh Agarwal. Adversarially trained actor critic for offline reinforcement learning. In *International Conference on Machine Learning*. PMLR, 2022.

Julien Grand Clement and Christian Kroer. First-order methods for wasserstein distributionally robust mdp. In *International Conference on Machine Learning*, pp. 2010–2019. PMLR, 2021.

Esther Derman, Matthieu Geist, and Shie Mannor. Twice regularized MDPs and the equivalence between robustness and regularization. *Advances in Neural Information Processing Systems*, 34, 2021.

Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020.

Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. *Advances in Neural Information Processing Systems*, 34, 2021.

Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *International conference on machine learning*, pp. 1587–1596. PMLR, 2018.

Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *International conference on machine learning*, pp. 2052–2062. PMLR, 2019.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *International Conference on Learning Representations*, 2015.

Toru Hishinuma and Kei Senda. Weighted model estimation for offline model-based reinforcement learning. *Advances in Neural Information Processing Systems*, 34, 2021.

Garud N Iyengar. Robust dynamic programming. *Mathematics of Operations Research*, 30(2): 257–280, 2005.

Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel: Model-based offline reinforcement learning. *Advances in neural information processing systems*, 33: 21810–21823, 2020.

Ilya Kostrikov, Rob Fergus, Jonathan Tompson, and Ofir Nachum. Offline reinforcement learning with fisher divergence critic regularization. In *International Conference on Machine Learning*, pp. 5774–5783. PMLR, 2021.

Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. In *International Conference on Learning Representations*, 2022.
rUH2EDpToF
As far as I understood, there is no guarantee that the model satisfies the marginalization self-consistency constraint (Eq. 5), and therefore the model is comparable to any other model dealing with approximate marginal inference, such as [1] [2].
GENERATIVE MARGINALIZATION MODELS Anonymous authors Paper under double-blind review ABSTRACT We introduce marginalization models (MAMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling with tractable likelihoods by explicitly modeling all induced marginal distributions. Marginalization models enable fast evaluation of arbitrary marginal probabilities with a single forward pass of the neural network, which overcomes a major limitation of methods with exact marginal inference, such as autoregressive models (ARMs). We propose scalable methods for learning the marginals, grounded in the concept of “marginalization self-consistency”. Unlike previous methods, MAMs also support scalable training of any-order generative models for high-dimensional problems under the setting of energy-based training, where the goal is to match the learned distribution to a given desired probability (specified by an unnormalized (log) probability function such as energy or reward function). We demonstrate the effectiveness of the proposed model on a variety of discrete data distributions, including binary images, language, physical systems, and molecules, for maximum likelihood and energy-based training settings. MAMs achieve orders of magnitude speedup in evaluating the marginal probabilities on both settings. For energy-based training tasks, MAMs enable any-order generative modeling of high-dimensional problems beyond the capability of previous methods. 1 INTRODUCTION Deep generative models have enabled remarkable progress across diverse fields, including image generation, audio synthesis, natural language modeling, and scientific discovery. However, there remains a pressing need to better support efficient probabilistic inference for key questions involving marginal probabilities $p(x_s)$ and conditional probabilities $p(x_u | x_v)$, for appropriate subsets $s, u, v$ of the variables. The ability to directly address such quantities is critical in applications such as outlier detection [50, 40], masked language modeling [11, 72], image inpainting [73], and constrained protein/molecule design [69, 55]. Furthermore, the capacity to conduct such inferences for arbitrary subsets of variables empowers users to leverage the model according to their specific needs and preferences. For instance, in protein design, scientists may want to manually guide the generation of a protein from a user-defined substructure under a particular path over the relevant variables. This requires the generative model to perform arbitrary marginal inferences. Towards this end, neural autoregressive models (ARMs) [3, 30] have been developed to facilitate conditional/marginal inference based on the idea of modeling a high-dimensional joint distribution as a factorization of univariate conditionals using the chain rule of probability. Many efforts have been made to scale up ARMs and enable any-order generative modeling under the setting of maximum likelihood estimation (MLE) [30, 66, 20], and great progress has been made in applications such as masked language modeling [72] and image inpainting [20]. However, marginal likelihood evaluation in the most widely-used modern neural network architectures (e.g., Transformers [68] and U-Nets [53]) is limited by $\mathcal{O}(D)$ neural network passes, where $D$ is the length of the sequence. This scaling makes it difficult to evaluate likelihoods on long sequences arising in data such as natural language and proteins. 
In contrast to MLE, in the setting of energy-based training (EB), instead of empirical data samples, we only have access to an unnormalized (log) probability function (specified by a reward or energy function) that can be evaluated pointwise for the generative model to match. In such settings, ARMs are limited to fixed-order generative modeling and lack scalability in training. The subsampling techniques developed to scale the training of conditionals for MLE are no longer applicable when matching log probabilities in energy-based training (see Section 4.3 for details).

Figure 1: Marginalization models (MAMs) enable estimation of any marginal probability with a neural network $\theta$ that learns to "marginalize out" variables. The figure illustrates marginalization of a single variable on bit strings (representing molecules) with two alternatives (versus $K$ in general) for clarity. The bars represent probability masses.

To enhance scalability and flexibility in the generative modeling of discrete data, we propose a new family of generative models, marginalization models (MAMs), that directly model the marginal distribution $p(x_s)$ for any subset of variables $x_s$ in $x$. Direct access to marginals has two important advantages: 1) significantly speeding up inference for any marginal, and 2) enabling scalable training of any-order generative models under both MLE and EB settings. The unique structure of the model allows it to simultaneously represent the coupled collection of all marginal distributions of a given discrete joint probability mass function. For the model to be valid, it must be consistent with the sum rule of probability, a condition we refer to as "marginalization self-consistency" (see Figure 1); learning to enforce this with scalable training objectives is one of the key contributions of this work.

We show that MAMs can be trained under both maximum likelihood and energy-based training settings with scalable learning objectives. We demonstrate the effectiveness of MAMs in both settings on a variety of discrete data distributions, including binary images, text, physical systems, and molecules. We empirically show that MAMs achieve orders-of-magnitude speedups in marginal likelihood evaluation. For energy-based training, MAMs scale any-order generative modeling to high-dimensional problems that previous methods fail to handle.

2 BACKGROUND

We first review two prevalent generative modeling settings. Then we introduce autoregressive models under the two training settings.

Maximum likelihood (MLE) Given a dataset $D = \{x^{(i)}\}_{i=1}^N$ drawn from a data distribution $p = p_{\text{data}}$, we aim to learn the distribution $p_\theta(x)$ that maximizes the probability of the data under our model. Mathematically, we aim to learn the parameters $\theta^\star$ that maximize the log-likelihood:

$$\theta^\star = \arg\max_\theta \mathbb{E}_{x \sim p_{\text{data}}} [\log p_\theta(x)] \approx \arg\max_\theta \frac{1}{N} \sum_{i=1}^N \log p_\theta(x^{(i)}),$$

which is also equivalent to minimizing the Kullback-Leibler divergence under the empirical distribution, i.e., minimizing $D_{\text{KL}}(p_{\text{data}}(x)\|p_\theta(x))$. This is the setting most commonly used in the generation of images (e.g., diffusion models [59, 18, 60]) and language (e.g., GPT [49]), where we can empirically draw observed data from the distribution.

Energy-based training (EB) In this setting, we do not have data from the distribution of interest.
Instead, we have access to the unnormalized (log) probability mass function $f$, usually in the form of a reward or energy function, defined by humans or by physical systems to specify how likely a sample is. Mathematically, we can define the target probability mass function to be $f(x) = \exp(r(x)/\tau)$, where $r(x)$ is the reward function and $\tau > 0$ is a temperature parameter. This expresses the intuitive idea that we would like the model to assign higher probability to data with larger reward. For example, the reward function can represent human preferences in alignment of large language models [43, 42]. In molecular/material design applications, scientists can specify the reward according to how close a particular sample's measured or calculated properties are to some functional desiderata. When modeling the thermodynamic ensemble of physical systems, $r(x)$ is defined to be the (negative) energy function of a given state [41]. Mathematically, we aim to learn the parameters $\theta$ such that $p_\theta(x) \approx f(x)/Z$, where $Z$ is the normalization constant of $f$. A common training criterion is to minimize the KL divergence [41, 71, 9]:
$$\min_\theta D_{KL} \left( p_\theta(x) \,\|\, f(x)/Z \right) = \mathbb{E}_{x \sim p_\theta(x)} \left[ \log p_\theta(x) - \log f(x)/Z \right]. \quad (2)$$

**Autoregressive models** Autoregressive models (ARMs) [3, 30] model a complex high-dimensional distribution $p(x)$ by factorizing it into univariate conditionals using the chain rule:
$$\log p(x) = \sum_{d=1}^{D} \log p(x_d | x_{<d}), \quad (3)$$
where $x_{<d} = \{x_1, \ldots, x_{d-1}\}$. Recently there has been great success in applying autoregressive models to discrete data, such as natural language, proteins [58, 32, 36], and molecules [56, 15]. Due to their sequential nature via modeling the conditionals, evaluation of (joint/marginal) likelihood requires up to $D$ neural network evaluations. This is costly for long sequences, leading to limitations that prevent ARMs from scaling to marginal inference and energy-based training.

**Any-order ARMs (AO-ARMs)** Under the MLE setting, Uria et al. [66] propose to learn the conditionals of ARMs for arbitrary orderings that include all permutations of $\{1, \ldots, D\}$. The model $\phi$ can be trained by maximizing a lower-bound objective [66, 20] that takes an expectation under a uniform distribution on orderings. This objective allows scalable training of AO-ARMs, leveraging efficient parallel evaluation of multiple one-step conditionals for each token in one forward pass with architectures such as the U-Net [53] and Transformers [68]. However, under the EB setting, training AO-ARMs presents challenges, which we will discuss in detail in Section 4.3.

### 3 MARGINALIZATION MODELS

We propose **marginalization models (MAMs)**, a new type of generative model that enables scalable any-order generative modeling as well as efficient marginal evaluation, for both maximum likelihood and energy-based training. The flexibility and scalability of marginalization models are enabled by the explicit modeling of the marginal distribution and the enforcement of **marginalization self-consistency**. In this paper, we focus on generative modeling of discrete structures using vectors of discrete variables. The vector representation encompasses various real-world problems with discrete structures, including language sequence modeling, protein design, and molecules with string-based representations (e.g., SMILES [70] and SELFIES [29]). Moreover, vector representations are inherently applicable to any discrete problem, since it is feasible to encode any discrete object into a vector of discrete variables, as the toy encoding below illustrates.
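As a toy illustration of this vector view (the vocabulary, padding scheme, and dimension below are invented for the example), a string-based molecule can be mapped to a fixed-length integer vector:

```python
# Toy encoding of a SMILES-like string into a fixed-length token vector.
VOCAB = {"C": 0, "N": 1, "O": 2, "=": 3, "(": 4, ")": 5, "<pad>": 6}
D = 12  # fixed dimension; shorter strings are padded

def encode(smiles: str) -> list[int]:
    ids = [VOCAB[ch] for ch in smiles]
    return ids + [VOCAB["<pad>"]] * (D - len(ids))

x = encode("CC(=O)N")   # -> [0, 0, 4, 3, 2, 5, 1, 6, 6, 6, 6, 6]
```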
**Definition** We are interested in modeling the discrete probability distribution $p(x)$, where $x = [x_1, \ldots, x_D]$ is a $D$-dimensional vector and each $x_d$ takes $K$ possible values, i.e., $x_d \in \{1, \ldots, K\}$.

**Marginalization** Let $x_s$ be a subset of variables of $x$ and $x_{s^c}$ be the complement set, i.e., $x_s \subseteq \{x_1, \ldots, x_D\}$ and $x_{s^c} = \{x_1, \ldots, x_D\} \setminus x_s$. The marginal of $x_s$ is obtained by summing over all values of $x_{s^c}$:
$$p(x_s) = \sum_{x_{s^c}} p(x_s, x_{s^c}). \quad (4)$$
We refer to (4) as the "marginalization self-consistency" that any valid distribution should follow. The goal of a marginalization model $\theta$ is to estimate the marginals $p(x_s)$ for any subset of variables $x_s$ as closely as possible. To achieve this, we train a deep neural network $p_\theta$ that minimizes the distance between $p_\theta(x)$ and $p(x)$ on the full joint distribution while enforcing the marginalization self-consistency.¹

**Parameterization** To approximate arbitrary marginals over $x_s$ with a single neural network forward pass, we additionally include the "marginalized out" variables $x_{s^c}$ in the input by introducing a special symbol "?" to denote the missing values. By doing this, we create an augmented $D$-dimensional vector representation $x_s^{\text{aug}} \in X^{\text{aug}} \triangleq \{1, \ldots, K, ?\}^D$ and feed it to the NN. For example, for a binary vector $x$ of length 4, for $x_s = \{x_1, x_3\}$ with $x_1 = 0$ and $x_3 = 1$, $x_s^{\text{aug}} = [0, ?, 1, ?]$ where "?" denotes $x_2$ and $x_4$ being marginalized out. From here onwards we will use $x_s^{\text{aug}}$ and $x_s$ interchangeably.

¹ An alternative is to consider minimizing the distance over some marginal distribution of interest if we only care about a specific marginal. Note this is impractical under the energy-based training setting, where the true marginal $p(x_s)$ is in general intractable to evaluate.

A marginalization model parameterized by a neural network $\theta$ takes in the augmented vector representation $x^{\text{aug}} \in \{1, \ldots, K, ?\}^D$ and outputs the marginal log probability $f_\theta(x_s) = \log p_\theta(x_s)$ that satisfies the marginalization self-consistency constraints:
$$\sum_{x_{s'}} p_\theta([x_s, x_{s'}]) = p_\theta(x_s) \quad \forall x_s \in \{1, \ldots, K, ?\}^D,$$
where $[x_s, x_{s'}]$ denotes the concatenation of $x_s$ and $x_{s'}$.

Given a random ordering of the variables $\sigma \in S_D$, where $S_D$ defines the set of all permutations of $1, 2, \cdots, D$, let $\sigma(d)$ denote the $d$-th element in $\sigma$ and $\sigma(< d)$ the first $d - 1$ elements in $\sigma$. The marginalization can be imposed over one variable at a time, which leads to the following one-step marginalization constraints:
$$p_\theta(x_{\sigma(<d)}) = \sum_{x_{\sigma(d)}} p_\theta(x_{\sigma(\leq d)}), \quad \forall \sigma \in S_D,\; x \in \{1, \cdots, K\}^D,\; d \in [1 : D]. \quad (5)$$
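The one-step constraints in (5) can be checked numerically. The sketch below measures the squared log-space residual of a single constraint, which is exactly the quantity minimized during training; the tiny stand-in network is an invention for the example, not the deep architecture used in the experiments.

```python
import torch
import torch.nn as nn

D, K = 8, 3
MASK = K  # "?" token id

net = nn.Sequential(nn.Embedding(K + 1, 16), nn.Flatten(0),
                    nn.Linear(D * 16, 1))      # toy stand-in for f_theta

def log_marginal(x_aug):
    return net(x_aug).squeeze()                # log p_theta(x_s)

def one_step_violation(x, sigma, d):
    """Squared log-space residual of constraint (5) at ordering sigma, step d."""
    x_lt = torch.full((D,), MASK)
    x_lt[sigma[:d]] = x[sigma[:d]]             # observe x_{sigma(<d)}
    rhs = []
    for k in range(K):                         # sum over values of x_{sigma(d)}
        x_le = x_lt.clone()
        x_le[sigma[d]] = k
        rhs.append(log_marginal(x_le))
    return (log_marginal(x_lt) - torch.logsumexp(torch.stack(rhs), 0)) ** 2

x, sigma = torch.randint(0, K, (D,)), torch.randperm(D)
print(one_step_violation(x, sigma, d=3))
```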
**Sampling** Given the learned marginalization model, one can sample from the learned distribution by picking an arbitrary order $\sigma$ and sampling one variable at a time. To evaluate the conditionals at each step of the generation, we can use the product rule of probability:
$$p_\theta(x_{\sigma(d)} | x_{\sigma(<d)}) = \frac{p_\theta(x_{\sigma(\leq d)})}{p_\theta(x_{\sigma(<d)})}.$$
However, the above is not a valid conditional distribution if the marginalization in (5) is not strictly enforced, since it might not sum exactly to one. Hence we use the following normalized conditional (a code sketch of this procedure appears at the end of this subsection):
$$p_\theta(x_{\sigma(d)} | x_{\sigma(<d)}) = \frac{p_\theta([x_{\sigma(<d)}, x_{\sigma(d)}])}{\sum_{x_{\sigma(d)}} p_\theta([x_{\sigma(<d)}, x_{\sigma(d)}])}. \quad (6)$$
In this paper, we focus on the sampling procedure that generates one variable at a time, but marginalization models can also facilitate sampling multiple variables at a time (see Appendix B.2).

**Scalable learning of marginals with conditionals** In training, we impose the marginalization self-consistency by minimizing the squared error of the constraints in (5) in log-space. Evaluation of each marginalization constraint in (5) requires $K$ NN forward passes, where $K$ is the number of discrete values $x_d$ can take. This makes training challenging to scale when $K$ is large. To address this issue, we augment the marginalization models with learnable conditionals parameterized by $\phi$. The marginalization constraints in (5) can be decomposed into $K$ parallel marginalization constraints, which makes it highly scalable to subsample from for training:
$$p_\theta(x_{\sigma(<d)})\,p_\phi(x_{\sigma(d)} | x_{\sigma(<d)}) = p_\theta(x_{\sigma(\leq d)}), \quad \forall \sigma \in S_D,\; x \in \{1, \cdots, K\}^D,\; d \in [1 : D]. \quad (7)$$
During training, we need to specify a distribution $q(x)$ for subsampling the marginalization constraints to optimize on. In practice, it can be set to the distribution we are interested in performing marginal inference on, such as $p_{\text{data}}$ or the distribution of the generative model $p_{\theta,\phi}$.
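To make the sampling procedure concrete, here is a minimal sketch of any-order generation with the normalized conditionals of Eq. (6), reusing the toy `log_marginal` interface from the self-consistency sketch above. Note the $K$ marginal evaluations per step; in practice one can instead sample from the learned conditional network $p_\phi$, which is also how MAM generates data in the experiments.

```python
import torch

def sample_any_order(log_marginal, D, K, MASK, order=None):
    """Any-order generation via Eq. (6): the softmax over the K candidate
    marginals cancels the shared denominator p_theta(x_{sigma(<d)})."""
    order = torch.randperm(D) if order is None else order
    x = torch.full((D,), MASK)
    for d in order:
        logps = []
        for k in range(K):                 # p_theta([x_{<}, x_d = k]), K passes
            x_try = x.clone()
            x_try[d] = k
            logps.append(log_marginal(x_try))
        probs = torch.softmax(torch.stack(logps), dim=0)   # Eq. (6)
        x[d] = int(torch.multinomial(probs, 1))
    return x
```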
### 4 Training the Marginalization Models

#### 4.1 Maximum Likelihood Estimation Training

In this setting, we train MAMs with the maximum likelihood objective while additionally enforcing the marginalization constraints in Equation (5):
$$\max_{\theta, \phi} \mathbb{E}_{x \sim p_{\text{data}}} \log p_\theta(x)$$
subject to
$$p_\theta(x_{\sigma(<d)})\,p_\phi(x_{\sigma(d)} | x_{\sigma(<d)}) = p_\theta(x_{\sigma(\leq d)}), \quad \forall \sigma \in S_D,\; x \in \{1, \cdots, K\}^D,\; d \in [1 : D]. \quad (8)$$

**Two-stage training** A typical way to solve the above optimization problem is to convert the constraints into a penalty term and optimize the penalized objective, but we empirically found the learning to be slow and unstable. Instead, we identify an alternative two-stage optimization formulation that is theoretically equivalent to Equation (8) but leads to more efficient training:

**Claim 1.** Solving the optimization problem in (8) is equivalent to the following two-stage optimization procedure, under the mild assumption that the neural networks used are universal approximators:

**Stage 1:** $\max_{\theta, \phi} \mathbb{E}_{x \sim p_{\text{data}}} \mathbb{E}_{\sigma \sim U(S_D)} \sum_{d=1}^{D-1} \log p_\phi(x_{\sigma(d)} | x_{\sigma(<d)})$

**Stage 2:** $\min_{\theta} \mathbb{E}_{x \sim q(x)} \mathbb{E}_{\sigma \sim U(S_D)} \mathbb{E}_{d \sim U(1, \cdots, D)} \left( \log[p_\theta(x_{\sigma(<d)})\,p_\phi(x_{\sigma(d)} | x_{\sigma(<d)})] - \log p_\theta(x_{\sigma(\leq d)}) \right)^2$.

To make sure $p_\theta$ is normalized, we can either additionally enforce $p_\theta([?\,?\cdots?]) = 1$ or let $Z_\theta = p_\theta([?\,?\cdots?])$ be the normalization constant. The first stage can be interpreted as fitting the conditionals in the same way as AO-ARMs [66, 20], and the second stage acts as distilling the marginals from the conditionals. The intuition comes from the chain rule of probability: there is a one-to-one correspondence between optimal conditionals $\phi$ and marginals $\theta$, i.e., $\log p_\theta(x) = \sum_{d=1}^{D} \log p_\phi(x_{\sigma(d)} | x_{\sigma(<d)})$ for any $\sigma$ and $x$. By assuming neural networks are universal approximators, we can first optimize for the optimal conditionals, and then optimize for the corresponding optimal marginals. We provide more details in Appendix A.1.

### 4.2 ENERGY-BASED TRAINING

In this setting, we train MAMs using the energy-based training objective in Equation (2) with a penalty term to enforce the marginalization constraints in Equation (5):
$$\min_{\theta,\phi} D_{KL}(p_\theta(x) \,\|\, p(x)) + \lambda\, \mathbb{E}_{x \sim q(x)} \mathbb{E}_\sigma \mathbb{E}_d \left(\log[p_\theta(x_{\sigma(<d)})\,p_\phi(x_{\sigma(d)} | x_{\sigma(<d)})] - \log p_\theta(x_{\sigma(\leq d)})\right)^2,$$
where $\sigma \sim U(S_D)$, $d \sim U(1, \cdots, D)$ and $q(x)$ is the distribution of interest for evaluating marginals.

**Scalable training** We use REINFORCE to estimate the gradient of the KL divergence term:
$$\nabla_\theta D_{KL}(p_\theta(x) \,\|\, p(x)) = \mathbb{E}_{x \sim p_\theta(x)} [\nabla_\theta \log p_\theta(x) (\log p_\theta(x) - \log f(x))] \quad (9)$$
$$\approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log p_\theta(x^{(i)}) (\log p_\theta(x^{(i)}) - \log f(x^{(i)})) \quad (10)$$
For the penalty term, we subsample the ordering $\sigma$ and step $d$ for each data point $x$.

**Efficient sampling with persistent MCMC** We need cheap and effective samples from $p_\theta$ in order to perform REINFORCE, so a persistent set of Markov chains is maintained by randomly picking an ordering and taking block Gibbs sampling steps using the conditional distribution $p_\phi(x_{\sigma(d)} | x_{\sigma(<d)})$ (full algorithm in Appendix A.5), in similar fashion to persistent contrastive divergence [64]. The samples from the conditional distribution $p_\phi$ serve as approximate samples from $p_\theta$ when the two are close to each other. Otherwise, we can additionally use importance sampling for adjustment. A simplified sketch of the persistent-chain refresh follows.
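The sketch below is a loose single-site analogue of the persistent block-Gibbs refresh (the full block-wise algorithm is in the paper's Appendix A.5); the conditional network's output shape here is an assumption for illustration.

```python
import torch

@torch.no_grad()
def gibbs_refresh(chains, cond_net, steps=1):
    """Resample one randomly chosen position of each persistent chain from
    the learned conditionals p_phi. `cond_net` is assumed to map a (B, D)
    batch of tokens to (B, D, K) logits over values at every position."""
    B, D = chains.shape
    for _ in range(steps):
        d = int(torch.randint(D, (1,)))
        probs = torch.softmax(cond_net(chains)[:, d, :], dim=-1)
        chains[:, d] = torch.multinomial(probs, 1).squeeze(1)
    return chains

# The chains persist across training iterations and serve as approximate
# samples from p_theta for the REINFORCE estimator.
```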
### 4.3 ADDRESSING LIMITATIONS OF ARMs

We discuss in more detail how MAMs address some limitations of ARMs. The first one is general to both training settings, while the latter two are specific to energy-based training.

1) **Slow marginal inference of likelihoods** Due to sequential conditional modeling, evaluation of a marginal $p_\phi(x_\sigma)$ with ARMs (or an arbitrary marginal with AO-ARMs) requires applying the NN $\phi$ up to $D$ times, which is inefficient in time and memory for high-dimensional data. In comparison, MAMs are able to estimate any arbitrary marginal with one NN forward pass.

2) **Lack of support for any-order training** In energy-based training, the objective in Equation (2) aims to minimize the distance between $\log p_\phi(x)$ and $\log p(x)$, where $\phi$ is the NN parameters of an ARM. However, unless the ARM is perfectly self-consistent over all orderings, it will not be the case that $\log p_\phi(x) = \mathbb{E}_\sigma \log p_\phi(x | \sigma)$. Therefore, the expected $D_{KL}$ objective over the orderings $\sigma$ would not be equivalent to the original $D_{KL}$ objective, i.e.,
$$\mathbb{E}_{p_\phi} [\mathbb{E}_\sigma \log p_\phi(x | \sigma) - \log p(x)] \neq \mathbb{E}_{p_\phi} [\log p_\phi(x) - \log p(x)].$$
As a result, ARMs cannot be trained with the expected $D_{KL}$ objective over all orderings simultaneously, but instead need to resort to a preset order and minimize the KL divergence between $\log p_\phi(x | \sigma)$ and the target density $\log p(x)$. The self-consistency constraints imposed by MAMs address this issue. MAMs are not limited to a fixed ordering because marginals are order-agnostic and we can optimize the marginalization self-consistency constraints in expectation over orderings.

3) **Training not scalable on high-dimensional problems** When minimizing the difference between $\log p_\phi(x | \sigma)$ and the target $\log p(x)$, ARMs need to sum conditionals to evaluate $\log p_\phi(x | \sigma)$. One might consider subsampling one-step conditionals $p_\phi(x_{\sigma(d)} | x_{\sigma(<d)})$ to estimate $p_\phi(x)$, but this leads to high variance of the REINFORCE gradient in Equation (9) due to the product of the score function and distance terms, which are both high variance (we validate this in experiments; see Figure 3). Consequently, training ARMs for energy-based training necessitates a sequence of $D$ conditional evaluations to compute the gradient of the objective function. This constraint leads to an effective batch size of $B \times D$ for a batch of $B$ samples, significantly limiting the scalability of ARMs to high-dimensional problems. Furthermore, obtaining Monte Carlo samples from ARMs for the REINFORCE gradient estimator is slow when the dimension is high. Due to the fixed input ordering, this process requires $D$ sequential sampling steps, making more cost-effective sampling approaches like persistent MCMC infeasible. Marginalization models circumvent this challenge by directly estimating the log-likelihood with the marginal neural network. Additionally, the support for any-order training enables efficient sampling through the utilization of persistent MCMC methods.

5 RELATED WORK

**Autoregressive models** Developments in deep learning have greatly advanced the performance of ARMs across different modalities, including images, audio, and text. Any-order (order-agnostic) ARMs were first introduced in [66] by training with the any-order lower-bound objective for the maximum likelihood setting and recently appeared in ARDM [20] with state-of-the-art performance for any-order discrete modeling of images/audio.
Germain et al. [16] train an auto-encoder with masking that outputs the sequence of all one-step conditionals for a given ordering, but it does not generate as well as methods [67, 72, 20] that predict one-step conditionals under the given masking. Douglas et al. [14] train an AO-ARM and use importance sampling to estimate arbitrary conditional posteriors, but with limited experimental validation on a synthetic dataset. Shih et al. [57] utilize a modified training objective of ARMs for better marginal inference performance but lose any-order generation capability. Comparisons of MAMs and ARMs are discussed in detail in Section 4.3.

**Arbitrary conditional/marginal models** For continuous data, VAEAC [25] and ACFlow [31] extend the ideas of the conditional variational autoencoder and normalizing flows to model arbitrary conditionals. ACE [62] improves the expressiveness of arbitrary conditional models by directly modeling the energy function, which puts fewer constraints on parameterization but comes at the cost of approximating the normalizing constant. Instead of using neural networks as function approximators, probabilistic circuits (PCs) [6, 45] offer tractable probabilistic models for both conditionals and marginals by building a computation graph with sum and product operations following specific structural constraints. Examples of PCs include Chow-Liu trees [7], arithmetic circuits [10], sum-product networks [47], etc. Peharz et al. [45] have improved the scalability of PCs by combining arithmetic operations into a single monolithic einsum operation with automatic differentiation. More recently, [33, 34] demonstrated the potential of PCs by distilling latent variables from trained deep generative models on continuous image data. However, their expressiveness is limited by the structural constraints. All methods mentioned above focus on MLE settings; the exception is that ARMs have been explored in energy-based training for science problems [9, 71], but they suffer in scaling when $D$ is large.

**GFlowNets** GFlowNets [2, 4] formulate the problem of generation as matching the probability flow at terminal states to the target normalized density. Compared to ARMs, GFlowNets allow flexible modeling of the generation process by assuming learnable generation paths through a directed acyclic graph (DAG). The advantages of learnable generation paths come with the trade-off of sacrificing the flexibility of any-order generation and exact likelihood evaluation. Under a fixed generation path, GFlowNets reduce to fixed-order ARMs [74]. In Appendix A.3, we further identify the connections and differences between GFlowNets and AO-ARMs/MAMs. For discrete problems, Zhang et al. [75] train GFlowNets on the squared distance loss with the trajectory balance objective [38], which is less scalable for large $D$ (for the same reason as ARMs in Section 4.3) and renders direct access to marginals unavailable. For the MLE setting, an energy function is additionally learned from data such that training is reduced to energy-based training.

6 EXPERIMENTS

We conduct experiments with marginalization models (MAM) in both MLE and EB settings for discrete problems including binary images, text, molecules and physical systems. We consider the following baselines for comparison: Any-order ARM (AO-ARM) [20], ARM [30], GFlowNet [39, 75], Discrete Flow [65] and Probabilistic Circuit (PC) [45].

Figure 4: An example of the data generated (with 100/400/700 pixels masked) for comparing the quality of the likelihood estimates.
Numbers below the images are LL estimates from MAM's marginal network (left) and AO-ARM-E's ensemble estimate (right).

Table 1: Performance comparison on Binary MNIST.

| Model | NLL (bpd) ↓ | Spearman's ↑ | Pearson ↑ | Marg. inf. time (s) ↓ |
|------------------------|-------------|--------------|-----------|----------------------|
| AO-ARM-E-U-Net | 0.148 | 1.0 | 1.0 | 661.98 ± 0.49 |
| AO-ARM-S-U-Net | 0.149 | 0.996 | 0.993 | 132.40 ± 0.03 |
| GFlowNet-MLP | 0.189 | – | – | – |
| PC-Image (EiNets)\(^4\) | 0.187 | 0.716 | 0.752 | 0.015 ± 0.00 |
| MAM-U-Net | 0.149 | 0.992 | 0.993 | 0.018 ± 0.00 |

MAM, PC and (AO-)ARM support arbitrary marginal inference. Discrete Flow\(^3\) allows exact likelihood evaluation, while GFlowNet needs to approximate the likelihood with a sum over importance samples. For evaluating AO-ARM's marginal inference, we can either use an ensemble estimate averaging over several random orderings (AO-ARM-E) or use a single random ordering (AO-ARM-S). In general, AO-ARM-E should always be better than AO-ARM-S, but at a much higher cost. Neural network architecture and training hyperparameter details can be found in Appendix C. Ablation studies measuring marginal self-consistency and sampling with marginals are in Appendices B.1 and B.2. Guidance on picking $q$ is in Appendix B.3. Appendix C.3 contains more results on CIFAR-10.

6.1 Maximum Likelihood Estimation Training

**Binary MNIST** We report the negative test likelihood (bits/digit), marginal estimate quality, and marginal inference time per minibatch (of size 16) in Table 1. To keep GPU memory usage the same, we sequentially evaluate the likelihood for ARMs. Both MAM and AO-ARM use a U-Net architecture with 4 ResNet blocks interleaved with attention layers (see Appendix C). GFlowNets fail to scale to architectures as large as the U-Net, hence we report GFlowNet results using an MLP from Zhang et al. [75]. For MAM, we use the conditional network to evaluate test likelihood (since this is also how MAM generates data). The marginal network is used for evaluating marginal inference, and the quality of its marginal estimates is compared against the best-performing model.

In order to evaluate the quality of marginal likelihood estimates, we employ a controlled experiment where we randomly mask out portions of a test image and generate multiple samples with varying levels of masking (refer to Figure 4). This process allows us to obtain a set of distinct yet comparable samples, each associated with a different likelihood value. For each model, we evaluate the likelihood of the generated samples and compare it with AO-ARM-E's estimate, since AO-ARM-E achieves the best likelihood on test data. We repeat this controlled experiment on a random set of test images. The mean Spearman and Pearson correlations are reported to measure the strength of correlation in marginal inference likelihoods between the given model and AO-ARM-E. MAM achieves close to four orders of magnitude speed-up in marginal inference at quality comparable to that of AO-ARM-S. PCs are also very fast in marginal inference, but there remains a gap in terms of quality. Generated samples and additional marginal inference on partial images are in Appendix C.

**Molecular sets (MOSES)** We test generative modeling of MAM on a benchmark molecular dataset [46] refined from the ZINC database [61]. The same metrics are reported as for Binary MNIST. Likelihood quality is measured similarly, but on random groups of test molecules instead of generated ones.
The generated molecules from MAM and AO-ARM are comparable to standard state-of-the-art molecular generative models, such as CharRNN [56], JT-VAE [26], and LatentGAN [48] (see Appendix C), with additional controllability and flexibility from any-order generation. MAM supports much faster marginal inference, which is useful for domain scientists to reason about the likelihood of (sub)structures. Generated molecules and property histogram plots are available in Appendix C.

\(^3\) Results are only reported on text8 for Discrete Flow since there is no public code implementation.
\(^4\) We adopt the SOTA implementation of PCs from EiNets [45]. Results are reported on Binary MNIST using the image-tailored PC structure [47]. For text and molecular data, designing tailored PC structures that deliver competitive performance remains an open challenge.

Table 2: Performance Comparison on Molecular Sets

| Model | NLL (bpd) ↓ | Spearman's ↑ | Pearson ↑ | Marg. inf. time (s) ↓ |
|------------------------|-------------|--------------|-----------|----------------------|
| AO-ARM-E-Transformer | **0.652** | 1.0 | 1.0 | 96.87 ± 0.04 |
| AO-ARM-S-Transformer | **0.655** | 0.996 | 0.994 | 19.32 ± 0.01 |
| MAM-Transformer | **0.655** | 0.998 | 0.995 | **0.006 ± 0.00** |

Table 3: Performance Comparison on text8

| Model | NLL (bpc) ↓ | Spearman's ↑ | Pearson ↑ | Marg. inf. time (s) ↓ |
|------------------------|-------------|--------------|-----------|----------------------|
| Discrete Flow (8 flows)| 1.23 | – | – | – |
| AO-ARM-E-Transformer | **1.494** | 1.0 | 1.0 | 207.60 ± 0.33 |
| AO-ARM-S-Transformer | 1.529 | 0.982 | 0.987 | 41.40 ± 0.01 |
| MAM-Transformer | 1.529 | 0.937 | 0.945 | **0.005 ± 0.000** |

Table 4: Performance Comparison on Ising model (10 × 10)

| Model | NLL (bpd) ↓ | KL divergence ↓ | Marg. inf. time (s) ↓ |
|------------------------|-------------|-----------------|----------------------|
| ARM-Forward-Order-MLP | 0.79 | -78.63 | 5.29 ± 0.07e-01 |
| ARM-MC-Forward-Order-MLP| 24.84 | -18.01 | 5.30 ± 0.07e-01 |
| GFlowNet-Learned-Order-MLP| **0.78** | -78.17 | – |
| MAM-Any-Order-MLP | 0.80 | -77.77 | **3.75 ± 0.08e-04** |

Table 5: Performance Comparison on Target Lipophilicity (KL divergence ↓)

| Model | logP = 4, τ = 1.0 | logP = −4, τ = 1.0 | logP = 4, τ = 0.1 | logP = −4, τ = 0.1 |
|------------|-------------------|--------------------|-------------------|--------------------|
| ARM-FO-MLP | -174.25 | -168.62 | -167.83 | -160.2 |
| MAM-AO-MLP | -173.07 | -166.43 | -165.75 | -157.59 |

**Text8** Text8 [37] is a widely used character-level natural language modeling dataset. The dataset comprises 100M characters from Wikipedia, split into chunks of 250 characters. We follow the same testing procedure as for Binary MNIST and report the same metrics. The test NLL of Discrete Flow is from [65], for which there is no open-source implementation to evaluate additional metrics.

6.2 ENERGY-BASED TRAINING

We compare with ARM, which uses the sum of conditionals to evaluate $\log p_\phi$ with a fixed forward ordering, and ARM-MC, which uses a one-step conditional to estimate $\log p_\theta$. ARM can be regarded as the gold standard of learning autoregressive conditionals, since its gradient needs to be evaluated on the full generation trajectory, which is the most informative and costly. MAM uses the marginal network to evaluate $\log p_\theta$ and subsamples a one-step marginalization constraint for each data point in the batch. The effective batch size for ARM and GFlowNet is $B \times O(D)$ for a batch of size $B$, and $B \times O(1)$ for ARM-MC and MAM. The gradient estimator shared by the REINFORCE-style methods is sketched next.
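For reference, a minimal sketch of the REINFORCE-with-baseline surrogate loss used in the comparisons below; the mean baseline is one common choice and is an assumption here.

```python
import torch

def reinforce_kl_loss(log_p_fn, log_f_fn, samples):
    """Surrogate loss whose gradient matches Eqs. (9)-(10), with a mean
    baseline for variance reduction (the baseline choice is an assumption)."""
    logp = log_p_fn(samples)                    # log p_theta(x), differentiable
    with torch.no_grad():
        adv = logp - log_f_fn(samples)          # log p_theta(x) - log f(x)
        adv = adv - adv.mean()                  # baseline subtraction
    return (logp * adv).mean()                  # .backward() gives the estimate
```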
MAM and ARM optimize the KL divergence using the REINFORCE gradient estimator with a baseline, as sketched above. GFlowNet is trained on the per-sample gradient of the squared distance [75].

**Ising model** Ising models [24] model interacting spins and are widely studied in mathematics and physics (see MacKay [35]). We study the Ising model on a square lattice. The spins of the $D$ sites are represented by a $D$-dimensional binary vector and its distribution is $p^*(x) \propto f^*(x) = \exp(-E_J(x))$, where $E_J(x) \triangleq -x^\top J x - \theta^\top x$, with $J$ the binary adjacency matrix. These models, although simplistic, bear analogies to the complex behavior of high-entropy alloys [9]. We compare MAM with ARM, ARM-MC, and GFlowNet on a $10 \times 10$ ($D=100$) and a larger $30 \times 30$ ($D=900$) Ising model, where ARMs and GFlowNets fail to scale. 2000 ground-truth samples are generated following Grathwohl et al. [17] and we measure test negative log-likelihood on those samples. We also measure $D_{KL}(p_\theta(x)\,\|\,p^*)$ by sampling from the learned model and evaluating $\sum_{i=1}^{M} (\log p_\theta(x_i) - \log f^*(x_i))$. Figure 5 contains KDE plots of $-E_J(x)$ for the generated samples.

As described in Section 4.3, the ARM-MC gradient suffers from high variance and fails to converge. It also tends to collapse and converge to a single sample. MAM has a significant speedup in marginal inference and is the only model that supports any-order generative modeling. Its performance in terms of KL divergence and likelihood is only slightly worse than that of models with fixed/learned order, which is expected since any-order modeling is harder than fixed-order modeling, and MAM is solving the more complicated task of jointly learning conditionals and marginals. On a $30 \times 30$ ($D = 900$) Ising model, MAM achieves a bpd of 0.835 on ground-truth samples while ARM and GFlowNet fail to scale. The distribution of generated samples is shown in Figure 5.

**Molecular generation with target property** In this task, we are interested in training generative models towards a specific target property of interest $g(x)$, such as lipophilicity (logP), synthetic accessibility (SA), etc. We define the distribution of molecules to follow $p^*(x) \propto \exp\left(-\frac{(g(x) - g^*)^2}{\tau}\right)$, where $g^*$ is the target value of the property and $\tau$ is a temperature parameter (a sketch of this reward appears below). We train ARM and MAM for lipophilicity with target values 4.0 and −4.0, both with $\tau = 1.0$ and $\tau = 0.1$. Both models are trained for 4000 iterations with batch size 512. Results are shown in Figure 6 and Table 5 (additional figures in Appendix C). Findings are consistent with the Ising model experiments. Again, MAM performs just marginally below ARM. However, only MAM supports any-order modeling and scales to high-dimensional problems. Figure 6 (right) shows molecular generation with MAM for $D = 500$.
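As a concrete instance of such a reward, the sketch below scores SMILES strings against a target logP, assuming RDKit's Crippen estimator as the property oracle $g$; this oracle choice and the handling of invalid molecules are assumptions for illustration.

```python
import math
from rdkit import Chem
from rdkit.Chem import Descriptors

def log_reward(smiles, target_logp=4.0, tau=1.0):
    """log f(x) = -(g(x) - g*)^2 / tau, with g taken to be RDKit's Crippen
    logP (an assumed oracle); invalid SMILES strings get -inf."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return -math.inf
    return -((Descriptors.MolLogP(mol) - target_logp) ** 2) / tau

print(log_reward("CCCCCCCC"))   # octane: high logP, so a small penalty at g* = 4
```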
**7 CONCLUSION**

In conclusion, marginalization models are a novel family of generative models for high-dimensional discrete data that offer scalable and flexible generative modeling with tractable likelihoods. These models explicitly model all induced marginal distributions, allowing for fast evaluation of arbitrary marginal probabilities with a single forward pass of the neural network. MAMs also support scalable training objectives for any-order generative modeling, which previous methods struggle to achieve under the energy-based training setting. Potential future work includes designing new neural network architectures that automatically satisfy the marginalization self-consistency.

REFERENCES

[1] Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. *Advances in Neural Information Processing Systems*, 34:17981–17993, 2021.
[2] Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. *Advances in Neural Information Processing Systems*, 34:27381–27394, 2021.
[3] Samy Bengio and Yoshua Bengio. Taking on the curse of dimensionality in joint distributions using neural networks. *IEEE Transactions on Neural Networks*, 11(3):550–557, 2000.
[4] Yoshua Bengio, Salem Lahlou, Tristan Deleu, Edward J. Hu, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations. *Journal of Machine Learning Research*, 24(210):1–55, 2023.
[5] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. *arXiv preprint arXiv:1509.00519*, 2015.
[6] Y. Choi, Antonio Vergari, and Guy Van den Broeck. Probabilistic circuits: A unifying framework for tractable probabilistic models. *UCLA. URL: http://starai.cs.ucla.edu/papers/ProbCirc20.pdf*, 2020.
[7] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. *IEEE Transactions on Information Theory*, 14(3):462–467, 1968.
[8] George Cybenko. Approximation by superpositions of a sigmoidal function. *Mathematics of Control, Signals and Systems*, 2(4):303–314, 1989.
[9] James Damewood, Daniel Schwalbe-Koda, and Rafael Gómez-Bombarelli. Sampling lattices in semi-grand canonical ensemble with autoregressive machine learning. *npj Computational Materials*, 8(1):61, 2022.
[10] Adnan Darwiche. A differential approach to inference in Bayesian networks. *Journal of the ACM (JACM)*, 50(3):280–305, 2003.
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.
[12] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. *arXiv preprint arXiv:1410.8516*, 2014.
[13] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. *arXiv preprint arXiv:1605.08803*, 2016.
[14] Laura Douglas, Iliyan Zarov, Konstantinos Gourgoulias, Chris Lucas, Chris Hart, Adam Baker, Maneesh Sahani, Yura Perov, and Saurabh Johri. A universal marginalizer for amortized inference in generative models. *Advances in Approximate Bayesian Inference, NIPS 2017 Workshop*, 2017.
[15] Daniel Flam-Shepherd, Kevin Zhu, and Alán Aspuru-Guzik. Language models can learn complex molecular distributions. *Nature Communications*, 13(1):3293, 2022.
[16] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. In *International Conference on Machine Learning*, pp. 881–889. PMLR, 2015.
[17] Will Grathwohl, Kevin Swersky, Milad Hashemi, David Duvenaud, and Chris Maddison. Oops I took a gradient: Scalable sampling for discrete distributions. In *International Conference on Machine Learning*, pp. 3831–3841. PMLR, 2021.
[18] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in Neural Information Processing Systems*, 33:6840–6851, 2020.
[19] Emiel Hoogeboom, Jorn Peters, Rianne Van Den Berg, and Max Welling.
Integer discrete flows and lossless compression. *Advances in Neural Information Processing Systems*, 32, 2019.
wXpSidPpc5
Not much detail is provided in the main text regarding how we train such a beast. I must say it looks quite daunting to me how I would train a NODE alongside my Transformer model. I guess it would help to have some explanation of it.
CLEX: Continuous Length Extrapolation for Large Language Models

Guanzheng Chen\textsuperscript{1,2,3,*} Xin Li\textsuperscript{2,3,†} Zaiqiao Meng\textsuperscript{4} Shangsong Liang\textsuperscript{1,5,†} Lidong Bing\textsuperscript{2,3}
\textsuperscript{1}Sun Yat-sen University \textsuperscript{2}DAMO Academy, Alibaba Group \textsuperscript{3}Hupan Lab, 310023, Hangzhou, China \textsuperscript{4}University of Glasgow \textsuperscript{5}Mohamed bin Zayed University of Artificial Intelligence
guanzh.chen@gmail.com, {xinting.lx,l.bing}@alibaba-inc.com, zaiqiao.meng@glasgow.ac.uk, liangshangsong@gmail.com

*This work was done during the internship of Guanzheng Chen at Alibaba DAMO Academy. †Corresponding authors.

Abstract

Transformer-based Large Language Models (LLMs) are pioneering advances in many natural language processing tasks; however, their exceptional capabilities are restricted within the preset context window of the Transformer. Position Embedding (PE) scaling methods, while effective in extending the context window to a specific length, either demonstrate notable limitations in their extrapolation abilities or sacrifice partial performance within the context window. Length extrapolation methods, although theoretically capable of extending the context window beyond the training sequence length, often underperform in practical long-context applications. To address these challenges, we propose Continuous Length EXtrapolation (CLEX) for LLMs. We generalise the PE scaling approaches to model the continuous dynamics by ordinary differential equations over the length scaling factor, thereby overcoming the constraints of current PE scaling methods designed for specific lengths. Moreover, by extending the dynamics to desired context lengths beyond the training sequence length, CLEX facilitates length extrapolation with impressive performance in practical tasks. We demonstrate that CLEX can be seamlessly incorporated into LLMs equipped with Rotary Position Embedding, such as LLaMA and GPT-NeoX, with negligible impact on training and inference latency. Experimental results reveal that CLEX can effectively extend the context window to over $4\times$ or almost $8\times$ the training length, with no deterioration in performance. Furthermore, when evaluated on the practical LongBench benchmark, our model trained on a 4k length exhibits competitive performance against state-of-the-art open-source models trained on context lengths up to 32k. Our code is available at \url{https://github.com/DAMO-NLP-SG/CLEX}.

1 Introduction

Transformer-based large language models (LLMs), such as GPT-4 \cite{OpenAI2023} and LLaMA \cite{Touvron2023a,Touvron2023b}, have now emerged as the state-of-the-art models in various natural language processing (NLP) tasks. However, these models grapple with a limitation inherent to the Transformer architecture, namely a preset context window, beyond which performance plummets catastrophically \cite{Press2022}. The quadratic complexity of the attention mechanism renders training LLMs with a larger context window extraordinarily resource-intensive. Prior works \cite{Dai2019,Beltagy2020,Bulatov2022} have proposed circumventing full context-length access via hierarchical architectures or sparse attention, albeit at the expense of forfeiting partial context information. Recently, there have been two lines of methods aimed at efficiently extending the pre-trained context length of LLMs, both centred on position embedding (PE).
The first line of methods, known as PE scaling, is proposed to effectively extend the context window of LLMs integrated with Rotary Position Embedding (RoPE) \cite{su2022rope}. They allow LLMs to access longer context by scaling either the position indices \cite{chen2023scaling} or the frequency basis \cite{roziere2023scaling, peng2023scaling} of RoPE, demonstrating remarkable performance in long-context applications. However, such methods are designed for extending the context length corresponding to a fixed scaling factor, which either restricts their ability to extrapolate to longer sequences (when using small factors) or impairs the performance even within the native context window (when using large factors), as shown in Figure 1. On the other hand, length extrapolation methods \cite{press2022alibi, sun2023scaling, chi2022scaling, chi2023scaling}, typified by ALiBi \cite{press2022alibi}, strive to achieve test-time context length extension (i.e., "training on short, testing on long") by substituting position embeddings with additional biases, where the biases encode positional information into the attention scores. Despite their impressive capability in language modelling, ALiBi-like methods usually struggle in practical tasks requiring long-context dependency \cite{pal2023scaling} (also see §4.3).

In this work, we present Continuous Length Extrapolation (CLEX), a novel approach that efficiently extrapolates the context window of LLMs through continuous PE scaling. Concretely, we propose a unified view of PE scaling by generalising PE scaling methods to the transition of the frequency basis. Upon it, we formulate PE scaling as a continuous dynamical system, which models the transition of the frequency basis through continuous dynamics over the length scaling factor. We argue that previous PE scaling methods, training models using fixed (discrete) scaling factors, overlook the progressively continuous dynamics of the gradually length-extending process. This ensnares them in the aforementioned dilemma between extrapolating the length and preserving the performance within shorter lengths. In contrast, our CLEX exploits a neural ordinary differential equation (ODE) \cite{chen2018neural}, parameterised by an up-and-down projection layer with lightweight parameters, to learn these continuous dynamics, enabling fine-grained extension to long context. More essentially, by extending the dynamics beyond the training length, CLEX empowers models to progressively extrapolate to longer contexts even when trained with short sequences.

CLEX can serve as a drop-in component for RoPE-based LLMs, such as LLaMA \cite{touvron2023llama} and GPT-NeoX \cite{black2022gpt}, with negligible overhead in computation and parameter size. We evaluate the performance of CLEX on two datasets: (1) a subset of RedPajama-Book \cite{computer2023redpajama} for long-context language modelling, and (2) LongBench \cite{bai2023longbench} for long-context practical tasks. Empirically, CLEX demonstrates remarkable length extrapolation ability in language modelling and can extend the context window to more than $4\times$ the training length without any performance deterioration. For example, LLaMA-2-7B trained with CLEX on a 16k context length achieves comparable perplexities when testing on 16k and 64k tokens, respectively.
By scaling the base model from 7B to 13B, CLEX exhibits an expanded extrapolation scope from $4\times$ to almost $8\times$ the training length. To be complementary, we also conduct instruction tuning \cite{wei2022scaling} with the proposed CLEX on sequences of 4k length. The resulting model, when evaluated on the LongBench benchmark, is on par with current state-of-the-art open-source models trained on context lengths up to 32k. These findings underscore the effectiveness of CLEX in extrapolating context length, signifying its efficiency for developing long-context LLMs.

2 PRELIMINARIES

2.1 ROTARY POSITION EMBEDDING (RoPE)

Rotary Position Embedding (RoPE) \cite{su2022rotary} has recently emerged as the most prevailing positional encoding method in open-source LLMs like LLaMA. It integrates both absolute and relative positional information for Transformer models. Given a position index \( m \in [1, L] \), RoPE injects the absolute positional information into \( x \in \mathbb{R}^d \) via the transformation \( f : \mathbb{R}^d \rightarrow \mathbb{R}^d \) as:
\[ f(x, m, \theta) = R_{\theta,m}x, \tag{1} \]
where \( \theta \in \mathbb{R}^{\lfloor d/2 \rfloor} \) is the rotation frequency basis with \( \theta_i = 10{,}000^{-2i/d} \), and \( R_{\theta,m} \in \mathbb{R}^{d \times d} \) is a block-diagonal matrix formed by the elements
\[ (R_{\theta,m})_i = \begin{bmatrix} \cos m\theta_i & -\sin m\theta_i \\ \sin m\theta_i & \cos m\theta_i \end{bmatrix}, \quad \text{for } i = 1, 2, \ldots, \lfloor d/2 \rfloor. \tag{2} \]
The transformation in Eq. (1) is applied to the query and key vectors during self-attention. When calculating the attention score between the query vector \( q_m \in \mathbb{R}^d \) at position \( m \) and the key vector \( k_n \in \mathbb{R}^d \) at position \( n \), we have
\[ (R_{\theta,m}q_m)^\top(R_{\theta,n}k_n) = q_m^\top R_{\theta,n-m}k_n. \tag{3} \]
Hence, the relative positional information \( R_{\theta,n-m} \) is implicitly incorporated into the attention scores. However, even given the relative information, LLMs trained with RoPE, e.g., LLaMA, still cannot achieve reasonable performance beyond the pre-trained context length.

2.2 PE SCALING METHODS

To extend the context length \( L \), several strategies have been proposed to adjust the position embedding by scaling either the position index \( m \) or the frequency basis \( \theta \) in Eq. (1). Formally, we define \( t = L'/L \) as the length scaling factor, where \( L' \) denotes the desired extended length. \cite{chen2023scaling} introduces scaling of the index \( m \) by Position Interpolation (PI) as
\[ f_{t}^{\text{PI}}(x, m, \theta) = f(x, \frac{m}{t}, \theta). \tag{4} \]
This strategy maintains the position indices within the range \([1, L]\), while effectively extending the processed range to \([1, t \cdot L]\) with minimal fine-tuning steps on \( t \cdot L \) sequences. On the other hand, \cite{peng2023yarn} proposes Yarn, a.k.a. NTK-Aware Scaled RoPE, which extends the context window by frequency basis scaling (FBS); a similar strategy is utilised by CodeLLaMA \cite{roziere2023code}. Formally, the FBS methods are denoted as
\[ f_{t}^{\text{FBS}}(x, m, \theta) = f(x, m, \theta_t), \tag{5} \]
where \( \theta_t \) is the scaled frequency basis. Specifically, \( \theta_{t,i} = \theta_i \cdot t^{-2i/(d-2)} \) in Yarn and \( \theta_{t,i} = \theta_i \cdot 100^{-2i/d} \) in CodeLLaMA. A code sketch of RoPE and these two scaling rules follows.
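To ground the notation, here is a minimal sketch of the RoPE frequency basis, the rotation of Eqs. (1)-(2), and the two FBS rules above. The 0-based indexing is a common implementation convention (an assumption relative to the 1-based notation in the text), and the final assertion previews the index/basis equivalence formalised in Theorem 1 below.

```python
import torch

def rope_freqs(d, base=10_000.0):
    """theta_i of Eq. (1), using 0-based i (the usual implementation choice)."""
    i = torch.arange(d // 2)
    return base ** (-2.0 * i / d)

def scale_freqs(theta, t, method):
    d = 2 * theta.numel()
    if method == "pi":                      # alpha_PI(t) = 1/t
        return theta / t
    if method == "yarn":                    # alpha_Yarn(t)_i = t^(-2i/(d-2))
        i = torch.arange(d // 2)
        return theta * t ** (-2.0 * i / (d - 2))
    raise ValueError(method)

def rotate(x, m, theta):
    """Apply the block-diagonal rotation R_{theta,m} of Eqs. (1)-(2)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = torch.cos(m * theta), torch.sin(m * theta)
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

x, theta, m, t = torch.randn(8), rope_freqs(8), 100.0, 4.0
assert torch.allclose(rotate(x, m / t, theta),                     # scale the index...
                      rotate(x, m, scale_freqs(theta, t, "pi")))   # ...or the basis
```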
3 METHODOLOGY

This section demonstrates the details of CLEX. We first generalise PE scaling to a continuous dynamical system in a unified manner (see §3.1). On top of the continuous dynamical system, CLEX employs a neural ODE, parameterised by an up-and-down projection layer, to adaptively learn the continuous dynamics during PE scaling (see §3.2). In §3.3, we introduce the training strategy of CLEX, which extends the continuous dynamics beyond the training sequence length, thereby generalising continuous PE scaling to achieve length extrapolation.

3.1 POSITION EMBEDDING SCALING: A UNIFIED VIEW

Given the various methods that extend models' context length through position index scaling and frequency basis scaling, we first show that the transformations applied to position indices essentially cast the frequency basis, as formalised in Theorem 1.

Figure 2: The graphical model of discrete PE scaling (left) and our continuous PE scaling (right).

**Theorem 1.** For a transformation \( T \) applied to the position index \( m \), there exists an equivalent transformation \( \mathcal{T} \) applied to the frequency basis \( \theta \) in Eq. (4), namely
\[ f(x, T \cdot m, \theta) = f(x, m, \mathcal{T} \odot \theta), \tag{6} \]
where \( \mathcal{T} = [T]_{i=1}^{d/2} \) and \( \odot \) denotes the element-wise transformation.

**Proof.** From Eq. (1), we have \( f(x, T \cdot m, \theta) = R_{\theta, Tm} x \) and \( f(x, m, \mathcal{T} \odot \theta) = R_{\mathcal{T} \odot \theta, m} x \). For any \( \mathcal{T} = [T]_{i=1}^{d/2} \),
\[ (R_{\theta, Tm})_i = \begin{bmatrix} \cos Tm\theta_i & -\sin Tm\theta_i \\ \sin Tm\theta_i & \cos Tm\theta_i \end{bmatrix} = \begin{bmatrix} \cos m(T \odot \theta_i) & -\sin m(T \odot \theta_i) \\ \sin m(T \odot \theta_i) & \cos m(T \odot \theta_i) \end{bmatrix} = (R_{\mathcal{T} \odot \theta, m})_i. \tag{7} \]

Hence, there is a unified form for PE scaling that consistently projects the frequency basis by \( \alpha(t) \):
\[ f_t(x, m, \theta) = f(x, m, \alpha(t) \odot \theta), \tag{8} \]
where \( \alpha(t) \) is a single-variable transformation defined over the length scaling factor \( t \). Through this unified formulation, PI (Chen et al., 2023) and Yarn (Peng et al., 2023) can be viewed as the special cases obtained by plugging \( \alpha^{\text{PI}}(t) = [1/t]_{i=1}^{d/2} \) and \( \alpha^{\text{Yarn}}(t) = [t^{-2i/(d-2)}]_{i=1}^{d/2} \) into Eq. (8), respectively. Note that \( \theta_t = \alpha(t) \odot \theta \) denotes the scaled frequency basis at a context length of \( t \cdot L \) and \( \theta_1 = \theta \) (namely \( \alpha(1) = 1 \)). As illustrated in Figure 2, this indicates a progressive chain across discrete \( t \) values:
\[ z(t) = z(1) + \log \alpha(t) = z(t-1) + \log \frac{\alpha(t)}{\alpha(t-1)}, \tag{9} \]
where \( z(t) = \log \theta_t \). By continuising the progressive chain, we can formulate PE scaling as a continuous dynamical system, with the continuous dynamics of the frequency basis \( dz(t)/dt \) given by
\[ \frac{dz(t)}{dt} = \frac{d \log \alpha(t)}{dt}. \tag{10} \]
In essence, recent PE scaling methods, concentrating on manually formulating \( \alpha(t) \), are equivalent to applying various dynamics to the frequency basis that enable models to adapt to longer contexts.

### 3.2 Continuous PE Scaling via Neural ODE

Even given the continuous dynamics of the frequency basis, previous methods are inherently designed for extending the context length at discrete \( t \) values. For example, PI (Chen et al., 2023) fine-tunes the model on a specific scaling factor \( t \) to extend the context window length to \( t \cdot L \).
One potential issue of these methods, as depicted in Figure 1, is that they are susceptible to overfitting to the specified frequency basis, leading to a poor ability to extrapolate to lengths beyond training, to performance drops within short lengths, or to both in some cases. Therefore, our CLEX aims to build a continuous PE scaling, which induces the model to adapt to the frequency bases corresponding to a continuous scope of $t$, as illustrated in Figure 2 (right).

Recall that previous PE scaling, corresponding to a manually defined $\alpha(t)$, implies the constant dynamics in Eq. (10). In our method, we utilise a variable function $g: \mathbb{R}^{d/2} \rightarrow \mathbb{R}^{d/2}$ to model the dynamics, hence taking a more general and flexible view:
$$\frac{dz(t)}{dt} = g(z(t), t). \quad (11)$$
By making the function depend on the latent states $z(t)$, $g$ is capable of capturing the fine-grained changes of the frequency basis during the length-extending process. However, it is non-trivial to manually define the $z(t)$-aware function $g$. Here, we directly parameterise the function using a neural network $\phi$. Therefore, for any $t' \in [1, t]$, there is a neural ODE modelling the scaling of the frequency basis as
$$z(t') = z(1) + \int_1^{t'} g_\phi(z(t), t)\, dt, \quad (12)$$
where the frequency basis at the length $t' \cdot L$ can be derived as $\theta_{t'} = \exp(z(t'))$. More specifically, we adopt an up-and-down projection as the neural network, expressed as:
$$g_\phi(z(t), t) = W_{\text{down}} \cdot \sigma(W_{\text{up}} \cdot z(t)) + \xi_t, \quad (13)$$
where $W_{\text{up}} \in \mathbb{R}^{\frac{d}{2} \times \lambda d}$ and $W_{\text{down}} \in \mathbb{R}^{\lambda d \times \frac{d}{2}}$ are the transformation matrices, whose parameter count is determined by the amplification factor $\lambda$; $\sigma$ is the SiLU activation function; and $\xi_t$ is the scalar embedding typifying the scaling procedure at a factor of $t$. Here, we adopt the constant dynamics of Yarn as the $\xi_t$ to speed up convergence, namely
$$\xi_t = \frac{d \log \alpha^{\text{Yarn}}(t)}{dt} = -\left[\frac{2i}{(d-2) \cdot t}\right]_{i=1}^{d/2}. \quad (14)$$

### 3.3 Continuous Length Extrapolation: Train on Short, Test on Long

Continuous PE scaling can serve as a more adaptive and flexible PE scaling method to extend the context length to a given training length $L_{\text{Train}}$. Unlike previous PE scaling methods built on a larger scaling factor, which would lead to inferior performance on the lengths corresponding to smaller counterparts, continuous PE scaling enables non-destructive generalisation to larger scaling factors via adaptive continuous dynamics. Therefore, by simply extending the continuous dynamics beyond the factor $t = L_{\text{Train}} / L$ during training (where we denote the desired scaling factor as $t_{\text{Train}}$), we arrive at the continuous length extrapolation (CLEX) method, which achieves the capability of "training on short, testing on long". Moreover, to learn the neural ODE in Eq. (12) for continuous $t$, we randomly sample $t' \in [1, t_{\text{Train}}]$ for each training step, enabling the model to adapt to a broad scope of frequency bases without overfitting to a specific one. A minimal sketch of this parameterisation and of the ODE solve is given below.
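Putting Eqs. (12)-(14) together, the sketch below implements the learnable dynamics $g_\phi$ and produces $\theta_{t'}$ for a sampled $t'$. The fixed-step Euler solve is an assumption for illustration (the choice of ODE solver is an implementation detail); gradients flow through the unrolled steps, so $\phi$ can be trained jointly with the language-modelling loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FreqDynamics(nn.Module):
    """g_phi of Eq. (13): an up-and-down projection over z(t) = log theta_t,
    plus the Yarn-derived drift xi_t of Eq. (14)."""
    def __init__(self, d, lam=1):
        super().__init__()
        self.up = nn.Linear(d // 2, lam * d, bias=False)
        self.down = nn.Linear(lam * d, d // 2, bias=False)
        self.register_buffer("i", torch.arange(1, d // 2 + 1).float())
        self.d = d

    def forward(self, t, z):
        xi = -2.0 * self.i / ((self.d - 2) * t)        # Eq. (14)
        return self.down(F.silu(self.up(z))) + xi

def integrate(g, z1, t_prime, steps=16):
    """Fixed-step Euler solve of Eq. (12) from t = 1 to t = t_prime."""
    z, t = z1, 1.0
    h = (t_prime - 1.0) / steps
    for _ in range(steps):
        z = z + h * g(t, z)
        t += h
    return torch.exp(z)                                # theta_{t'}
```

During training, one would sample $t' \sim U[1, t_{\text{Train}}]$, compute `theta = integrate(g, torch.log(rope_freqs(d)), t_prime)` (reusing `rope_freqs` from the sketch in §2.2), and feed this basis, together with the enlarged position indices described next, into RoPE for that step.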
Note that the frequency basis is bound to the position index in Eq. (1). This reveals that the aforementioned training involves an inconsistency between the frequency basis and the position indices: the frequency basis varies corresponding to $t' \in [1, t_{\text{Train}}]$, while the position indices are fixed as $\{1, 2, \ldots, L_{\text{Train}}\}$. Here, we propose the position extrapolation strategy to address this inconsistency. Contrary to PI, which shrinks the position indices into the context length, we enlarge the position indices $\{1, 2, \ldots, L_{\text{Train}}\}$ of the trained sequences up to the range $[1, t' \cdot L]$ for each training step. The position indices can be acquired by uniformly scaling to $\{1 \cdot s, 2 \cdot s, \ldots, L_{\text{Train}} \cdot s\}$ where $s = t' \cdot L / L_{\text{Train}}$, or alternatively, by randomly sampling $L_{\text{Train}}$ indices from $[1, t' \cdot L]$. Empirically, we found that random sampling generally performs better. More discussion can be found in §4.2.

During inference, the ideal scenario would be to derive the frequency basis corresponding to each sequence length. However, this approach is computationally demanding. To improve efficiency, we first cache the frequency bases derived from $g_\phi$ for $K$ discrete $t$ values $\{t_k \mid k \in [1, K]\}$. For each sequence with a length of $L_{\text{Infer}}$ during inference, we employ the frequency basis corresponding to the nearest upper bound among $t_k \cdot L$ for $k = 1, \ldots, K$, as sketched below. Through this, our method introduces negligible time cost compared to naive inference of LLMs.
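A minimal sketch of this cached lookup, reusing `rope_freqs`, `FreqDynamics`, and `integrate` from the sketches above; the cached factors are illustrative, and sequences needing a factor beyond the largest cached one simply clamp to it here.

```python
import bisect
import torch

d = 128
g = FreqDynamics(d)                      # from the sketch above
z1 = torch.log(rope_freqs(d))            # z(1) = log theta

T_CACHE = [1.0, 2.0, 4.0, 8.0, 16.0]     # illustrative t_k values
BASIS = {t: integrate(g, z1, t) for t in T_CACHE}

def basis_for_length(L_infer, L_native=4096):
    """Pick the cached basis for the nearest upper bound t_k * L >= L_infer
    (clamping to the largest cached factor, an assumption for this sketch)."""
    k = bisect.bisect_left(T_CACHE, L_infer / L_native)
    return BASIS[T_CACHE[min(k, len(T_CACHE) - 1)]]
```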
### 4 EXPERIMENTS

In this section, we conduct a thorough evaluation of CLEX's performance in terms of handling long contexts and its extrapolation capabilities. We compare our approach against other methods covering both length extrapolation (i.e., ALiBi (Press et al., 2022) and random positions (denoted as RandomPos) (Ruoss et al., 2023)) and PE scaling methods (i.e., PI (Chen et al., 2023) and Yarn (Peng et al., 2023)). We primarily conduct experiments on the LLaMA-2-7B model. For language modelling, we train our model and the baselines on 2B tokens extracted from RedPajama-Book (Computer, 2023), which is collected from the Pile-Books3 (Gao et al., 2020) and PG-19 (Rae et al., 2019) datasets. The performance of the models is assessed based on perplexity and next-token-prediction accuracy, with evaluation sequence lengths up to 64k. Furthermore, we conduct instruction tuning for LLaMA-2-7B using CLEX on the UltraChat dataset (Ding et al., 2023b). The evaluation is performed on the LongBench benchmark (Bai et al., 2023), where we compare our model with GPT-3.5-Turbo and other LLaMA-2-based open-source models designed for handling long context. Further details about baselines and training configuration are discussed in Appx. §A, with more experimental results and ablations in Appx. §B.

### 4.1 Long-Context Language Modelling

**CLEX achieves length extrapolation.** We first report the experimental results of the baselines and CLEX on language modelling, with evaluation lengths from 4k to 64k. As shown in Table 1, our CLEX consistently demonstrates remarkable performance in length extrapolation, being able to extrapolate the context length to more than $4\times$ the training length without any performance drop. Taking CLEX-4k as an example, its PPL on 4k sequences (the training length) is comparable to that on 16k sequences (5.86 vs. 5.87). When evaluated on sequences within 16k, CLEX-4k is on par with or even better than all of the compared methods trained on lengths up to 16k. Moreover, with the increase in training length, our CLEX not only exhibits promising generalisation capability to very long contexts (up to 64k) but also maintains performance on short sequences.

We also found that discrete PE scaling methods (i.e., PI and Yarn) have a self-extending property: training with a scaled frequency basis equips the model with the ability to extrapolate to further-scaled counterparts (see Appx. §B.2 for more discussion). As depicted in Figure 1, however, the extrapolation capability of these methods is limited, accompanied by a significant performance decline even within the native context length. This indicates the inherent challenge of achieving a delicate balance between extrapolation to longer lengths and performance maintenance within short lengths when using a discrete scaling factor. In contrast, CLEX tackles this issue via learnable continuous dynamics, providing more fine-grained extrapolation while preserving the performance within the internal context. Note that while ALiBi may extend further than CLEX trained on 4k sequences (though typically producing inferior results), our experiments reveal that these gains may come at the cost of long-term information, leading to underperformance in practical long-context tasks (see §4.3 for more details).

| Methods | Train Length | 4k PPL | 4k ACC. | 8k PPL | 8k ACC. | 16k PPL | 16k ACC. | 32k PPL | 32k ACC. | 64k PPL | 64k ACC. |
|---------|--------------|--------|---------|--------|---------|---------|----------|---------|----------|---------|----------|
| LLaMA-2 | 4k | 6.04 | 58.18 | 20.54 | 44.50 | >100 | 22.43 | >1000 | 12.70 | >1000 | 10.64 |
| CodeLLaMA | 16k | 7.60 | 54.88 | 7.40 | 55.19 | 7.33 | 55.30 | 15.12 | 44.70 | 52.02 | 31.16 |
| Naive FT | 16k | 5.98 | 58.83 | 5.93 | 58.91 | 5.91 | 58.58 | 18.31 | 43.04 | >100 | 26.05 |
| PI | 16k | 5.90 | 59.05 | 5.71 | 59.44 | 5.72 | 59.87 | 6.05 | 58.5 | 8.75 | 52.02 |
| Yarn ($t=16$) | 16k | 6.50 | 57.28 | 5.71 | 59.57 | 5.73 | 59.87 | 5.99 | 58.13 | 8.51 | 52.62 |
| Yarn ($t=32$) | 16k | 6.61 | 57.12 | 5.94 | 58.27 | 5.96 | 58.04 | 6.08 | 57.73 | 6.22 | 57.98 |
| CL-Scaling | 16k | 24.99 | 37.84 | 5.86 | 59.08 | 5.87 | 59.05 | 10.56 | 50.47 | 41.09 | 34.16 |
| ALiBi | 4k | 6.34 | 58.01 | 6.39 | 57.8 | 6.41 | 57.78 | 6.50 | 57.47 | 6.51 | 56.44 |
| RandomPos | 4k | 5.88 | 58.49 | >100 | 34.23 | >1000 | 18.27 | >1000 | 9.31 | >1000 | 7.44 |
| CLEX | 4k | **5.86** | **59.21** | 5.70 | 59.62 | 5.87 | 58.93 | 14.53 | 47.55 | 30.51 | 35.33 |
| CLEX | 8k | 5.98 | 58.75 | 5.78 | 59.44 | 5.71 | 59.64 | 5.99 | 58.66 | 11.74 | 47.50 |
| CLEX | 16k | 5.88 | 59.21 | **5.68** | **59.73** | **5.52** | **60.28** | **5.55** | **60.06** | **5.64** | **59.94** |

Table 1: Perplexity (PPL) and next-token-prediction accuracy (ACC.) on language modelling, with evaluation lengths from 4k to 64k. We train LLaMA-2-7B using length extrapolation methods on 4k length and PE scaling methods on 16k length, while reporting the results of CLEX trained on 4k, 8k and 16k. CL-Scaling denotes training LLaMA-2-7B with the scaling method of CodeLLaMA but using our training data. The training loss curves are depicted in Figure 9.

Figure 3: Left: the PPLs of CLEX at different evaluation sequence lengths with 7B and 13B parameter sizes. Right: the PPLs of CLEX across variable training data sizes with different parameter sizes and evaluation lengths.

Figure 4: Ablation studies for the continuous dynamics, the sampling strategies, and the $\lambda$ factor in Eq. (13).
To investigate the effectiveness of CLEX over the scale of the base model and training data size, we further port our method to LLaMA-2-13B. As depicted in Figure 3, when trivially extending the base model scale from 7B to 13B, our CLEX demonstrates an increased capacity to extrapolate to longer context lengths. Specifically, the extrapolation ability of CLEX-13B trained on 4k length approaches that of CLEX-7B trained on 8k. The training data scale, more surprisingly, does not significantly impact the extrapolation capability of CLEX: models trained with 0.25B or 2B tokens at a 4k sequence length achieve comparable PPLs when evaluated on 16k or 32k lengths in Figure 3, indicating negligible margins from a larger training data size. This also implies that CLEX can efficiently extend the context length of LLMs through minimal training steps, resembling PI and Yarn. Based on these findings, we propose a scaling law for CLEX: to scale the context length of LLMs to moderately desired lengths (e.g., 16k → 64k), one should proportionally enlarge the training sequence lengths (e.g., 4k → 16k). For scaling the context length up to considerably long lengths (e.g., >200k), the parameter size of the base model should be correspondingly increased while maintaining the training length, since such contexts may occupy a larger footprint than the model parameters. Note that scaling the training data does not directly affect the extrapolation ability of CLEX, but may be implicitly incorporated when scaling the base pre-trained LLMs.

### 4.2 Ablation Study

We now conduct three types of ablations to investigate the efficacy of the components in CLEX:

**Continuous dynamics.** To learn the continuous dynamics using a neural ODE, we adopt a distinct training approach that involves sampling the scaling factor $t$ for each data batch. Here we seek to explore whether the exceptional extrapolation ability of CLEX is solely derived from the variable $t$ rather than the continuous dynamics. We employ the discrete Yarn method with $t = 16$, which undergoes the same training procedure as CLEX but removes the ODE parameters, serving as a discrete baseline. In Figure 4 (left), we discover that the discrete approach equipped with the randomly sampled $t$ significantly underperforms our CLEX, indicating that the learnable continuous dynamics are essential to CLEX's extrapolation ability.

**Position extrapolation.** We adopt the position extrapolation strategy, which extends the scope of position indices in training sequences by sampling from a broader range, to reconcile the inconsistency between the frequency basis and position indices during training. In this study, we examine the impact of the two sampling strategies (uniform or random) and contrast them with naive position indices. The results in Figure 4 underscore the efficacy of position extrapolation in CLEX, without which the extrapolation ability of models declines significantly. Furthermore, random sampling performs slightly better than uniform sampling, so we adopt it across all experiments.

**The parameter scale of the ODE.** We also study the impact of the parameter size of the neural ODE in CLEX. The parameter size is determined by $\lambda$, namely the amplification factor in Eq. (13). In Figure 4, we plot the results of CLEX with $\lambda = 1, 2, 4$, where they achieve similar performance. Note that the parameter size of the neural ODE in CLEX is quite small even when $\lambda = 4$, as the dimension $d$ in Eq. (13) is usually equal to 128.
Although it is possible to enhance CLEX with a larger $\lambda$ (e.g., 32), we set $\lambda = 1$ in all experiments for the minimal effect on inference latency.

### 4.3 Evaluation on Long-Context Benchmark

To ascertain the comprehensive performance of CLEX in real-world scenarios, we further conduct an evaluation on the zero-shot LongBench benchmark. This benchmark encompasses a broad range of tasks, such as question answering, summarization, and code completion, where the evaluation length ranges from 5k to 15k. We perform a pilot instruction tuning for LLaMA-2-7B by employing CLEX on the UltraChat dataset, with a sequence length of 4k. During inference, we harness all models to tackle a context length of 16k, thereby ensuring the comprehensive exploitation of contextual information in the tasks. As depicted in Figure 5, we present the average scores of each domain in LongBench for CLEX, in comparison to the GPT-3.5-Turbo-16k model and strong open-source LLMs like LongChat-v1.5-7B-32k and CodeLLaMA-7B-16k. Generally, when trained with sequences of 4k length, CLEX holds its own against open-source LLMs that are trained on lengths up to 32k. In the specific domains of Summarization, Few-shot Learning, and Code Completion, CLEX on LLaMA-2-7B remains competitive with or even surpasses GPT-3.5-Turbo-16k. We note that Baichuan-13B-4k, pre-trained with ALiBi (Press et al., 2022), demonstrates marked underperformance on LongBench despite its larger parameter size. Additionally, similarly poor results are achieved by ALiBi when applying it upon LLaMA-2-7B using the same training procedure as CLEX (see Appx. §B.5). This could likely be attributed to ALiBi's overemphasis on local context through the attention bias, which, while advantageous for language modelling, restricts access to long-context information in practical tasks. In contrast, CLEX directly extends the context length of LLMs without imposing any constraints on context, and consistently achieves superior extrapolation ability on both language modelling and LongBench. This substantiates the considerable potential of CLEX to serve as a state-of-the-art approach for extrapolating the context length of LLMs to excel in long-context applications. In addition, we highlight that our CLEX introduces merely minuscule inference latency: given a context length of 16k in LongBench with a generation length of 512, the generation throughput of our CLEX-7B is comparable to that of LLaMA-2-7B (27.8 tokens/s vs. 28.3 tokens/s, on a single A100), when using the cache mechanism introduced in §3.3.

### 5 RELATED WORK

**Hierarchical Architecture / Sparse Attention.** To overcome the quadratic complexity of attention, Dai et al. (2019) propose Transformer-XL, which handles long sequences at the segment level with a Transformer, with these segments interacting through a recurrence mechanism. The Recurrent Memory Transformer (Bulatov et al., 2022) refines this mechanism by incorporating special memory tokens into the recurrence, and is capable of scaling the context length to the millions (Bulatov et al., 2023). On the other hand, Child et al. (2019) and Beltagy et al. (2020) proposed using sparse attention to circumvent full access to long sequences, hence reducing the complexity. Sparse attention has been adopted by Ding et al. (2023a) to scale the context length of transformers into the billions.
However, these methods sacrifice the utilisation of the entire sequence during attention, resulting in an inevitable loss of some contextual information. Additionally, modifications to the model architecture make these methods challenging to apply to existing pre-trained LLMs. Conversely, our CLEX serves as a drop-in component for LLMs that can efficiently extend the capacity of models to tackle entire long sequences without explicitly dropping contextual information.

**Length Extrapolation.** Building on the foundation laid by ALiBi (Press et al., 2022), a series of works (Sun et al., 2023; Chi et al., 2022; 2023) seek to train Transformer-based models on a short length while directly testing on longer counterparts. These methods substitute the position embedding with biases introduced into the attention scores, thereby incorporating positional information. Notably, the bias typically favours closer tokens. This mechanism intuitively amplifies the local context for each token at the expense of distant information. Consequently, these length-extrapolation methods encounter challenges in effectively handling long contexts in practical applications (Pal et al., 2023). In contrast, our CLEX demonstrates remarkable effectiveness in practical tasks such as summarization, indicating de facto extrapolation ability for applications.

**Position Embedding (PE) Scaling.** Recent research has sought to extend the context length of Transformers through the scaling of the extensively employed RoPE. Specifically, Chen et al. (2023) proposed position interpolation, a method that efficiently extends the context window by scaling the position indices within RoPE. In a similar vein, Peng et al. (2023) and Rozière et al. (2023) opted to scale the frequency basis, achieving superior performance. However, these methods necessitate training (or fine-tuning) on the desired extended length. As a result, they exhibit a limited ability to extrapolate beyond the trained length and even suffer from performance drops within shorter lengths. In CLEX, we generalise the discrete PE scaling to a continuous counterpart, hence uniformly extrapolating the context length of LLMs while preserving performance within short lengths.

### 6 CONCLUSION

We have presented Continuous Length EXtrapolation (CLEX), a novel approach that efficiently extrapolates the context length of Large Language Models (LLMs) to over $4\times$ the training (fine-tuning) length without any decline in performance. CLEX utilises a neural ODE to learn the continuous dynamics over the length scaling factor during PE scaling, hence enabling fine-grained extension of the frequency basis in RoPE. We conduct thorough experiments to investigate the effectiveness of CLEX compared to a variety of strong LLMs, covering the language modelling task and the LongBench benchmark. The experimental results demonstrate the exceptional extrapolation ability of CLEX, where our CLEX trained with a sequence length of 4k holds the potential to remain competitive with open-source long-context LLMs (e.g., CodeLLaMA) trained on lengths up to 32k. These results highlight the potential of CLEX as a state-of-the-art approach for efficiently extrapolating the context length of LLMs, paving the way for advancements in long-context applications. By scaling up the base model size, we found that CLEX can be correspondingly enhanced and is subsequently capable of extrapolating to a longer context length.
This property indicates the tempting effectiveness of CLEX in the era of LLMs. ACKNOWLEDGEMENTS This work was substantially supported by DAMO Academy through DAMO Academy Research Intern Program. Shangsong Liang was supported by the National Natural Science Foundation of China (Grant No. 61906219) and the Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates. REFERENCES Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. Longbench: A bilingual, multitask benchmark for long context understanding. 2023. URL https://arxiv.org/abs/2308.14508 Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv:2004.05150, 2020. URL https://arxiv.org/abs/2004.05150 Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, May 2022. URL https://aclanthology.org/2022.bigscience-1.9 Aydar Bulatov, Yuri Kuratov, and Mikhail Burtsev. Recurrent memory transformer. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=Uynr3iPhksa Aydar Bulatov, Yuri Kuratov, and Mikhail S. Burtsev. Scaling transformer to 1m tokens and beyond with rmt. 2023. URL https://arxiv.org/abs/2304.11062 Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. 2023. URL https://arxiv.org/abs/2306.15595 Ta-Chung Chi, Ting-Han Fan, Peter J Ramadge, and Alexander Rudnicky. Kerple: Kernelized relative positional embedding for length extrapolation. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/37a413841a614b5414b333585e7613b8-Paper-Conference.pdf Ta-Chung Chi, Ting-Han Fan, Alexander Rudnicky, and Peter Ramadge. Dissecting transformer length extrapolation via the lens of receptive field analysis. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, July 2023. URL https://aclanthology.org/2023.acl-long.756 Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. 2019. URL https://arxiv.org/abs/1904.10509 Together Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data
o2IEmeLL9r
I understand the motivation behind the KL weight and that neither alpha=0 (no prior) nor too large of a weight are desirable. However, it appears that the authors choose to train the high-level policy from scratch and only leverage the goal prior to guide exploration. Given that the goal prior and high-level policy share the same action space, why do the authors decide against initializing the high-level policy as the goal prior and simply finetuning it using the proposed objective (reward + KL)?
Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning

Haoqi Yuan¹, Zhancun Mu², Feiyang Xie², Zongqing Lu¹,³†

¹School of Computer Science, Peking University
²Yuanpei College, Peking University
³Beijing Academy of Artificial Intelligence

†Correspondence to Zongqing Lu <zongqing.lu@pku.edu.cn>.

Abstract

Pre-training on task-agnostic large datasets is a promising approach for enhancing the sample efficiency of reinforcement learning (RL) in solving complex tasks. We present PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization. PTGM involves pre-training a low-level, goal-conditioned policy and training a high-level policy to generate goals for subsequent RL tasks. To address the challenges posed by the high-dimensional goal space, while simultaneously maintaining the agent's capability to accomplish various skills, we propose clustering goals in the dataset to form a discrete high-level action space. Additionally, we introduce a pre-trained goal prior model to regularize the behavior of the high-level policy in RL, enhancing sample efficiency and learning stability. Experimental results in a robotic simulation environment and the challenging open-world environment of Minecraft demonstrate PTGM's superiority in sample efficiency and task performance compared to baselines. Moreover, PTGM exemplifies enhanced interpretability and generalization of the acquired low-level skills. Project page: https://sites.google.com/view/ptgm-iclr/.

1 Introduction

Deep reinforcement learning (RL) has achieved great success in solving sequential decision-making tasks (Silver et al., 2016; Vinyals et al., 2019; Hafner et al., 2020). However, many real-world domains, such as indoor robotic tasks (Brohan et al., 2023; Myers et al., 2023) and open-world games (Team et al., 2021; Johnson et al., 2016), present significant challenges to RL. The complexity and long-horizon nature of these tasks make it difficult for RL to explore and receive positive rewards, thereby resulting in low sample efficiency. In recent years, we have increasing accessibility to vast datasets of robotic manipulations (Li et al., 2023) and human gameplay videos from the Internet (Baker et al., 2022; Fan et al., 2022). Pre-training on such datasets to improve RL has emerged as an important research topic. These large-scale datasets are often not tailored for specific tasks. For example, VPT (Baker et al., 2022) gathers extensive data of human players playing the open-world game Minecraft, where the players explore freely rather than solve a specific task. There is potential to learn models of agent behaviors (Baker et al., 2022; Ramrakhya et al., 2023) and skills (Pertsch et al., 2021a;b) from these datasets to aid RL. We study pre-training low-level behaviors and skills, and then training high-level policies with RL for downstream tasks. This approach lies in hierarchical RL (Sutton et al., 1999) and provides temporal abstraction for the RL policies, thereby improving sample efficiency.
Existing methods (Pertsch et al., 2021a; Rao et al., 2021; Shah et al., 2021; Pertsch et al., 2021b; Shi et al., 2023) study pre-training low-level policies in low-dimensional RL environments (Fu et al., 2020) or narrow robotic domains (Gupta et al., 2020), but they have not scaled to high-dimensional, complex open-world environments (Li et al., 2023; Johnson et al., 2016) and large datasets (Baker et al., 2022; Fan et al., 2022). Some methods (Pertsch et al., 2021a;b; Shi et al., 2023) model low-level skills with latent variables using variational inference. However, they fail to model the complex action sequences in large datasets such as Minecraft (Baker et al., 2022), where both the sequence length and the action space are large. Recent works (Baker et al., 2022; Lifshitz et al., 2023) reveal that policies with a transformer architecture (Vaswani et al., 2017) trained with behavior cloning on large datasets can effectively model various behaviors in Minecraft. Steve-1 (Lifshitz et al., 2023) trains a goal-conditioned policy that can exhibit various short-term behaviors conditioned on goals in Minecraft. Inspired by this, we introduce Pre-Training Goal-based Models (PTGM) for sample-efficient RL.

Our method, PTGM, pre-trains a goal-conditioned policy via behavior cloning and hindsight relabeling on a large task-agnostic dataset. To learn downstream tasks with RL, we train a high-level policy that outputs a goal at each step, with the goal-conditioned policy executing for several time steps in the environment based on this goal. Training RL in a high-dimensional continuous action space is notably sample-inefficient (Lillicrap et al., 2015). If we follow the existing methods (Pertsch et al., 2021a;b) and attempt to mitigate this by reducing the dimensionality of the latent variables in the pre-trained model, the model will fail to learn diverse behaviors in the large dataset due to the decreased capacity. In our approach, the high-dimensional goal space presents a similar challenge. However, the generalization ability of the goal-conditioned policy pre-trained on large datasets enables us to compress the goal space without significantly diminishing the capacity. Consequently, we introduce a clustering approach to transform the goal space into a discrete high-level action space. Additionally, we propose to pre-train a goal prior model which predicts the distribution of future goals given the current state. The goal prior model provides an intrinsic reward with the KL divergence to the high-level policy in RL, regularizing the behavior of the agent to improve exploration.

We evaluate our method in a robotic manipulation environment, Kitchen (Gupta et al., 2020), which requires solving subtasks sequentially, and the challenging open-world benchmark Minecraft (Fan et al., 2022), which contains diverse long-horizon tasks that are challenging for RL. PTGM outperforms baselines in terms of sample efficiency and success rates. Ablation studies demonstrate the necessity of each component in PTGM. Additionally, we demonstrate that PTGM has advantages in the interpretability and generalizability of the learned low-level skills.

In summary, the primary contributions of this work are:

• We propose pre-training goal-based models for RL, which holds advantages in sample efficiency, learning stability, interpretability, and generalization of the low-level skills compared to existing methods.
• We propose the method of clustering in the goal space and pre-training the goal prior model, providing effective approaches for enhancing the sample efficiency of training high-level policies given a pre-trained goal-conditioned policy.

• Our experimental results validate the effectiveness of our method, demonstrating its capability to learn on diverse domains and solve the challenging Minecraft tasks efficiently.

2 RELATED WORK

Pre-Training for RL. This line of research can be categorized into two main settings: pre-training from task-specific datasets and pre-training from large task-agnostic datasets. For the former, imitation learning approaches (Gao et al., 2018; Ramrakhya et al., 2023) pre-train policies for initialization in RL, offline RL approaches (Lee et al., 2022; Zhu et al., 2023) pre-train policies and value functions, and transformers for RL (Wu et al., 2023; Sun et al., 2023; Escontrela et al., 2023; Xie et al., 2023) pre-train policies, transitions and state representations via sequence modeling. For the latter, Sermanet et al. (2018); Laskin et al. (2020); Aytar et al. (2018) pre-train state representations for image observations, Pertsch et al. (2021a;b); Shi et al. (2023); Rosete-Beas et al. (2023) learn low-level policies for temporal abstraction, and other works pre-train intrinsic rewards (Bruce et al., 2022; Zhou et al., 2023) or world models (Yuan et al., 2021; Seo et al., 2022). In this paper, we study pre-training low-level skills from task-agnostic datasets. Recent works (Baker et al., 2022; Lifshitz et al., 2023) demonstrate that imitation learning and goal-conditioned learning on a large task-agnostic dataset can acquire diverse skills effectively, motivating us to pre-train goal-based models from data.

Goal-Conditioned RL. Goal-conditioned RL (GCRL) (Liu et al., 2022) solves goal-augmented MDPs (Schaul et al., 2015) to achieve different goals. Schaul et al. (2015); Plappert et al. (2018); McCarthy & Redmond (2021) study training agents that can handle various goals and generalize across different goals in the multi-task learning setting. Andrychowicz et al. (2017); Chane-Sane et al. (2021); Zhu et al. (2021) address the challenges of RL with sparse rewards via GCRL. Nachum et al. (2018); Li et al. (2022) focus on temporal abstraction with goal-conditioned policies, developing hierarchical agents that can operate over high-level goals. Other works study goal representation learning in forms of images (Srinivas et al., 2018; Islam et al., 2022; Lifshitz et al., 2023) and languages (Myers et al., 2023; Cai et al., 2023) for GCRL, or formulate offline RL as return-conditioned RL (Chen et al., 2021; Janner et al., 2021). Without RL, some methods perform goal-conditioned learning on datasets to build agents that can follow language instructions (Mezghani et al., 2023) or multi-modal prompts (Jiang et al., 2022). Our study falls within training goal-conditioned policies on datasets, providing temporal abstractions for downstream RL.

Hierarchical RL. Hierarchical RL (HRL) leverages temporal abstractions for sample-efficient learning in both single-task (Sutton et al., 1999; Kulkarni et al., 2016) and multi-task settings (Tessler et al., 2017; Veeriah et al., 2021), extensively integrating with model-based RL (Hafner et al., 2022), multi-agent RL (Mahajan et al., 2019), and imitation learning (Sharma et al., 2019b). We focus on methods that pre-train low-level policies for downstream RL.
This includes unsupervised skill discovery with information-based objectives (Gregor et al., 2016; Sharma et al., 2019a; Strouse et al., 2021) and training skills from offline datasets (Pertsch et al., 2021a; Rao et al., 2021; Shah et al., 2021; Pertsch et al., 2021b; Shi et al., 2023). In this paper, we leverage a pre-trained goal-conditioned policy to enable temporal abstraction and study methods to improve sample efficiency for training the high-level policy with RL.

3 PRELIMINARIES

3.1 Problem Formulation

A task can be formalized as a Markov Decision Process (MDP), defined by a tuple $M = (S, A, P, \rho, R, \gamma)$ representing states, actions, the transition probability of the environment, the initial state distribution, the reward function, and the discount factor. Starting from the initial state, at each time step the agent performs an action, then the environment transitions to the next state and returns a reward. Reinforcement learning (RL) learns a policy $\pi_\theta(a|s)$ to maximize the discounted cumulative reward $J(\theta) = \mathbb{E}_{\pi_\theta}[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)]$. RL optimizes the policy by learning from online collected data $\{(s_t, a_t, r_t, s_{t+1})\}$. For partially observable MDPs (Kaelbling et al., 1998) with observations $o \in O$, we adopt the same notation as for MDPs, using $s_t$ to represent $o_{0:t}$.

We study tasks that are hard in exploration, where it is non-trivial for the agent to reach states that bring high rewards through random exploration. RL exhibits low sample efficiency on such tasks, meaning that it needs to collect a very large number of samples from the environment to improve task success rates. We assume access to a task-agnostic dataset $D = \{\tau = \{(s_i, a_i)\}_{i=0}^{T}\}$ collected in the same environment, in which the action sequences depict the non-trivial behaviors over time (i.e., $P_{\tau \sim D}(a_{t:t+k}|s_t) \neq \prod_{i=t}^{t+k} P_{\tau \sim D}(a_i|s_i)$) generated by the agent (e.g., human players) while performing various tasks in the environment. Though trajectories in the dataset are sub-optimal for solving downstream tasks, the short-term behaviors $a_{t:t+k}$ in the dataset represent meaningful skills and can be stitched sequentially to accomplish a task (Badrinath et al., 2023).

We now formulate pre-training skills to provide temporal abstractions for RL as pre-training a model $P_\phi(a_{t:t+k}|s_t, z_t)$ using $D$, where the diverse possible behaviors are modeled in the variable $z \in Z$. To train on a task, RL can be performed on a high-level policy $\pi_\theta(z|s)$, which acts on a larger time scale $k$, while the pre-trained model $P_\phi$ decodes $a_{t:t+k}$ to act in the environment. The pre-trained model increases the probability of RL exploring towards task success by compressing the multi-step action space $A^k$ into a compact behavior space $Z$, thereby improving sample efficiency (a short rollout sketch of this scheme appears below).

3.2 Goal-conditioned Policy

The pre-trained model is fixed during the RL phase, requiring it to have the capacity to model all the behaviors in the dataset and to decode action sequences accurately. We find that previous methods (Pertsch et al., 2021a;b; Shi et al., 2023) that model the low-level behaviors with continuous latent variables struggle to model the complex distribution of action sequences in Minecraft datasets, which feature large amounts of data, long action sequences for exhibiting certain behaviors, and a large action space (more analysis in Appendix B.2).
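As referenced in §3.1 above, the temporal-abstraction scheme admits a minimal rollout sketch: the high-level policy picks a behavior $z$ every $k$ steps, and the frozen pre-trained model decodes it into primitive actions. The Gym-style `env` and the `act` interfaces below are our own assumptions for illustration, not PTGM's actual code.

```python
def rollout_with_temporal_abstraction(env, high_level, low_level, k=100, max_steps=3000):
    """The high-level policy decides once every k steps in the compact space Z;
    the frozen pre-trained low-level model decodes z into primitive actions."""
    s = env.reset()
    total_reward, steps, done = 0.0, 0, False
    while not done and steps < max_steps:
        z = high_level.act(s)              # one high-level decision
        for _ in range(k):                 # k primitive environment steps
            a = low_level.act(s, z)
            s, r, done, _ = env.step(a)
            total_reward += r
            steps += 1
            if done:
                break
    return total_reward
```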
Recent work (Lifshitz et al., 2023) has demonstrated that training a goal-conditioned policy via behavior cloning with a transformer architecture (Vaswani et al., 2017) can stably learn various behaviors from these data. Therefore, we employ a similar approach to pre-train a goal-conditioned model. A goal refers to the final state of a sub-trajectory in the dataset, representing the outcome of the short-term behavior. The goal-conditioned model $P_\phi$ uses the state space $S$ as the space $Z$ and predicts the action for one step:

$$P(a_{t:t+k}|s_t, s^g) = \int_{s_{t+1},\ldots,s_{t+k}} ds_{t+1} \ldots ds_{t+k} \prod_{i=t}^{t+k} P_\phi(a_i|s_i, s^g)P(s_{i+1}|s_i, a_i),$$

where $s^g \in S$ is the goal state. Starting from $s_t$, $P_\phi$ aims at reaching the goal state $s^g$ after executing the actions sampled from it for $k$ steps. To learn $P_\phi$ from data, we use a variant of hindsight relabeling (Andrychowicz et al., 2017) to label each state-action sample in the dataset with a future state as the goal state. For a $k$-step subsequence $\tau = (s_t, a_t, \ldots, s_{t+k}, a_{t+k})$ in an episode, we label each sample $(s_i, a_i), t \leq i \leq t + k$, with the goal state $s^g = s_{t+k}$. We train $P_\phi$ with behavior cloning, minimizing the negative log-likelihood of action prediction:

$$L(\phi) = \mathbb{E}_D \left[ -\log P_\phi(a_i|s_i, s^g) \right].$$

In practice, to enable the model to reach goals after different numbers of time steps, we randomly sample $k$ within a range. When states are compact vectors representing the physical state of objects in the environment, we use the raw state as the goal. When the environment is a POMDP with image observations, we use the embedding from a pre-trained image encoder as the goal. For instance, Steve-1 (Lifshitz et al., 2023) uses the embedding of $o_{t+k-16:t+k}$ from MineCLIP's vision encoder (Fan et al., 2022) in Minecraft.

4 METHOD

In this section, we present the proposed method PTGM, which utilizes the pre-trained goal-conditioned policy $P_\phi(a_t|s_t, s^g)$ to provide temporal abstractions for RL in downstream tasks. In RL, we train a high-level policy $\pi_\theta(s^g|s_t)$ which outputs a goal state to guide the low-level goal-conditioned policy $P_\phi$ to act in the environment for $k$ steps. To enhance the sample efficiency and stability of RL, we propose a goal clustering method and a pre-trained goal prior model. Figure 1 gives an overview of PTGM.

4.1 Clustering in the Goal Space

The goals $s^g \in S$ from the high-dimensional state space introduce a high-dimensional continuous action space for the high-level policy, making RL sample-inefficient. To tackle this challenge, we propose to cluster the states in the dataset to discretize the goal space, constructing a discrete action space for the high-level policy. We sample a large set of states from $D$, apply t-SNE (Maaten & Hinton, 2008) to reduce the dimension of the states, and apply a clustering algorithm such as K-Means (Lloyd, 1982) to group similar goal states together and output $N$ clusters. The discretized goal space is represented with $G = \{i : s^g_i\}_{i=1}^{N}$, where $s^g_i$ is the goal state of the $i$-th cluster center. This converts the action space of the high-level policy into a discrete action space $A^h = [N]$ and constrains the high-level policy to output goals in the cluster centers. We observe that compressing goal states into the discrete goal space does not significantly decrease the agent's model capacity to perform various behaviors. The reasons come from two aspects.
Firstly, the clustering algorithm groups similar goals together. The cluster center can represent the goals in its cluster that correspond to similar agent behaviors. Secondly, the goal-conditioned model pre-trained on the large dataset as the low-level policy can elicit generalizability on goals. The model can extract behavior information in the goal state, thereby generating correct behaviors even when the provided goal is distant from the current environment state. Given the same goal, the model can exhibit diverse behaviors in different states. These claims are substantiated in our experimental results in Section 5.4. Therefore, using the discrete goal space, the low-level policy still has the capacity to cover various behaviors in the dataset.

4.2 Pre-Training the Goal Prior Model

In RL, the agent is able to perform smooth and reasonable behaviors within $k$ consecutive steps given a goal. However, the high-level policy lacks prior knowledge to provide reasonable goals, and thereby has to uniformly explore the goal space to learn the task. We propose to learn this prior knowledge from the dataset by pre-training a goal prior model, improving the sample efficiency and stability of training the high-level policy. The goal prior model $\pi^p_\psi(a^h|s)$ has the same structure as the high-level policy, where $a^h \in A^h$ is an index into the goal cluster centers. This model is trained to predict the distribution of future goals given the current state, using the clustered goal space $G$. Similar to training the goal-conditioned model, we sample states and subsequent goal states $(s_t, s^g_t)$ from the dataset. In the discretized goal space $G$, we match the goal that is closest to $s^g_t$ based on cosine similarity: $a^h = \arg\max_{i \in [N]} \left( \frac{s^g_t \cdot s^g_i}{\|s^g_t\| \|s^g_i\|} \right)$. The training objective for the goal prior model is to minimize the negative log-likelihood of goal prediction:

$$L(\psi) = \mathbb{E}_D \left[ -\log \pi^p_\psi(a^h|s_t) \right].$$

The pre-trained goal prior model acts as a regularizer for the high-level policy during RL, providing intrinsic rewards that guide the agent's exploration towards possible goals in the dataset.

4.3 Reinforcement Learning with PTGM

Given the goal clusters $G$, the pre-trained low-level policy $P_\phi$, and the goal prior model $\pi^p_\psi$, we proceed with training the high-level policy using RL for downstream tasks. At each time step, the high-level policy $\pi_\theta(a^h|s)$ selects an index of a goal $s^g_{a^h}$ in the clustered goal space. The fixed low-level policy acts in the environment for $k$ steps conditioned on $s^g_{a^h}$. The high-level policy is updated based on the environment rewards and the intrinsic rewards from the goal prior model. The overall objective for training the high-level policy is to maximize the expected return:

$$J(\theta) = \mathbb{E}_{\pi_\theta} \left[ \sum_{t=0}^{\infty} \gamma^t \left( \sum_{i=kt}^{k(t+1)} R(s_i, a_i) - \alpha D_{KL} \left( \pi^p_\psi(a^h|s_{kt}) \,\big\|\, \pi_\theta(a^h|s_{kt}) \right) \right) \right],$$

where $t$ represents the number of steps for the high-level policy and $\alpha$ is a hyperparameter balancing the environmental rewards and the intrinsic rewards. By optimizing this objective, the high-level policy learns to select goals that lead to task success and align with the behaviors in the dataset, achieving sample-efficient RL with a discrete action space. In principle, any online RL algorithm can be used to train the high-level policy in downstream tasks.
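To make §4.1–§4.3 concrete, here is a minimal sketch of the goal clustering, the cosine-similarity matching used to build goal prior targets, and the KL-shaped high-level reward. scikit-learn and PyTorch are assumed dependencies; the way we map a low-dimensional cluster center back to a raw goal state (taking the nearest member) is our own choice, since the text does not spell that detail out here.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def build_goal_clusters(goal_states: np.ndarray, n_clusters: int = 100) -> np.ndarray:
    """Discretize the goal space G: t-SNE for dimension reduction, K-Means for
    clustering, then one representative raw goal state per cluster."""
    low_dim = TSNE(n_components=2).fit_transform(goal_states)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(low_dim)
    centers = []
    for i in range(n_clusters):
        members = np.flatnonzero(km.labels_ == i)
        dists = np.linalg.norm(low_dim[members] - km.cluster_centers_[i], axis=1)
        centers.append(goal_states[members[dists.argmin()]])  # nearest member as s^g_i
    return np.stack(centers)

def match_goal(goal: torch.Tensor, centers: torch.Tensor) -> int:
    """Goal prior target: index of the cluster center closest to s^g_t in cosine similarity."""
    sims = F.cosine_similarity(goal.unsqueeze(0), centers, dim=-1)
    return int(sims.argmax())

def kl_shaped_reward(k_step_return: float, prior_logits: torch.Tensor,
                     policy_logits: torch.Tensor, alpha: float = 0.05) -> torch.Tensor:
    """High-level reward per the objective above: the k-step environment return
    minus alpha * KL(pi^p(.|s) || pi_theta(.|s))."""
    kl = F.kl_div(F.log_softmax(policy_logits, dim=-1),   # log q (high-level policy)
                  F.log_softmax(prior_logits, dim=-1),    # log p (goal prior)
                  log_target=True, reduction="sum")
    return k_step_return - alpha * kl
```

The value $\alpha = 0.05$ used as the default here is one of the settings the ablation in §5.3 reports as performing well.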
5 EXPERIMENTS 5.1 ENVIRONMENTS AND DATASETS We setup experiments on the following two challenging benchmarks with long-horizon tasks. More details about the environments and our implementations are presented in Appendix A. **Kitchen.** A simulated robotic manipulation environment based on Gupta et al. (2020), where the agent controls a 7-DoF robot arm to manipulate objects in a kitchen. The dataset is provided in the D4RL benchmark (Fu et al., 2020), consisting of 150K transition samples. In pre-training, we use the environment state as the goal, which is a vector representing poses of the robot and objects. In downstream RL, we train on a long-horizon task consisting of 4 subtasks. The agent receives a sparse, binary reward for each successfully executed subtask. We use the SAC (Haarnoja et al., 2018) algorithm implemented in SPiRL (Pertsch et al., 2021a), replacing the max-entropy objective with minimizing KL-divergence to the goal prior model. **Minecraft.** A popular open-world game that has been regarded as a challenging benchmark for RL (Guss et al., 2019; Baker et al., 2022; Fan et al., 2022). We adopt a video dataset of 39M frames labeled with actions introduced in Baker et al. (2022), which records the human players playing the game. In downstream RL, we use 5 tasks in the MineDojo simulator (Fan et al., 2022) which take thousands of steps to complete and are extremely difficult in exploration. The environment observations are images and the action space is keyboard and mouse operations discretized into $8641 \times 121$ choices. The agent receives a binary task success reward and the MineCLIP reward introduced in Fan et al. (2022). In pre-training, we use the MineCLIP embedding (Fan et al., 2022) of 16 consecutive frames as the goal. Then, we use PPO (Schulman et al., 2017) to train downstream tasks, optimizing a weighted sum of the extrinsic reward and the KL reward. 5.2 EVALUATION To evaluate the performance of PTGM, we compare it to several baselines from recent works. More details on implementing these baselines are presented in Appendix B. **SPiRL (Pertsch et al., 2021a).** This work pre-trains a sequential VAE (Zhu et al., 2020) for $k$-step action sequence $q(z|s_t, a_{t:t+k}), p(a_{t:t+k}|s_t, z)$ along with a skill prior $p_\phi(z|s_t)$, modeling skills with a continuous latent variable $z$. In RL, the high-level policy outputs a skill $z$, and then the action decoder $p$ decodes the action sequence and executes for $k$ steps. For Kitchen, we run the released code where $k = 10$. For Minecraft, there is a trade-off for selecting $k$. If we set $k = 100$ which is the same as PTGM, the model fails to reconstruct the long action sequences accurately. Otherwise, if we keep $k = 10$, the downstream RL can be less sample-efficient since less temporal abstractions are provided. Therefore, we run experiments on both settings, present the best result for each task in the paper, and leave the results of both settings in Appendix D. **TACO (Rosete-Beas et al., 2023).** This work pre-trains a low-level policy $\pi(a_t|s_t, z)$ conditioned on the continuous latent variable $z$ from a learned skill posterior $q(z|\tau)$, regularizes $q$ with KL-divergence to a skill prior $p(z|s_t, s_T)$, and proposes offline RL methods to train the high-level policy. We take an online variant of TACO, where the skill prior $p$ is used to initialize and regularize the high-level policy for online RL. For each downstream task, we manually provide the task goal $s_T$. 
**VPT-finetune.** VPT (Baker et al., 2022) is a foundation model for Minecraft that trains a behavior-cloning (BC) policy $\pi(a_t|o_{0:t})$ on a game playing dataset of 70K hours from the Internet. For Minecraft tasks, we train RL to either finetune the full model (Baker et al., 2022) or finetune the transformer adapters (Nottingham et al., 2023). We report the better results for each task in the paper and leave the results of both methods in Appendix D. Note that PPO from scratch fails on all downstream tasks in Minecraft according to Fan et al. (2022); Yuan et al. (2023). For Kitchen, we use the same approach to implement a baseline named BC-finetune. Steve-1 (Lifshitz et al., 2023). This work builds an instruction-following agent in Minecraft. It first trains a goal-conditioned policy $\pi(a_t|o_{0:t}, g)$ on the contractor dataset, which is the same as the low-level policy in PTGM. Then, Steve-1 adopts a language-labeled dataset to map instructions to goals. We test the zero-shot performance of Steve-1 in our tasks by providing task instructions. We measure the sample efficiency and task performance for all the methods with the training curves of success rates during RL. Figure 2 shows the performance of PTGM and all the baselines. In Kitchen, PTGM is able to solve 3 subtasks with high probability, exhibiting higher sample efficiency and comparable task success rates to SPiRL. Both PTGM and SPiRL outperform BC-finetune a lot, indicating that training RL with temporal abstraction provided by the pre-trained models improves sample efficiency significantly. In Minecraft, we observe that PTGM is the only method that achieves good success rates on all tasks after training for 1M environment steps, demonstrating its high sample efficiency and the strong capability to complete diverse surface and underground tasks. Results in most tasks show that the sample efficiency of PTGM greatly exceeds SPiRL, TACO, and VPT-finetune. PTGM is also the only method that completes the challenging Iron-ore task, which requires more than 1K exploration steps to obtain the rare item underground. VPT-finetune fails in the Cobblestone and Iron-ore task. We believe one reason is that during RL, the policy without temporal abstraction may quickly forget the skill to repeat the same action consecutively to break a block. For Steve-1, we observe that it has the strong ability to cut trees; however, its performance on the other four tasks falls short. PTGM enhances the capabilities of Steve-1 by learning a high-level policy based on it. We observe that SPiRL performs well in the simple domain Kitchen with small datasets, but struggles to learn challenging Minecraft tasks with large datasets and high-dimensional observations and actions, exhibiting worse performance compared to PTGM. We argue that the reasons for SPiRL underperforming PTGM in Minecraft include: the VAE in SPiRL struggles to reconstruct the long, high-dimensional action sequences; SPiRL trains high-level policies with RL in a continuous action space, while PTGM is capable of encoding rich and generalizable behaviors in a discrete action space, making downstream RL more sample-efficient. We verify the former reason in Appendix C and demonstrate in Section 5.4 that PTGM’s discrete goals exhibit the rich behaviors and generalization capabilities mentioned in the latter reason. 
Due to the limited size and the lack of diverse behaviors at each state in the Kitchen dataset, it is not easy for PTGM to enhance the goal-conditioned policy with strong generalization capabilities, resulting in a smaller advantage in this task. Figure 3: These figures show the curves of task success rates for the methods in the ablation study. The left figure shows PTGM with different numbers of goal clusters and without clustering (PTGM-no-cluster) in the Log task. The middle figure shows RL with different weights of the KL reward and RL without the goal prior model (PTGM-no-prior) in the Log task. The right figure shows RL with different numbers of low-level steps for each high-level action in the Spider task. 5.3 Ablation Study We conduct the ablation study on the three main components introduced in PTGM: clustering in the goal space, the KL reward provided with the goal prior model, and the temporal abstraction for RL. Figure 3 presents results in the Minecraft Log and Spider tasks. More results are presented in Appendix E. For the number of clusters in the goal space, as shown in Figure 2, PTGM with 100 goal clusters is able to accomplish all the tasks. In Figure 3, with a goal space comprising 10 clusters, the agent fails to improve the task success rates. This is attributed to the non-existence of the tree-chopping behavior within the limited number of cluster centers. However, there remains a probability of task success, as the goal-conditioned policy can generalize and leverage goals associated with attacking other blocks in Minecraft to attack trees. We find that when the number of goal clusters is large, the performance of PTGM is robust to the change of this number. When the number increases to 500 and 5000, the high-level policy can still accomplish the task with high success rates. For $N = 5000$, the training curve rises a bit slower, indicating that the sample efficiency decreases due to the large high-level action space. But it still outperforms PTGM-no-cluster and SPiRL a lot. We find that PTGM-no-cluster, in which the high-level policy should output 512-dimensional continuous actions to match the original goal space, fails on the task. For the intrinsic reward provided with the goal prior model, we find that a proper weight $\alpha$ for the KL reward improves both sample efficiency and task success rates for RL. RL with $\alpha = 0.01$ and 0.05 outperform others and have low variance across different seeds. PTGM-no-prior exhibits a higher training variance than PTGM on the tasks of Log and Iron ore, as shown in Figure 2. We argue that without the KL reward, PTGM-no-prior suffers from inefficient random exploration in the large discrete goal space, resulting in much larger variance across different seeds. We conclude that the KL reward with the goal prior model contributes to both sample efficiency and learning stability for RL. As shown in Figure 3, for experiments with a large KL reward of $\alpha = 0.5$, we find that the task success rate increases slower, which may be attributed to the high-level policy easily converging to the goal prior model and ignoring the task success reward. For the level of temporal abstraction, we observe that when the number of steps for the low-level policy is set to a small value ($k = 10$), PTGM has worse sample efficiency. It is because, under such settings, the number of steps required by the high-level policy to complete the task increases, making the exploration for task success more challenging. 
When the number of steps exceeds 50, PTGM has great sample efficiency and high success rates, accomplishing the task of combating a spider within 200K environment steps. This illustrates the effectiveness of the goal-conditioned policy pre-trained on the large dataset in successfully completing goals that involve hundreds of steps. In contrast, as detailed in Appendix D, SPiRL with the number of low-level steps set to 100 fails on several tasks. It is worth noting that with a large number of $k = 500$, PTGM converges to a lower performance compared with $k = 100$. We argue that in this case, the high-level policy cannot switch many goals in an episode, making the task performance limited by the ability to solve the whole task conditioned on a single goal with the low-level controller. Figure 2 shows that $k = 100$ is a good choice for all the five Minecraft tasks, where the high-level controller is able to switch about 10 goals per episode. 5.4 Interpretability and Skill Generalization We believe that PTGM is sample-efficient not only because it provides temporal abstraction for RL, but also because it can encode rich behaviors in the compact discrete goal space. In Figure 4, we demonstrate that in the goal clusters, each cluster can represent an interpretable behavior of human players and samples in the same cluster exhibit similar behavior. The discrete goal space contains the behaviors of tree-chopping, mining, exploration, attacking, and building, which can make up the various skills required to play Minecraft. Moreover, a single goal in the discrete goal space can be generalized to perform different skills, instead of representing a single skill only. As shown in Table 1, conditioned on the goal of attacking a sheep, the low-level policy can perform many similar behaviors including killing a sheep, killing a pig, and killing a chicken. Conditioned on the goal of house building, the low-level policy can place a block, collect water buckets, and harvest wool, since all these skills require right-clicking the mouse. Note that the test tasks have different terrains and backgrounds to the corresponding 16 frames of the goal. This demonstrates that, when provided with a single goal, the pre-trained goal-conditioned policy can generalize across various tasks and scenarios. This adaptability enriches the discrete goal space with diverse behaviors, thereby enabling the high-level policy to learn a variety of tasks within this compact goal space. 6 Conclusion In this paper, we introduce PTGM to address the problem of sample-efficient RL in complex environments with large task-agnostic datasets. By employing a combination of the temporal abstraction provided with the pre-trained goal-conditioned policy, clustering in the goal space, and behavior regularization with the goal prior model, PTGM demonstrates superior performance across various tasks, notably in the challenging benchmark of Minecraft. However, we recognize a few limitations and directions for future work. Firstly, the goal space in PTGM is inherently determined by the offline dataset, rendering the acquired skills susceptible to data bias. The agent may struggle to execute unseen skills, such as milking a cow in Minecraft, as the used dataset contains few instances of such behaviors. In the future, we plan to use larger Internet-scale datasets to enhance the capabilities of PTGM. 
Secondly, the efficacy of goal clustering relies on a good state representation that can compactly represent goals, especially in environments with image observations. We leave better goal representation learning for PTGM to future work. | Test task | Sheep | Pig | Chicken | |-----------|-------|-----|---------| | Success rate | 0.82 | 0.36 | 0.94 | | Test task | Place | Water | Wool | |-----------|-------|-------|------| | Success rate | 0.65 | 0.16 | 0.44 | Table 1: In each row, we pick a goal from the goal clusters and test the goal-conditioned policy in three tasks conditioned on this goal. For the first two rows, in the 16 frames corresponding to the goal, the agent is attacking a sheep. For the last two rows, in the 16 frames, the agent is building a house. In each test task, the target item (mobs or water) is initialized in front of the agent and we test for 100 episodes. The table shows the success rates. ACKNOWLEDGMENTS This work was supported by NSFC under grant 62250068. REFERENCES Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. *Advances in neural information processing systems (NeurIPS)*, 2017. Yusuf Aytar, Tobias Pfaff, David Budden, Thomas Paine, Ziyu Wang, and Nando De Freitas. Playing hard exploration games by watching youtube. *Advances in neural information processing systems*, 31, 2018. Anirudhan Badrinath, Yannis Flet-Berliac, Allen Nie, and Emma Brunskill. Waypoint transformer: Reinforcement learning via supervised learning with intermediate targets. *arXiv preprint arXiv:2306.14069*, 2023. Bowen Baker, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (vpt): Learning to act by watching unlabeled online videos. *Advances in Neural Information Processing Systems (NeurIPS)*, 2022. Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. Do as i can, not as i say: Grounding language in robotic affordances. In *Conference on Robot Learning (CORL)*, 2023. Jake Bruce, Ankit Anand, Bogdan Mazoure, and Rob Fergus. Learning about progress from experts. In *International Conference on Learning Representations (ICLR)*, 2022. Shaofei Cai, Zihao Wang, Xiaojian Ma, Anji Liu, and Yitao Liang. Open-world multi-task control through goal-aware representation learning and adaptive horizon prediction. *arXiv preprint arXiv:2301.10034*, 2023. Elliot Chane-Sane, Cordelia Schmid, and Ivan Laptev. Goal-conditioned reinforcement learning with imagined subgoals. In *International Conference on Machine Learning (ICML)*, 2021. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in neural information processing systems (NeurIPS)*, 2021. Alejandro Escontrela, Ademi Adeniji, Wilson Yan, Ajay Jain, Xue Bin Peng, Ken Goldberg, Young-woon Lee, Danijar Hafner, and Pieter Abbeel. Video prediction models as rewards for reinforcement learning. *arXiv preprint arXiv:2305.14343*, 2023. Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. MineDojo: Building open-ended embodied agents with internet-scale knowledge. 
In *Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2022. Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020. Yang Gao, Huazhe Xu, Ji Lin, Fisher Yu, Sergey Levine, and Trevor Darrell. Reinforcement learning from imperfect demonstrations. *arXiv preprint arXiv:1802.05313*, 2018. Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. *arXiv preprint arXiv:1611.07507*, 2016. Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. In *Conference on Robot Learning (CORL)*, 2020.
9nXgWT12tb
I think the fairest comparison is transformer vs. transformer+CBA, where the plain transformer has the same number of heads as the transformer+CBA (counting both temporal attention and correlated attention heads). Does the transformer in Table 2 have exactly the same number of heads as the transformer+CBA?
CORRELATED ATTENTION IN TRANSFORMERS FOR MULTIVARIATE TIME SERIES Anonymous authors Paper under double-blind review ABSTRACT Multivariate time series (MTS) analysis prevails in real-world applications such as finance, climate science and healthcare. The various self-attention mechanisms, the backbone of the state-of-the-art Transformer-based models, efficiently discover the temporal dependencies, yet cannot well capture the intricate cross-correlation between different features of MTS data, which inherently stems from complex dynamical systems in practice. To this end, we propose a novel correlated attention mechanism, which not only efficiently captures feature-wise dependencies, but can also be seamlessly integrated within the encoder blocks of existing well-known Transformers to gain efficiency improvement. In particular, correlated attention operates across feature channels to compute cross-covariance matrices between queries and keys with different lag values, and selectively aggregate representations at the sub-series level. This architecture facilitates automated discovery and representation learning of not only instantaneous but also lagged cross-correlations, while inherently capturing time series auto-correlation. When combined with prevalent Transformer baselines, correlated attention mechanism constitutes a better alternative for encoder-only architectures, which are suitable for a wide range of tasks including imputation, anomaly detection and classification. Extensive experiments on the aforementioned tasks consistently underscore the advantages of correlated attention mechanism in enhancing base Transformer models, and demonstrate our state-of-the-art results in imputation, anomaly detection and classification. 1 INTRODUCTION Multivariate time series (MTS) are time series encompassing multiple dimensions for capturing different features of the original data, where each dimension corresponds to a univariate time series. MTS analysis is ubiquitous in real-world applications such as imputation of missing data in geoscience (López et al., 2021), anomaly detection of monitoring data in aeronautics (Hundman et al., 2018b), classification of heartbeat data for fetal assessment (Kampouraki et al., 2009), and weather prediction (Wu et al., 2022b). Thanks to its immense practical value, there has been increasing interest in MTS analysis (Wen et al., 2023; Wu et al., 2023; Lim & Zohren, 2021; Zhang & Yan, 2023). The recent advancement of deep learning has facilitated the development of many models with superior performance (Li et al., 2021b; Wu et al., 2023). Specifically, the large class of Transformer-based models (Wen et al., 2023; Wu et al., 2022b; Zhang & Yan, 2023; Zhou et al., 2022; Liu et al., 2022; Vaswani et al., 2017; Du et al., 2023b) is the most prominent and has demonstrated great potential for their well-known capability to model both short-range and long-range temporal dependencies (Wen et al., 2023). In addition to temporal dependencies, feature-wise dependencies, which are cross-correlation between the variates of MTS, are central to MTS analysis (Cao et al., 2020) and studied in the deep learning literature via convolution neural network (CNN) (Lai et al., 2018) or graph neural network (GNN) (Wu et al., 2020; Cao et al., 2020). Nevertheless, for existing Transformer-based models (e.g. 
Li et al., 2019; Zhou et al., 2021; Wu et al., 2022b), the embedding method is insufficient for capturing such cross-correlation between different variates of MTS (Zhang & Yan, 2023), which motivated the authors therein to propose Crossformer as the first Transformer explicitly utilizing feature-wise dependencies for MTS forecasting. Despite its promising performance, Crossformer deploys a convoluted architecture, which is isolated from other prevalent Transformers with their own established merits in temporal modelling and is specifically designed for only MTS forecasting, thereby lacking flexibility. Consequently, it remains under-explored whether modelling feature-wise dependencies could also improve Transformer-based models' performances in other non-predictive tasks, which cover a wide range of real-world applications and prominently include imputation, anomaly detection and classification. Moreover, all the previous works (Wu et al., 2020; Cao et al., 2020; Zhang & Yan, 2023) on capturing feature-wise dependencies in MTS analysis are limited in scope to forecasting, rely on ad-hoc mechanisms in their rigid pipelines, and thus do not fully leverage the capability to model temporal dependencies of existing powerful Transformers. Motivated by the nascent literature on the aforementioned problems and the success of Transformer-based models in MTS analysis, we raise the following central question of this paper:

**How can we seamlessly elevate the broad class of existing and future Transformer-based architectures to also capture feature-wise dependencies? Can modelling feature-wise dependencies improve Transformers' performance on non-predictive tasks?**

We affirmatively answer this question by proposing a novel correlated attention mechanism that efficiently learns the cross-correlation between different variates of MTS and can be seamlessly integrated with the encoder-only architecture of well-known Transformers, thereby being applicable to a wide range of non-predictive tasks. In addition to the conventional cross-correlation, the correlated attention simultaneously captures auto-correlation, the backbone of Autoformer (Wu et al., 2022b), and lagged cross-correlation. Lagged cross-correlation is inherently critical in MTS data (John & Ferbinteanu, 2021; Chandereng & Gitter, 2020), yet has been vastly ignored by the literature of Transformer-based models. For raw MTS data of production planning (e.g., Contreras-Reyes & Idrovo-Aguirre, 2020) as an example, it may take some lagged interval for an increase in the demand rate to be reflected in the production rate. Instead of the usual temporal dimension, correlated attention operates across feature channels to compute cross-covariance matrices between queries and keys with different lag values, and further selects the pairs with the highest correlations for aggregating representations at the sub-series level. For seamless integration with the encoder block of base Transformers such as (Vaswani et al., 2017; Liu et al., 2022) with their respective temporal attentions, the original multi-head attention is modified to include heads using both the temporal attentions from the base model and our correlated attentions. This design directly augments the embedded layer of the base Transformer with cross-correlation information in its representation learning.
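To give a feel for the mechanism just described, below is a heavily simplified, lag-free sketch of attending across feature channels via a query-key cross-covariance map. It is our own illustration in PyTorch, not the paper's full correlated attention, which additionally scores lagged query-key pairs and aggregates representations at the sub-series level.

```python
import torch

def channelwise_attention_sketch(X, W_q, W_k, W_v):
    """For an input series X of shape (T, d), form Q, K, V and attend over a
    d x d cross-covariance of queries and keys, mixing feature channels
    rather than time steps (lags omitted for brevity)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v                     # each (T, d)
    T = Q.shape[0]
    C = torch.softmax(Q.transpose(-2, -1) @ K / T, dim=-1)  # (d, d) channel map
    return V @ C                                            # (T, d)
```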
Experimentally, correlated attention, when plugged into prevalent Transformer baselines, consistently boosts the performance of the base models and results in state-of-the-art benchmarks for Transformer-based models on various tasks. The contributions of the paper can be summarized as follows:

- We propose a novel correlated attention mechanism that efficiently learns both the instantaneous and lagged cross-correlations between different variates of MTS, as well as the auto-correlation of series. To the best of our knowledge, this is the first work that presents a Transformer architecture aiming to explicitly learn the lagged cross-correlation.
- Correlated attention is flexible and efficient: it can be seamlessly plugged into encoder-only architectures of well-known Transformers such as (Vaswani et al., 2017; Liu et al., 2022) to enhance the performance of the base models. It naturally augments the embedding layer of base Transformers, which are known primarily for temporal modelling (Zhang & Yan, 2023), with feature-wise dependencies. Furthermore, the modularity of correlated attention will permit its adoption in, and benefit to, future Transformer architectures.
- Extensive experiments on imputation, anomaly detection and classification demonstrate that correlated attention consistently improves the performance of base Transformers and results in state-of-the-art architectures for the aforementioned tasks.

## 2 RELATED WORK

### Multivariate Time Series Analysis.

The surge of advanced sensors and data stream infrastructures has led to the tremendous proliferation of MTS data (Wen et al., 2022; Esling & Agon, 2012). In response, MTS analysis, which spans a multitude of tasks including but not limited to imputation (Du et al., 2023b), anomaly detection (Blázquez-García et al., 2020), classification (Fawaz et al., 2019) and forecasting (Lim & Zohren, 2021), has become increasingly crucial. In recent years, many deep learning models have been proposed for MTS analysis and achieved competitive performance (Lai et al., 2018; Franceschi et al., 2020; Wen et al., 2023; Gu et al., 2022). Specifically, multilayer perceptron (MLP) methods (Oreshkin et al., 2020; Challu et al., 2022) adopt MLP blocks for modelling temporal dependencies. Temporal Convolutional Networks (TCNs) (Lea et al., 2016; Franceschi et al., 2020) leverage convolutions along the temporal dimension to capture temporal dependencies. RNN-based models (Hochreiter & Schmidhuber, 1997; Lai et al., 2018) use state transitions and recurrent structures to model temporal variations. In order to capture cross-correlation, recent works (Yu et al., 2018; Cao et al., 2020; Wu et al., 2020) deploy GNNs to directly model cross-dimension dependencies. Nevertheless, these neural networks rely on RNNs and CNNs to model temporal dynamics, which are known to be inefficient in capturing long-range temporal dependencies (Zhang & Yan, 2023). TimesNet (Wu et al., 2023) models temporal 2D-variations, for both intraperiod and interperiod variations, via the residual structure TimesBlock.

Transformers in MTS Analysis. Originating from the natural language processing (NLP) domain, Transformers (Vaswani et al., 2017) have shown great success when adapted to MTS analysis (Zhou et al., 2022; Li et al., 2019; Zhou et al., 2021; Liu et al., 2022; Wu et al., 2022b; Du et al., 2023b) thanks to their capability to capture both short-range and long-range temporal dependencies (Wen et al., 2023). Recently, Liu et al.
(2022) performed series stationarization to attenuate time series non-stationarity. Wu et al. (2022b) proposed Autoformer with decomposition architecture and auto-correlation mechanism for better modelling of long-range temporal dependencies. Crossformer (Zhang & Yan, 2023) uses dimension-segment-wise embedding and a hierarchical architecture to better learn both the cross-time and cross-dimension dependencies. Modelling Cross-correlation in Time Series. Capturing feature-wise dependencies in MTS analysis has been a long lasting problem, where such cross-correlation in MTS data stems from natural processes (Li et al., 2021a) and complex cyber-physical systems (CPSs) (Wu et al., 2021; Cirstea et al., 2018). Accurate forecasting of correlated MTS can reveal the underlying dynamics of the system including trend and intrinsic behavior (Yang et al., 2013a), and detect outliers (Kieu et al., 2018). To capture the MTS correlation, previous work have proposed the adoptions of hidden Markov models (Yang et al., 2013b) and spatio-temporal (ST) graphs (Cirstea et al., 2021) as the modeling primitives, specialized neural network architectures for correlated MTS forecasting (Wu et al., 2021; Cirstea et al., 2018), and methods based on cross-correlation analysis (Yuan et al., 2016; Kristoufek, 2014). Nevertheless, most of these approaches focused on either forecasting with ST correlation, which arises from the proximity of the MTS sensors’ locations and is only applicable to CPSs, or ad-hoc MTS analysis. Lai et al. (2018) models long and short term temporal patterns with deep neural networks in MTS forecasting. Crossformer (Zhang & Yan, 2023) was the first Transformer-based architecture that explicitly utilizes both temporal and feature-wise dependencies for MTS forecasting. Yet, for non-predictive tasks such as imputation, anomaly detection and classification, there has been no Transformer with specialized modelling of feature-wise dependencies. Moreover, while lagged cross-correlation is inherent in MTS data, for which various statistical tools (John & Ferbinteanu, 2021; Chandereng & Gitter, 2020; Probst et al., 2012; Shen, 2015) have been developed for testing and analysis, time series Transformers in the literature have not leveraged this information in their mechanisms to improve performance of target applications. 3 METHODOLOGY In this Section, we first review the two representative well-known temporal attention mechanisms, namely the self-attention (Vaswani et al., 2017) and de-stationary attention (Liu et al., 2022), and the multi-head attention architecture commonly used in a wide range of Transformer-based models such as (Vaswani et al., 2017; Liu et al., 2022; Du et al., 2023a; Zhou et al., 2021; Wu et al., 2022b) and more. Next, we discuss the current limitation of conventional temporal attentions in modelling feature-wise dependencies. This then motivates us to propose the correlated attention mechanism, which operates across the feature channels for learning cross-correlation among variates, and combine it with existing temporal attentions in the mixture-of-head attention architecture to improve the performance of the base Transformers. 3.1 BACKGROUND Self-attention. Self-attention, first proposed in the vanilla Transformer (Vaswani et al., 2017), operates on the query, key and value matrices. 
In particular, given the input matrix $X \in \mathbb{R}^{T \times d}$, where $T$ is the sequence length and $d$ is the feature dimension of the model, the model linearly projects $X$ into queries, keys and values respectively as $Q = XW^Q$, $K = XW^K$ and $V = XW^V$, where $W^Q \in \mathbb{R}^{d \times d_k}$, $W^K \in \mathbb{R}^{d \times d_k}$ and $W^V \in \mathbb{R}^{d \times d_v}$ are parameter matrices. Taking queries $Q$, keys $K$ and values $V$ as input, the self-attention returns the output matrix as follows:

$$\text{Self-Attention}(Q, K, V) = \text{softmax}\left(\frac{1}{\sqrt{d_k}} QK^\top\right)V.$$ (1)

The computational complexity of self-attention is $O(d_kT^2)$ due to pairwise interactions along the time dimension $T$.

**De-stationary Attention.** To handle non-stationary real-world MTS data, Non-stationary Transformer (Liu et al., 2022) performs series stationarization for better predictability and adopts the de-stationary attention mechanism to alleviate over-stationarization and recover the intrinsic temporal dependencies. Specifically, after the normalization module, Non-stationary Transformer operates over the stationarized series $X' = (X - 1\mu_X^\top)/\sigma_X$ with the mean vector $\mu_X$ and standard deviation $\sigma_X$, and obtains the stationarized queries, keys and values respectively as $Q' = (Q - 1\mu_Q^\top)/\sigma_X$, $K' = (K - 1\mu_K^\top)/\sigma_X$ and $V' = (V - 1\mu_V^\top)/\sigma_X$ with the mean vectors $\mu_Q$, $\mu_K$ and $\mu_V$. Then, it can be proven that (Liu et al., 2022):

$$\text{softmax}\left(\frac{1}{\sqrt{d_k}} QK^\top\right) = \text{softmax}\left(\frac{1}{\sqrt{d_k}} (\sigma_X^2 Q'K'^\top + 1\mu_Q^\top K^\top)\right),$$

which motivates their design of de-stationary attention utilizing a multilayer perceptron (MLP) layer to directly learn the positive scaling scalar $\xi \approx \sigma_X^2$ and shifting vector $\Delta \approx K\mu_Q$, and returning the output matrix:

$$\text{De-stationary-Attention}(Q', K', V') = \text{softmax}\left(\frac{1}{\sqrt{d_k}} (\xi Q'K'^\top + 1\Delta^\top)\right)V'.$$ (2)

The computational complexity of de-stationary attention is $O(d_kT^2)$ without accounting for the MLP module. While there have been a multitude of other temporal attention mechanisms (e.g., Zhou et al., 2021; Du et al., 2023b; Zhou et al., 2022) that usually follow ad-hoc designs for specific tasks, the two representative attention mechanisms above are the backbones of some of the most prominent Transformers with robust and competitive performance on a variety of tasks. Next, we present the multi-head attention module, which adopts a temporal attention as its component and is commonly used in a wide range of Transformer-based models (e.g., Vaswani et al., 2017; Liu et al., 2022; Du et al., 2023a; Zhou et al., 2021).

**Multi-head Attention.** Multi-head attention, proposed along with self-attention in the vanilla Transformer (Vaswani et al., 2017), combines multiple temporal attentions to jointly attend to information from different representation subspaces. In particular, it concatenates $h$ heads, where each head is the output from some temporal attention and $h$ is a hyperparameter, and then performs linear projection for the final output.
Formally, multi-head attention is written as follows:

$$\text{Multi-head-Attention}(X) = \text{concat}(\text{head}_1, \text{head}_2, ..., \text{head}_h)W^O$$
$$\text{where } \text{head}_i = \text{Temporal-Attention}(XW^Q_i, XW^K_i, XW^V_i).$$ (3)

In Equation 3, $W^O \in \mathbb{R}^{hd_v \times d}$ is a parameter matrix, and $\text{Temporal-Attention}$ can take the form of any mechanism, such as the two aforementioned self-attention and de-stationary attention, or any other in the literature (Vaswani et al., 2017; Liu et al., 2022; Du et al., 2023a; Zhou et al., 2021).

### 3.2 Correlated Attention Block and Mixture-of-head Attention

In this Section, we first take a deeper look at how the design of self-attention (or, more generally, temporal attention) can limit its capability of modelling feature-wise dependencies, and why existing attention designs in the Transformer literature may be insufficient to capture the cross-correlation in MTS. This motivates us to propose the correlated attention block (CAB), which efficiently learns the feature-wise dependencies and can be seamlessly plugged into ubiquitous encoder-only Transformer architectures for performance improvement. Next, we demonstrate how the computation of CAB can be further accelerated via Fast Fourier Transform (FFT) thanks to the Cross-correlation Theorem.

3.2.1 Limitation of Temporal Attention

One interpretation of the powerful temporal modelling capacity of Transformers is that, with the queries \( Q = [q_1, q_2, \ldots, q_T]^\top \) and keys \( K = [k_1, k_2, \ldots, k_T]^\top \) expressed in the time-wise dimension, the matrix \( QK^\top \in \mathbb{R}^{T \times T} \) in the computation of self-attention (Equation 1) contains pairwise inner-products \( q_i^\top k_j \) of time-dimension vectors, and thus intuitively resembles the notion of a correlation matrix between different time points of MTS data. Nevertheless, feature-wise information, where each of the \( d_k \) features corresponds to an entry of \( q_i \in \mathbb{R}^{d_k \times 1} \) or \( k_j \in \mathbb{R}^{d_k \times 1} \), is absorbed into such an inner-product matrix; this makes self-attention unable to explicitly leverage the feature-wise information in its representation learning. In the context of computer vision, El-Nouby et al. (2021) considered a cross-covariance attention mechanism that instead computes \( \hat{K}^\top \hat{Q} \in \mathbb{R}^{d_k \times d_k} \), where \( \hat{K} \) and \( \hat{Q} \) are \( \ell_2 \)-normalized versions of \( K \) and \( Q \), as the cross-covariance matrix along the feature dimension. However, while this simple design is suitable for capturing instantaneous cross-correlation in the static image applications considered therein, it is insufficient to capture the cross-correlation of MTS data, which is coupled with the intrinsic temporal dependencies. In particular, the variates of MTS data can be correlated with each other, yet with a lag interval; this phenomenon is referred to as lagged cross-correlation in MTS analysis (John & Ferbinteanu, 2021; Chandereng & Gitter, 2020; Probst et al., 2012; Shen, 2015). Additionally, a variate in MTS data can even be correlated with a delayed copy of itself, a phenomenon termed auto-correlation. Wu et al. (2022b) proposed Autoformer with the auto-correlation mechanism, but their rigid framework is specifically designed for, and achieves competitive performance in, long-term forecasting.
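To make this contrast concrete, the following minimal PyTorch sketch (our own illustration, not code from any of the cited papers; shapes and variable names are assumptions) compares the $T \times T$ score matrix of temporal self-attention in Equation 1 with the $d_k \times d_k$ feature-wise score matrix of cross-covariance-style attention:

```python
import torch

T, d_k = 96, 8                      # sequence length and feature dimension (illustrative values)
Q, K, V = (torch.randn(T, d_k) for _ in range(3))

# Temporal self-attention (Equation 1): scores form a T x T matrix of inner
# products q_i^T k_j, so feature-wise structure is absorbed into the products.
temporal_scores = torch.softmax(Q @ K.T / d_k ** 0.5, dim=-1)   # (T, T)
temporal_out = temporal_scores @ V                              # (T, d_k)

# Cross-covariance-style attention (El-Nouby et al., 2021): scores form a
# d_k x d_k matrix over feature channels, but only at lag 0 (instantaneous).
Q_hat = torch.nn.functional.normalize(Q, dim=0)                 # column-wise l2 normalization
K_hat = torch.nn.functional.normalize(K, dim=0)
feature_scores = torch.softmax(K_hat.T @ Q_hat, dim=0)          # (d_k, d_k)
feature_out = V @ feature_scores                                # (T, d_k)
```

Neither score matrix alone covers the lagged dependencies discussed above, which is exactly the gap the correlated attention block targets.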
Given the nascent literature on modules that augment the broad class of powerful Transformers, which still lack efficient modelling capabilities for cross-correlation and auto-correlation, we hereby aim to derive a flexible and efficient correlated attention mechanism that can elevate existing Transformer-based models.

3.2.2 Correlated Attention Block

We proceed to present our correlated attention block (CAB), which is comprised of three consecutive components: normalization (Equation 4), lagged cross-correlation filtering (Equation 5), and score aggregation (Equation 6).

**Normalization.** In the normalization step, we perform column-wise \( \ell_2 \) normalization of \( Q \) and \( K \), respectively resulting in \( \hat{Q} \) and \( \hat{K} \) as:

\[ \hat{Q} = \text{NORMALIZE}(Q), \quad \hat{K} = \text{NORMALIZE}(K). \] (4)

**Lagged Cross-correlation Filtering.** We first present the overview of the lagged cross-correlation filtering step as follows:

\[ l_1, l_2, \ldots, l_k = \underset{l \in [1, T-1]}{\text{TopK}} \left\{ \lambda \cdot \text{DIAGONAL}\left(\text{ROLL}(\hat{K}, l)^\top \hat{Q}\right) + (1 - \lambda) \cdot \text{NON-DIAGONAL}\left(\text{ROLL}(\hat{K}, l)^\top \hat{Q}\right) \right\}, \] (5)

where \( \lambda \in [0, 1] \) is a learnable parameter and \( \text{TopK}(.) \) selects the \( k = c \lfloor \log(T) \rfloor \) (with \( c \) being a hyperparameter) time lags which incur the highest cross-correlation scores, to be described in more detail now. The purpose of the previous normalization step is to unify the feature-wise variates into the same scale, so that \( \text{ROLL}(\hat{K}, l)^\top \hat{Q} \) can better serve as a notion of cross-correlation matrix in the feature-wise dimension between the queries \( \hat{Q} \) and the lagged keys \( \text{ROLL}(\hat{K}, l) \). Here, for \( X \in \mathbb{R}^{T \times d_k} \), the \( \text{ROLL}(X, l) \) operation shifts the elements of \( X \) vertically, i.e. along the time dimension, with entries shifted past the last position re-introduced at the first position. This rolling operation helps generate lagged series representations. In order to formally define our lagged cross-correlation filtering step (Equation 5), we hereby consider the two operations \( \text{DIAGONAL}(.) \) and \( \text{NON-DIAGONAL}(.) \) on square matrices, which respectively sum up the absolute values of the diagonal entries and the non-diagonal entries. Specifically, given a matrix \( A \in \mathbb{R}^{d_k \times d_k} \), we have:

\[ \text{DIAGONAL}(A) = \sum_{i=1}^{d_k} |A_{ii}|, \qquad \text{NON-DIAGONAL}(A) = \sum_{i,j \in [1,d_k]:\, i \neq j} |A_{ij}|. \]

Recall from stochastic process theory (Chatfield, 2004; Papoulis, 1965) that for any real discrete-time process \( \{X_t\} \), its auto-correlation \( R_{X,X}(l) \) can be computed by

\[ R_{X,X}(l) = \lim_{L \to \infty} \frac{1}{L} \sum_{t=1}^{L} X_t X_{t-l}. \]

With the normalized queries \( \hat{Q} = [\hat{q}_1, \hat{q}_2, ..., \hat{q}_{d_k}] \) and normalized keys \( \hat{K} = [\hat{k}_1, \hat{k}_2, ..., \hat{k}_{d_k}] \) expressed in the feature-wise dimension, where \( \hat{q}_i, \hat{k}_j \in \mathbb{R}^{T \times 1} \), any \( i \)-th diagonal entry of \( \text{ROLL}(\hat{K}, l)^{\top} \hat{Q} \) takes the form

\[ (\text{ROLL}(\hat{K}, l)^{\top} \hat{Q})_{ii} = R_{\hat{q}_i,\hat{k}_i}(l) = \sum_{t=1}^{T} (\hat{q}_i)_t \cdot (\hat{k}_i)_{t-l} \]

and thus can serve as an approximation (up to a multiplicative factor) of the auto-correlation of variate \( i \).
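As an illustration of the lag-filtering step, the following sketch implements the direct $O(d_k^2 T^2)$ scan of Equation 5. This is our own hedged sketch: $\lambda$ is a learnable parameter in the paper but a fixed scalar here, and all names are ours.

```python
import math
import torch

def lag_scores(Q_hat: torch.Tensor, K_hat: torch.Tensor, lam: float) -> torch.Tensor:
    """Score every lag l in [1, T-1] as in Equation 5 (direct O(d_k^2 T^2) scan)."""
    T = Q_hat.shape[0]
    scores = []
    for l in range(1, T):
        C = torch.roll(K_hat, shifts=l, dims=0).T @ Q_hat   # ROLL(K_hat, l)^T Q_hat, (d_k, d_k)
        diagonal = C.diagonal().abs().sum()                 # DIAGONAL(.): auto-correlation part
        non_diagonal = C.abs().sum() - diagonal             # NON-DIAGONAL(.): cross-correlation part
        scores.append(lam * diagonal + (1.0 - lam) * non_diagonal)
    return torch.stack(scores)

T, d_k, c, lam = 96, 8, 2, 0.5                              # lam is learnable in the paper
Q_hat = torch.nn.functional.normalize(torch.randn(T, d_k), dim=0)
K_hat = torch.nn.functional.normalize(torch.randn(T, d_k), dim=0)
k = c * int(math.floor(math.log(T)))                        # k = c * floor(log T) selected lags
top_lags = 1 + torch.topk(lag_scores(Q_hat, K_hat, lam), k).indices   # lags are 1-indexed
```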
This diagonal-entry idea was also harnessed in the design of the auto-correlation attention (Wu et al., 2022b). Consequently, given a lag \( l \), the quantity \( \text{DIAGONAL}(\text{ROLL}(\hat{K}, l)^{\top} \hat{Q}) \), which aggregates the absolute values of all diagonal entries, scores the total auto-correlation of all the feature variates, while the quantity \( \text{NON-DIAGONAL}(\text{ROLL}(\hat{K}, l)^{\top} \hat{Q}) \) scores the total cross-correlation between different pairs of feature variates. The final cross-correlation score incurred by time lag \( l \) is then the weighted (convex) combination of \( \text{DIAGONAL}(\text{ROLL}(\hat{K}, l)^{\top} \hat{Q}) \) and \( \text{NON-DIAGONAL}(\text{ROLL}(\hat{K}, l)^{\top} \hat{Q}) \) with a learnable weight \( \lambda \), as shown in Equation 5. For high-dimensional MTS data where not all pairs of variates are highly correlated and/or auto-correlation is the more significant factor, the learnable parameter \( \lambda \) helps automatically untangle such relations and balance the representation learning between auto-correlation and cross-correlation of interacting features. Then the \( k = c \lfloor \log(T) \rfloor \) (with \( c \) being a hyperparameter) time lags \( l_1, l_2, ..., l_k \) which get the highest cross-correlation scores are selected through the TopK operation to be used in the next step.

**Score Aggregation.** Finally, the CAB performs sub-series aggregation for the final output via:

\[ \text{CORRELATED-ATTENTION}(Q, K, V) = (1 - \beta) \cdot \text{ROLL}(V, 0) \cdot \text{SOFTMAX}\left(\frac{1}{\tau} \text{ROLL}(\hat{K}, 0)^{\top} \hat{Q}\right) + \beta \cdot \sum_{i=1}^{k} \text{ROLL}(V, l_i) \cdot \text{SOFTMAX}\left(\frac{1}{\tau} \text{ROLL}(\hat{K}, l_i)^{\top} \hat{Q}\right), \] (6)

where \( \beta \in [0, 1] \) and \( \tau > 0 \) are learnable parameters. In particular, for every chosen lag \( l_i \), we also roll the values matrix \( V \) by \( l_i \) to align similar sub-series with the same phase position. Then, each \( \text{ROLL}(V, l_i) \cdot \text{SOFTMAX}\left(\frac{1}{\tau} \text{ROLL}(\hat{K}, l_i)^{\top} \hat{Q}\right) \) is a convex combination in the feature dimension (as opposed to the time dimension in self-attention, Equation 1) of the corresponding token embeddings in the delayed values \( \text{ROLL}(V, l_i) \). The final score aggregation in Equation 6 is the weighted (convex) combination of the "instantaneous" score \( \text{ROLL}(V, 0) \cdot \text{SOFTMAX}\left(\frac{1}{\tau} \text{ROLL}(\hat{K}, 0)^{\top} \hat{Q}\right) \) and the "lagged" total score \( \sum_{i=1}^{k} \text{ROLL}(V, l_i) \cdot \text{SOFTMAX}\left(\frac{1}{\tau} \text{ROLL}(\hat{K}, l_i)^{\top} \hat{Q}\right) \) with a learnable weight \( \beta \).

**Efficient computation of CAB.** In its current form, the computational complexity of CAB is \( O(d_k^2 T^2) \). Specifically, for every lag \( l \), the computation of \( \text{ROLL}(\hat{K}, l)^{\top} \hat{Q} \) takes \( O(d_k^2 T) \) time. With our choice of \( k = O(\log(T)) \), Equation 6 takes \( O(d_k^2 T \log(T)) \) time. Nevertheless, since Equation 5 requires iterating over all \( T - 1 \) lags \( l \in [1, T - 1] \), each of which costs \( O(d_k^2 T) \), the total complexity is \( O(d_k^2 T^2) \). We hereby present how to alleviate the computation in Equation 5 via FFT, thereby resulting in the accelerated complexity of \( O(d_k^2 T \log(T)) \).
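Before turning to that acceleration, a direct sketch of the score aggregation in Equation 6 may be useful. Again this is our own illustration: $\beta$ and $\tau$ are learnable parameters in the paper but fixed scalars here, and `top_lags` is assumed to come from Equation 5.

```python
import torch

def correlated_attention(Q_hat, K_hat, V, top_lags, beta=0.5, tau=1.0):
    """Score aggregation of Equation 6; beta and tau are learnable in the paper."""
    def lag_term(l: int) -> torch.Tensor:
        # Softmax over rows makes each output column a convex combination of the
        # columns of the rolled values, i.e. mixing happens in the feature dimension.
        S = torch.softmax(torch.roll(K_hat, l, dims=0).T @ Q_hat / tau, dim=0)  # (d_k, d_k)
        return torch.roll(V, l, dims=0) @ S                                     # (T, d_k)

    instantaneous = lag_term(0)
    lagged = sum(lag_term(int(l)) for l in top_lags)
    return (1.0 - beta) * instantaneous + beta * lagged
```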
The acceleration is enabled via the Cross-correlation Theorem (Lahiri, 2016), which, given two finite discrete time series \( \{X_t\} \) and \( \{Y_t\} \), permits the sliding inner product \( (X \star Y)(l) = \sum_{t=1}^{T} X_{t-l} Y_t \) for different lag values \( l \in [0, T - 1] \) to be computed efficiently via FFT as:

\[ S_{XY}(f) = \mathcal{F}(X_t) \mathcal{F}^*(Y_t) = \int_{-\infty}^{+\infty} X_t e^{-i2\pi ft} dt \int_{-\infty}^{+\infty} Y_t e^{-i2\pi ft} dt, \]
\[ (X \star Y)(l) = \mathcal{F}^{-1}(S_{XY}(f)) = \int_{-\infty}^{+\infty} S_{XY}(f) e^{i2\pi fl} df, \] (7)

for \( l \in [0, T - 1] \), where \( \mathcal{F} \) and \( \mathcal{F}^{-1} \) are the FFT and inverse FFT, and \( * \) is the conjugate operation. Particularly, given \( K, Q \in \mathbb{R}^{T \times d_k} \), we first compute \( \mathcal{F}(K), \mathcal{F}(Q) \in \mathbb{C}^{(T/2+1) \times d_k} \) in the frequency domain. Let \( \mathcal{F}(.)_i \) be the \( i \)-th column of these FFTs. We then compute \( \mathcal{F}(K)_i \mathcal{F}^*(Q)_j \) for all \( i, j \in [1, d_k] \). Finally, the inverse FFTs of these products give \( \mathcal{F}^{-1}(\mathcal{F}(K)_i \mathcal{F}^*(Q)_j) = [(\text{ROLL}(K, 0)^\top Q)_{ij}, (\text{ROLL}(K, 1)^\top Q)_{ij}, ..., (\text{ROLL}(K, T - 1)^\top Q)_{ij}] \) for \( i, j \in [1, d_k] \). Thus, we can gather the results to obtain \( \text{ROLL}(K, l)^\top Q \) for all \( l \in [0, T - 1] \). As each FFT and inverse FFT takes \( O(T \log(T)) \), CAB achieves the \( O(d_k^2 T \log(T)) \) complexity. We note that the cross-correlation computation required by CAB is more involved and strictly subsumes auto-correlation, and the invoked Cross-correlation Theorem is a more generalized version of the Wiener–Khinchin Theorem used by (Wu et al., 2022b) for auto-correlation computation.

**Differences Compared to Autoformer.** Since the CAB aims to capture the lagged cross-correlation, which is related to yet more general than the auto-correlation module in Autoformer, we believe it is crucial to emphasize the main differences. First, Autoformer overall is a decomposed encoder-decoder architecture proposed for long-term forecasting, so its auto-correlation module is specifically designed to work with series seasonality extracted from the various series decomposition steps of Autoformer. On the other hand, CAB ensures flexibility with any input series representation by deploying the normalization step and the learnable temperature coefficient \( \tau \) rescaling the correlation matrices. Second, while Autoformer computes purely auto-correlation scores and aggregates their exact values for TopK, CAB computes cross-correlation matrices and aggregates the absolute values of their entries for TopK in Equation 5 (as dependence can stem from either positive or negative correlation). Finally, to facilitate robustness to different input series representations, CAB adopts the learnable weight \( \lambda \) in the TopK operation, which balances between auto-correlation and cross-correlation, and \( \beta \) in the sub-series aggregation, which balances between instantaneous and lagged cross-correlation.
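A minimal sketch of this FFT-based computation follows (our own illustration, assuming circular shifts for ROLL as defined in Section 3.2.2; `torch.fft.rfft`/`irfft` are standard PyTorch routines):

```python
import torch

def all_lag_correlations(K_hat: torch.Tensor, Q_hat: torch.Tensor) -> torch.Tensor:
    """Compute ROLL(K_hat, l)^T Q_hat for all lags l in [0, T-1] at once via FFT.
    Returns a (T, d_k, d_k) tensor whose slice [l] is the lag-l correlation matrix,
    at O(d_k^2 T log T) cost instead of the O(d_k^2 T^2) direct scan."""
    T = K_hat.shape[0]
    Fk = torch.fft.rfft(K_hat, dim=0)                    # (T//2 + 1, d_k), complex
    Fq = torch.fft.rfft(Q_hat, dim=0)
    # Pairwise products over feature channels in the frequency domain, then invert.
    prod = Fk.conj().unsqueeze(2) * Fq.unsqueeze(1)      # (T//2 + 1, d_k, d_k)
    return torch.fft.irfft(prod, n=T, dim=0)             # (T, d_k, d_k), real-valued

# Sanity check against the direct computation for one lag.
T, d_k, l = 96, 8, 5
K_hat, Q_hat = torch.randn(T, d_k), torch.randn(T, d_k)
direct = torch.roll(K_hat, shifts=l, dims=0).T @ Q_hat
assert torch.allclose(all_lag_correlations(K_hat, Q_hat)[l], direct, atol=1e-3)
```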
### 3.2.3 Mixture-of-head Attention

For seamless integration of CAB with the broad class of encoder-only Transformer architectures using the multi-head attention component (e.g., Vaswani et al., 2017; Liu et al., 2022; Du et al., 2023a; Zhou et al., 2021), we propose mixture-of-head attention, which leverages a mixture of both temporal attentions and correlated attentions. Mixture-of-head attention modifies multi-head attention (Equation 3) to also incorporate CAB as follows:

\[ \text{MIXTURE-OF-HEAD-ATTENTION}(X) = \text{CONCAT}(\text{head}_1, \text{head}_2, ..., \text{head}_h) W^O \]
\[ \text{where } \text{head}_i = \begin{cases} \text{TEMPORAL-ATTENTION}(XW^Q_i, XW^K_i, XW^V_i), & \text{if } i \leq m \\ \text{CORRELATED-ATTENTION}(XW^Q_i, XW^K_i, XW^V_i), & \text{otherwise} \end{cases} \]

where \( m \) is a threshold hyperparameter that controls the split between temporal attention heads and correlated attention heads. This uncomplicated modification to the base architecture of multi-head attention allows CAB to be flexibly plugged into a wide range of existing and future Transformers.

## 4 Experiments

As CAB is a plug-in attention for encoder-only Transformer architectures, we extensively experiment on three mainstream MTS non-predictive tasks, including imputation, anomaly detection and classification, on real-world datasets. Ablation studies are provided in Appendix B. While focusing on non-predictive tasks, we provide preliminary results on MTS long-term forecasting in Appendix C. Run-time analysis is presented in Appendix D.

### Table 1: Dataset Summary

| MTS Analysis Tasks | Benchmarking Datasets | Metrics | Sequence Length |
|---------------------|------------------------|---------|-----------------|
| Imputation | ETTm1, ETTm2, ETTh1, ETTh2, Electricity, Weather | MSE, MAE | 96 |
| Anomaly Detection | SMD, MSL, SMAP, SWaT, PSM | Precision, Recall, F1-score (%) | 100 |
| Classification | UEA (10 subsets) | Accuracy (%) | 29-1751 |

**Experiment Benchmarks.** Following (Zhou et al., 2021; Wu et al., 2023; Zerveas et al., 2021), we extensively benchmark over the following real-world datasets: ETTh1 and ETTh2 (Electricity Transformer Temperature-hourly) (Zhou et al., 2021), ETTm1 and ETTm2 (Electricity Transformer Temperature-minutely) (Zhou et al., 2021), Electricity (Trindade, 2015), Weather (Wetterstation), SMD (Su et al., 2019), MSL (Hundman et al., 2018a), SMAP (Hundman et al., 2018a), SWaT (Mathur & Tippenhauer, 2016), PSM (Abdulaal et al., 2021) and the UEA Time Series Classification Archive (Bagnall et al., 2018). A summary of the benchmark datasets is given in Table 1.

**Baselines.** We compare with TimesNet (Wu et al., 2023), the current state-of-the-art deep learning model on these three tasks (though not Transformer-based), DLinear (Zeng et al., 2022), and the prevalent Transformer-based models including the vanilla Transformer (Vaswani et al., 2017), Nonstationary Transformer (Liu et al., 2022), which has been shown to consistently achieve competitive results on a variety of tasks, FEDformer (Zhou et al., 2022), and Autoformer (Wu et al., 2022b). In fact, Nonstationary Transformer and FEDformer are the state-of-the-art Transformer models for imputation and anomaly detection, respectively, in the recent benchmarks (Wu et al., 2023). For classification, we also consider Flowformer (Wu et al., 2022a), the state-of-the-art Transformer-based model.

**Our Models.** We integrate CAB (through the mixture-of-head attention) into two representative models: Transformer (Vaswani et al., 2017) and Nonstationary Transformer (Liu et al., 2022).

### 4.1 IMPUTATION

**Setup.** Due to uncertainties of natural processes and malfunctions of sensors, missing data is common in MTS, thereby hindering the direct adoption of off-the-shelf models. MTS imputation has thus gathered much research interest (López et al., 2021).
To exemplify real-world scenarios commonly facing the missing-data problem, we consider six datasets from the electricity and weather domains for benchmarking: ETTh1 and ETTh2 (ETT-hourly) (Zhou et al., 2021), ETTm1 and ETTm2 (ETT-minutely) (Zhou et al., 2021), Electricity (Trindade, 2015) and Weather (Wetterstation). Each dataset is split into a training set, a validation set, and a test set with ratios of 60%, 20% and 20%, respectively. Time-series data is generated by selecting every 96 consecutive steps as a sample. To test the models under different missing-data rates, we randomly mask the time points with ratios of {12.5%, 25%, 37.5%, 50%}. We adopt the mean square error (MSE) and mean absolute error (MAE) as the metrics.

**Results.** The results are depicted in Table 2. Nonstationary+CAB and Transformer+CAB improve over Nonstationary and Transformer on five and four out of the six datasets, respectively. Nonstationary+CAB achieves state-of-the-art results surpassing TimesNet on five datasets.

Table 2: Imputation task over six datasets. The missing data rate is {12.5%, 25%, 37.5%, 50%} and series length is 96. We highlight the best results and the second best results.

| Dataset | Mask Ratio | TimesNet (Wu et al., 2023) | Nonstationary (Liu et al., 2022) | Nonstationary+CAB (Ours) | Transformer (Vaswani et al., 2017) | Transformer+CAB (Ours) | FEDformer (Zhou et al., 2022) | DLinear (Zeng et al., 2022) | Autoformer (Wu et al., 2022b) |
|---------|------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| ETTm1 | 12.5 % | 0.019 | 0.092 | 0.026 | 0.107 | 0.018 | 0.087 | 0.023 | 0.105 | 0.022 | 0.104 | 0.035 | 0.135 | 0.038 | 0.162 | 0.034 | 0.124 |
| ETTm1 | 25 % | 0.029 | 0.111 | 0.039 | 0.131 | 0.030 | 0.112 | 0.037 | 0.135 | 0.039 | 0.140 | 0.040 | 0.139 | 0.191 | 0.103 | 0.219 | 0.057 | 0.161 |
| ETTm1 | 37.5 % | 0.036 | 0.124 | 0.047 | 0.145 | 0.037 | 0.125 | 0.045 | 0.148 | 0.050 | 0.157 | 0.049 | 0.154 | 0.218 | 0.132 | 0.248 | 0.067 | 0.174 |
| ETTm1 | 50 % | 0.042 | 0.130 | 0.050 | 0.148 | 0.041 | 0.134 | 0.048 | 0.147 | 0.052 | 0.160 | 0.054 | 0.159 | 0.237 | 0.140 | 0.267 | 0.079 | 0.180 |
| ETTm2 | 12.5 % | 0.018 | 0.080 | 0.018 | 0.080 | 0.016 | 0.076 | 0.125 | 0.264 | 0.130 | 0.271 | 0.096 | 0.159 | 0.162 | 0.160 | 0.032 | 0.092 |
| ETTm2 | 25 % | 0.020 | 0.085 | 0.024 | 0.096 | 0.018 | 0.082 | 0.195 | 0.323 | 0.152 | 0.288 | 0.080 | 0.195 | 0.085 | 0.196 | 0.026 | 0.10 |
| ETTm2 | 37.5 % | 0.025 | 0.090 | 0.028 | 0.102 | 0.025 | 0.090 | 0.225 | 0.378 | 0.174 | 0.340 | 0.124 | 0.258 | 0.131 | 0.247 | 0.035 | 0.119 |
| ETTm2 | 50 % | 0.026 | 0.098 | 0.030 | 0.108 | 0.027 | 0.099 | 0.257 | 0.378 | 0.211 | 0.340 | 0.156 | 0.276 | 0.131 | 0.247 | 0.035 | 0.119 |
| Average | | 0.022 | 0.088 | 0.026 | 0.099 | 0.021 | 0.087 | 0.199 | 0.327 | 0.170 | 0.303 | 0.101 | 0.215 | 0.096 | 0.208 | 0.029 | 0.105 |
| ETTh1 | 12.5 % | 0.040 | 0.130 | 0.042 | 0.133 | 0.039 | 0.129 | 0.205 | 0.329 | 0.212 | 0.354 | 0.095 | 0.212 | 0.100 | 0.216 | 0.044 | 0.158 |
| ETTh1 | 25 % | 0.052 | 0.151 | 0.056 | 0.158 | 0.051 | 0.150 | 0.285 | 0.392 | 0.265 | 0.378 | 0.187 | 0.341 | 0.158 | 0.276 | 0.060 | 0.163 |
| ETTh1 | 37.5 % | 0.060 | 0.162 | 0.065 | 0.170 | 0.059 | 0.160 | 0.327 | 0.418 | 0.319 | 0.415 | 0.232 | 0.341 | 0.183 | 0.299 | 0.068 | 0.173 |
| ETTh1 | 50 % | 0.063 | 0.169 | 0.068 | 0.174 | 0.062 | 0.168 | 0.352 | 0.436 | 0.332 | 0.430 | 0.252 | 0.349 | 0.192 | 0.326 | 0.071 | 0.156 |
| ETTh2 | 12.5 % | 0.085 | 0.202 | 0.093 | 0.210 | 0.081 | 0.198 | 0.348 | 0.476 | 0.343 | 0.469 | 0.107 | 0.407 | 0.237 | 0.402 | 0.214 | 0.089 | 0.210 |
| ETTh2 | 25 % | 0.089 | 0.206 | 0.097 | 0.214 | 0.087 | 0.204 | 0.361 | 0.295 | 0.365 | 0.283 | 0.251 | 0.118 | 0.247 | 0.096 | 0.220 | 0.118 |
| ETTh2 | 37.5 % | 0.100 | 0.221 | 0.108 | 0.228 | 0.098 | 0.215 | 0.377 | 0.296 | 0.373 | 0.302 | 0.254 | 0.125 | 0.254 | 0.105 | 0.229 | 0.105 |
| ETTh2 | 50 % | 0.102 | 0.228 | 0.110 | 0.231 | 0.107 | 0.225 | 0.395 | 0.308 | 0.393 | 0.316 | 0.264 | 0.135 | 0.264 | 0.113 | 0.239 | 0.113 |
| Average | | 0.092 | 0.210 | 0.100 | 0.218 | 0.098 | 0.207 | 0.364 | 0.287 | 0.362 | 0.284 | 0.250 | 0.132 | 0.260 | 0.101 | 0.225 | 0.101 |
| Weather | 12.5 % | 0.025 | 0.045 | 0.027 | 0.051 | 0.026 | 0.050 | 0.034 | 0.090 | 0.033 | 0.082 | 0.003 | 0.107 | 0.039 | 0.084 | 0.026 | 0.047 |
| Weather | 25 % | 0.031 | 0.057 | 0.033 | 0.062 | 0.034 | 0.064 | 0.038 | 0.091 | 0.038 | 0.089 | 0.010 | 0.107 | 0.042 | 0.077 | 0.032 | 0.060 |
| Weather | 37.5 % | 0.032 | 0.062 | 0.035 | 0.067 | 0.034 | 0.066 | 0.042 | 0.107 | 0.045 | 0.104 | 0.015 | 0.125 | 0.050 | 0.085 | 0.036 | 0.067 |
| Weather | 50 % | 0.030 | 0.054 | 0.032 | 0.061 | 0.030 | 0.058 | 0.031 | 0.091 | 0.032 | 0.089 | 0.010 | 0.108 | 0.042 | 0.072 | 0.031 | 0.057 |

While the results of TimesNet on forecasting and imputation are reproducible, we cannot recover its state-of-the-art results, from their released code, on anomaly detection and classification. We report here the results on those two tasks obtained from their released implementation and note that the relative ranking of baselines remains the same as in the TimesNet benchmark (Wu et al., 2023), i.e. TimesNet is the best among the previous baselines.

4.2 Anomaly Detection

**Setup.** Anomalies are inherent in large-scale data and can be caused by noisy measurements. We consider the five datasets vastly used for anomaly-detection benchmarks: SMD (Su et al., 2019), MSL (Hundman et al., 2018a), SMAP (Hundman et al., 2018a), SWaT (Mathur & Tippenhauer, 2016) and PSM (Abdulaal et al., 2021). We follow (Xu et al., 2022; Shen et al., 2020) for data pre-processing, which generates a set of sub-series via a non-overlapping sliding window, and set the series length to 100. The original datasets SMD, MSL, SMAP, SWaT and PSM are split into collections of training set, validation set and test set following (Xu et al., 2022, Appendix K). We adopt Precision, Recall and F1-score (all in %) as the metrics, where higher values correspond to better performance.

**Results.** From Table 3, our model Nonstationary+CAB achieves the best average F1-score, surpassing TimesNet. Furthermore, CAB consistently and significantly improves the precision and F1-score, the preferred metric for balancing precision and recall, of the base Transformers.

Table 3: Anomaly detection task over five datasets. We report the Precision (P), Recall (R) and F1-score (F1), the harmonic mean of precision and recall, and highlight the best results and the second best results.
| Dataset | TimesNet | Transformer+CAB (Ours) | Nonstationary+CAB (Ours) | Transformer+CAB (Baseline) |
|---------|----------|------------------------|--------------------------|---------------------------|
| P | R | F1 | P | R | F1 |
| SMD | 87.88 | 81.54 | 84.59 | 76.13 | 79.56 | 85.35 | 85.11 |
| MSL | 89.55 | 75.29 | 83.80 | 71.57 | 78.68 | 80.70 | 78.06 |
| SMAP | 90.68 | 88.12 | 89.37 | 81.12 | 87.57 | 88.03 | 87.57 |
| SWaT | 90.95 | 95.42 | 93.13 | 68.84 | 96.53 | 90.57 | 94.17 |
| PSM | 96.26 | 96.26 | 96.26 | 79.26 | 82.56 | 86.96 | 90.76 |
| Average | 91.19 | 87.57 | 89.32 | 82.74 | 86.88 | 89.59 | 88.29 |

4.3 Classification

**Setup.** We select ten datasets from the UEA Time Series Classification Archive (Bagnall et al., 2018) following (Wu et al., 2023). These cover health care, audio recognition, transportation and other practical applications. The datasets are pre-processed similarly to (Zerveas et al., 2021, Appendix A), which assigns different series lengths for different subsets. We adopt accuracy (%) as the metric.

**Results.** As shown in Table 4, our model Transformer+CAB achieves the best overall result, surpassing TimesNet. Moreover, CAB demonstrates consistent performance improvement when combined with either Transformer or Nonstationary Transformer.

Table 4: Classification task over 10 datasets from UEA. The accuracies (%) are reported. We highlight the best results and the second best results.

| Dataset | TimesNet | Transformer+CAB (Ours) | Nonstationary+CAB (Ours) | Transformer+CAB (Baseline) |
|-----------------|----------|------------------------|--------------------------|---------------------------|
| EthanolConcentration | 28.14 | 26.96 | 27.94 | 24.39 | 25.10 | 28.90 | 27.94 |
| FaceDetection | 67.31 | 67.93 | 71.11 | 68.70 | 69.40 | 68.55 | 57.25 |
| Handwriting | 29.08 | 29.53 | 29.96 | 29.41 | 30.12 | 18.87 | 19.12 |
| Heartbeat | 74.15 | 75.12 | 75.12 | 72.20 | 72.20 | 75.12 | 70.73 |
| JapaneseVowels | 97.57 | 97.03 | 97.54 | 96.22 | 95.68 | 96.76 | 94.86 |
| PEMS-SF | 89.02 | 90.05 | 88.71 | 86.86 | 75.14 | 86.71 | 80.75 |
| SelfRegulationSCP1 | 91.13 | 91.13 | 91.47 | 83.28 | 82.94 | 57.00 | 88.05 |
| SelfRegulationSCP2 | 82.78 | 53.89 | 56.11 | 50.00 | 55.55 | 49.84 | 37.85 |
| SpokenArabicDigits | 98.68 | 98.45 | 99.05 | 98.82 | 98.91 | 98.32 | 96.54 |
| UWaveGestureLibrary | 86.48 | 86.25 | 85.94 | 81.56 | 85.94 | 44.98 | 81.25 |
| Average | 71.49 | 70.16 | 72.48 | 60.80 | 60.10 | 62.33 | 57.78 |

5 Conclusion and Future Work

In this paper, we proposed the novel correlated attention block (CAB) that can efficiently learn the cross-correlation between variates of MTS data, and be seamlessly plugged into existing Transformer-based models for performance improvement. The modularity of CAB, which can be flexibly plugged into follow-up Transformer architectures for efficiency gain, and the methodology behind our design of CAB, which is the first attention mechanism that aims to capture lagged cross-correlation in the literature, will greatly benefit future work on time series Transformers. Extensive experiments on imputation, anomaly detection and classification demonstrate the benefits of CAB in improving base Transformers, resulting in state-of-the-art models for the respective tasks. For future work, we will extend the design of CAB to be integrated into encoder-decoder Transformer architectures for improving performance in MTS predictive tasks.

REFERENCES

Ahmed Abdulaal, Zhuanghua Liu, and Tomer Lancewicki.
Practical approach to asynchronous multivariate time series anomaly detection and localization. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining*, KDD '21, pp. 2485–2494, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383325. doi: 10.1145/3447548.3467174. URL [https://doi.org/10.1145/3447548.3467174](https://doi.org/10.1145/3447548.3467174).

Anthony Bagnall, Hoang Anh Dau, Jason Lines, Michael Flynn, James Large, Aaron Bostrom, Paul Southam, and Eamonn Keogh. The UEA multivariate time series classification archive, 2018, 2018.

Ane Blázquez-García, Angel Conde, Usue Mori, and Jose A. Lozano. A review on outlier/anomaly detection in time series data, 2020.

Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, and Qi Zhang. Spectral temporal graph neural network for multivariate time-series forecasting. In *Proceedings of the 34th International Conference on Neural Information Processing Systems*, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.

Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza, Max Mergenthaler-Canseco, and Artur Dubrawski. N-hits: Neural hierarchical interpolation for time series forecasting, 2022.

Thevaa Chandereng and Anthony Gitter. Lag penalized weighted correlation for time series clustering. *BMC Bioinformatics*, 21(1):21, August 2020.

Chris Chatfield. *The analysis of time series: an introduction*. CRC Press, Florida, US, 6th edition, 2004.

Razvan-Gabriel Cirstea, Darius-Valer Micu, Gabriel-Marcel Muresan, Chenjuan Guo, and Bin Yang. Correlated time series forecasting using deep neural networks: A summary of results, 2018.

Razvan-Gabriel Cirstea, Tung Kieu, Chenjuan Guo, Bin Yang, and Sinno Jialin Pan. Enhancenet: Plugin neural networks for enhancing correlated time series forecasting. In *2021 IEEE 37th International Conference on Data Engineering (ICDE)*, pp. 1739–1750, 2021. doi: 10.1109/ICDE51399.2021.00153.

Javier E. Contreras-Reyes and Byron J. Idrovo-Aguirre. Backcasting and forecasting time series using detrended cross-correlation analysis. *Physica A: Statistical Mechanics and its Applications*, 560: 125109, 2020. ISSN 0378-4371. doi: https://doi.org/10.1016/j.physa.2020.125109. URL [https://www.sciencedirect.com/science/article/pii/S0378437120305768](https://www.sciencedirect.com/science/article/pii/S0378437120305768).

Wenjie Du, David Côté, and Yan Liu. SAITS: Self-attention-based imputation for time series. *Expert Systems with Applications*, 219:119619, jun 2023a. doi: 10.1016/j.eswa.2023.119619. URL [https://doi.org/10.1016%2Fj.eswa.2023.119619](https://doi.org/10.1016%2Fj.eswa.2023.119619).

Wenjie Du, David Côté, and Yan Liu. SAITS: Self-attention-based imputation for time series. *Expert Systems with Applications*, 219:119619, 2023b. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2023.119619. URL [https://www.sciencedirect.com/science/article/pii/S0957417423001203](https://www.sciencedirect.com/science/article/pii/S0957417423001203).

Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, and Hervé Jegou. Xcit: Cross-covariance image transformers, 2021.

Philippe Esling and Carlos Agon. Time-series data mining. *ACM Comput. Surv.*, 45(1), dec 2012. ISSN 0360-0300. doi: 10.1145/2379776.2379788.
URL [https://doi.org/10.1145/2379776.2379788](https://doi.org/10.1145/2379776.2379788). Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. Deep learning for time series classification: a review. *Data Mining and Knowledge Discovery*, 33(4):917–963, mar 2019. doi: 10.1007/s10618-019-00619-1. URL [https://doi.org/10.1007%2Fs10618-019-00619-1](https://doi.org/10.1007%2Fs10618-019-00619-1).
13D1zn0mpd
In the proposed method, several points of clarification regarding the comparison with LoRA emerge. Firstly, it would be beneficial to understand what distinguishes the proposed method from LoRA, given that the primary technique appears to focus on reducing the rank of $A_t B_t^{\top}$. Secondly, when referencing Table 4, one can observe that PERU-LoRA (16) has 232M parameters, which is only a marginal 3% reduction compared to the Single-Task model, yet its performance seems to lag behind. It raises the question of whether this slight reduction in parameters warrants the observed decrease in performance. Expounding on these aspects would provide a deeper understanding of the method's value proposition and potential areas for improvement.
EFFECTIVE AND PARAMETER-EFFICIENT REUSING FINE-TUNED MODELS Anonymous authors Paper under double-blind review ABSTRACT Many pre-trained large-scale models provided online have become highly effective in transferring to downstream tasks. At the same time, various task-specific models fine-tuned on these pre-trained models are available online for public use. In practice, as collecting task-specific data is labor-intensive and fine-tuning the large pre-trained models is computationally expensive, one can reuse task-specific fine-tuned models to deal with downstream tasks. However, using a model per task causes a heavy burden on storage and serving. Recently, many training-free and parameter-efficient methods have been proposed for reusing multiple fine-tuned task-specific models in a single multi-task model. However, these methods exhibit a large accuracy gap compared with using a fine-tuned model per task. In this paper, we propose Parameter-Efficient methods for ReUsing (PERU) fine-tuned models. For reusing Fully Fine-Tuned (FFT) models, we propose PERU-FFT, which injects sparse task vectors into a merged model via magnitude pruning. For reusing LoRA fine-tuned models, we propose PERU-LoRA, which uses a lower-rank matrix to approximate the LoRA matrix by singular value decomposition. Both PERU-FFT and PERU-LoRA are training-free. Extensive experiments conducted on computer vision and natural language processing tasks demonstrate the effectiveness and parameter-efficiency of the proposed methods. The proposed PERU-FFT and PERU-LoRA outperform existing merging methods by a large margin and achieve comparable performance to using a fine-tuned model per task. PERU-FFT is general and can be integrated into any existing merging method to boost performance. 1 INTRODUCTION In recent years, large-scale models pre-trained on massive data have proven effective in transferring to downstream tasks (Chen et al., 2022; Min et al., 2022; Yuan et al., 2023; Ruiz et al., 2023). Various pre-trained models are available on Hugging Face (Wolf et al., 2020), e.g., ResNet (He et al., 2016), ViT (Dosovitskiy et al., 2021), CLIP (Radford et al., 2021), and diffusion models (Ho et al., 2020; Rombach et al., 2022) for computer vision; T5 (Raffel et al., 2020), GPT-2 (Radford et al., 2019), and LLaMA (Touvron et al., 2023a;b) models for natural language processing. Practitioners specialize a pre-trained model to a task-specific model by either full or parameter-efficient fine-tuning (Houlsby et al., 2019; Hu et al., 2022; Lester et al., 2021; Jiang et al., 2023; Yu et al., 2023) on the task data, e.g., a CLIP-L/14 model (Radford et al., 2021) fine-tuned on the SUN397 benchmark (Xiao et al., 2016) can be used for scene recognition tasks. Many fine-tuned models are published online for public use. By 2023, more than 120,000 models are available on Hugging Face Hub. For a downstream task, as collecting task-specific data is labor-intensive and fine-tuning the large pre-trained models is computationally expensive, one can download and reuse the fine-tuned models from Hugging Face. In real-world applications, we usually need to deal with a number of tasks simultaneously (Dong et al., 2015; Siam et al., 2018; Raffel et al., 2020). Using a task-specific fine-tuned model for each task is effective but costly in storing and serving. Fine-tuning the pre-trained model on all task data can address this issue but requires expensive re-training and the availability of all task data, which is infeasible.
Recently, many training-free and parameter-efficient methods have been proposed for merging multiple fine-tuned task-specific models into a single multi-task model. For example, Task-Arithmetic (Ilharco et al., 2023) performs uniform merging by adding the average of all task vectors (i.e., the difference between the task model and the pre-trained model) to the pre-trained model, while Fisher-Merging (Matena & Raffel, 2022) improves uniform merging to weighted merging, where the weight for each task model is determined by the Fisher information matrix estimated on the validation loss. RegMean (Jin et al., 2023) further proposes to merge linear layers by solving a local linear regression problem. TIES-Merging (Yadav et al., 2023) trims low-magnitude elements in the task vectors and attempts to resolve sign disagreements across task models before merging models. For complex tasks, merging task models into a shared model may cause parameter interference (Yadav et al., 2023). Figure 1 shows the average testing accuracy on eight tasks when reusing fine-tuned ViT models (Dosovitskiy et al., 2021; Radford et al., 2021), demonstrating a large gap between the accuracy of existing merging methods (Task-Arithmetic, Fisher-Merging, RegMean, TIES-Merging) and using single-task fine-tuned models (denoted Single-Task).

Figure 1: Average testing accuracy on eight tasks by reusing fully fine-tuned models ((a) ViT-B/32, (b) ViT-B/16, (c) ViT-L/14). Post-Pruning and PERU-FFT keep the top-10% values.

In this paper, we propose PERU, a Parameter-Efficient method for ReUsing fine-tuned models. We first introduce a post-pruning technique (Zhu & Gupta, 2017; Liu et al., 2018; Wang et al., 2020; Zhang et al., 2022; Xia et al., 2022) to extract a sparse task vector. This method is simple and effective (Figure 1, Post-Pruning keeps the top-10% values). We further propose PERU-FFT, which extracts task-shared knowledge by merging task-specific models, and prunes the difference between the task-specific model and the merged model to extract a sparse vector containing task-specific knowledge. As shown in Figure 1, with only the top-10% values of task vectors, PERU-FFT achieves comparable performance with Single-Task and, moreover, performs better than existing merging algorithms by a large margin. For LoRA fine-tuned models, sparsifying task vectors is not suitable, as pruning the LoRA matrices leads to worse performance, while pruning their product cannot reduce the number of parameters compared with storing the LoRA matrices. To address this problem, we propose PERU-LoRA to approximate the LoRA matrices by lower-rank (rank-q) matrices obtained via singular value decomposition. We only need to keep the top-q singular values and their corresponding singular vectors. Empirically, the approximation error decreases exponentially fast w.r.t. q, while the accuracy increases exponentially fast. In particular, PERU-LoRA with q = 16 achieves comparable performance with Single-Task (Figure 2). Our contributions are summarized as follows: (i) We propose PERU-FFT for reusing fully fine-tuned models, where task vectors are computed from the merged model and the task-specific models. (ii) We propose PERU-LoRA for reusing LoRA fine-tuned models, where the lower-rank matrices are added to θ₀. (iii) Extensive experimental results on computer vision and natural language processing tasks show that PERU-FFT and PERU-LoRA perform much better than existing merging methods.
Furthermore, PERU-FFT and PERU-LoRA achieve comparable performance to Single-Task fine-tuned models, but are much more parameter-efficient. (iv) PERU-FFT is general and can be combined with any existing merging algorithm (e.g., Task-Arithmetic (Ilharco et al., 2023), Fisher-Merging (Matena & Raffel, 2022), RegMean (Jin et al., 2023), TIES-Merging (Yadav et al., 2023)) to boost performance.

### 2 RELATED WORKS

We consider a neural network \( f(x; \theta) \) with input \( x \) and parameters \( \theta \in \mathbb{R}^d \). Let \( \theta_0 \) be a pre-trained model provided on torchvision (Marcel & Rodriguez, 2010), HuggingFace (Wolf et al., 2020), or timm (Wightman, 2019), e.g., ViT-B/32 (Dosovitskiy et al., 2021). Besides, many task-specific models fine-tuned from $\theta_0$ are also publicly available online. Given $T$ tasks, each with a fine-tuned model, we aim to reuse the existing fine-tuned models $\{\theta_t : t = 1, \ldots, T\}$ to construct a model that can be used for solving the $T$ tasks simultaneously. Different from multi-task learning (Kendall et al., 2018; Liu et al., 2021; 2019; Ye et al., 2021; Lin et al., 2022; 2023), training data for all tasks are unavailable. Hence, we cannot learn a multi-task model by jointly re-training on data. Existing methods focus on merging all task-specific models into a single model and expect the merged model to have promising performance on all tasks. For example, Task-Arithmetic (Ilharco et al., 2023) merges all model weights as $\theta^* = \theta_0 + \lambda \sum_{t=1}^{T} (\theta_t - \theta_0)$, where $\lambda$ is a hyperparameter chosen on a small validation set, and $v_t \equiv \theta_t - \theta_0$ is a task vector representing the element-wise difference between $\theta_t$ and $\theta_0$. When $\lambda = \frac{1}{T}$, $\theta^*$ becomes the uniform average of all model weights, i.e., the Model soups method in Wortsman et al. (2022a). Wortsman et al. (2022b) ensemble the pre-trained model $\theta_0$ and fine-tuned model $\theta_t$ to improve the robustness of $\theta_t$. Fisher-Merging (Matena & Raffel, 2022) improves uniform merging to weighted merging, where the weights are determined by the Fisher information matrix estimated on the validation set. RegMean (Jin et al., 2023) proposes to merge linear layers by solving a local linear regression problem while merging other layers by uniform averaging. TIES-Merging (Yadav et al., 2023) trims low-magnitude elements in the task vector $v_t$ and resolves sign disagreements across task models before performing model merging. Ortiz-Jimenez et al. (2023) study how to fine-tune $\theta_0$ on the task data $D_t$ such that Task-Arithmetic can perform well. Pruning, which aims to reduce the model size while maintaining the model performance, is a popular technique for compressing and sparsifying neural networks. Many pruning methods (Zhu & Gupta, 2017; Liu et al., 2018; Wang et al., 2020; Zhang et al., 2022; Xia et al., 2022) sparsify model weights in an optimization or evolutionary manner and need enough training data, gradient information, or even re-training, which is unsuitable for the model-reusing problem. For example, Zhang et al. (2022) formulate pruning as a bi-level optimization problem and iteratively optimize to find a binary mask to select model weights. Magnitude-based pruning (Han et al., 2015; Narang et al., 2016; Zhu & Gupta, 2017), which selects weights of a trained model based on the weight magnitudes, is a data-free and training-free pruning method.
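As a concrete reference for the merging formula reviewed above, the following is a minimal sketch of Task-Arithmetic on flattened weights (our own illustration; λ and the toy tensors are assumptions, not the authors' code):

```python
import torch

def task_arithmetic_merge(theta_0, task_models, lam=0.3):
    """Task-Arithmetic (Ilharco et al., 2023): add the scaled sum of task vectors
    v_t = theta_t - theta_0 back to the pre-trained weights; lam = 1/T recovers
    uniform weight averaging (Model soups)."""
    task_vector_sum = sum(theta_t - theta_0 for theta_t in task_models)
    return theta_0 + lam * task_vector_sum

# Toy usage on flattened weight vectors standing in for full models.
theta_0 = torch.randn(1000)
task_models = [theta_0 + 0.01 * torch.randn(1000) for _ in range(8)]
theta_star = task_arithmetic_merge(theta_0, task_models)
```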
### 3 Parameter-Efficient Reusing Fine-tuned Models

#### 3.1 Reusing Fully Fine-Tuned Models

For reusing task-specific fine-tuned models, existing methods (e.g., Task-Arithmetic (Ilharco et al., 2023), Fisher-Merging (Matena & Raffel, 2022), RegMean (Jin et al., 2023), TIES-Merging (Yadav et al., 2023)) focus on merging all task models into a shared model without any task-specific parameters. As can be seen from Figure 1, their accuracies (averaged over eight tasks) are much lower than that of Single-Task. To deal with this issue, we propose to inject sparse task-specific vectors into the merged model. In reusing fine-tuned models, training-based pruning methods (Zhu & Gupta, 2017; Liu et al., 2018; Wang et al., 2020; Zhang et al., 2022; Xia et al., 2022) based on weight importance are infeasible for sparsifying the task vectors, since the training data are unavailable. We introduce post-pruning (Han et al., 2015; Narang et al., 2016; Zhu & Gupta, 2017), which extracts sparse task-specific vectors from task vectors based on their magnitudes. Compared with training-based pruning, Post-Pruning is training-free. For each task, we keep the top-$m\%$ (e.g., 1%, 10%) values of the task vector and prune the rest:

$$\hat{v}_t(m) = \text{keep top-}m\%\text{ of } v_t \text{ based on magnitude.}$$ (1)

At inference, $\theta_0 + \hat{v}_t(m)$ is used as a pruned task model. The procedure of Post-Pruning is shown in Algorithm 1. As $\theta_0 + \hat{v}_t(m)$ only depends on the $t$-th task model, it does not use shared knowledge from other tasks. We propose to merge the task-specific models before pruning. Specifically, let $u_t \equiv \theta_t - \theta^*$, $t = 1, \ldots, T$, where $\theta^*$ is a merged model. We prune $u_t$ to $\hat{u}_t(m)$ by keeping the top-$m\%$ values of $u_t$ as in (1). At inference, $\theta^* + \hat{u}_t(m)$ is used as a pruned task model. As the method for obtaining $\theta^*$ is flexible, any merging algorithm (e.g., Task-Arithmetic, Fisher-Merging, RegMean, TIES-Merging) can be adopted. The procedure, called PERU-FFT, is shown in Algorithm 1. Compared with Post-Pruning, PERU-FFT has the same number of parameters for a specific ratio $m\%$.

**Algorithm 1 Post-Pruning (resp. PERU-FFT).**

Require: $m\%$, $\theta_0; \theta_1, \ldots, \theta_T$; a merging algorithm $A_{\text{merging}}$;
1: if PERU-FFT: obtain $\theta^*$ by $A_{\text{merging}}$;
2: for $t = 1, \ldots, T$ do
3: $v_t = \theta_t - \theta_0$ (resp. $u_t = \theta_t - \theta^*$);
4: obtain $\hat{v}_t(m)$ (resp. $\hat{u}_t(m)$) by keeping the top-$m\%$ values;
5: evaluate $\theta_0 + \hat{v}_t(m)$ (resp. $\theta^* + \hat{u}_t(m)$) on task $t$'s testing set;
6: end for

### 3.2 Reusing LoRA Fine-Tuned Models

As pre-trained models are usually huge (e.g., ViT-L/14 (Dosovitskiy et al., 2021) has 343M parameters, T5-base (Raffel et al., 2020) has 220M parameters, and the LLaMA-2 (Touvron et al., 2023b) series has 7B, 13B, and 70B parameters), LoRA fine-tuning (Hu et al., 2022) is a parameter-efficient method to obtain task-specific models. The fine-tuned task model $\theta_t \in \mathbb{R}^{d_{\text{out}} \times d_{\text{in}}}$ is decomposed as

$$\theta_t = \theta_0 + A_t B_t^\top,$$

where $A_t \in \mathbb{R}^{d_{\text{out}} \times r}$, $B_t \in \mathbb{R}^{d_{\text{in}} \times r}$, and $r \ll \{d_{\text{in}}, d_{\text{out}}\}$. The number of parameters required in LoRA fine-tuning is $r \times (d_{\text{out}} + d_{\text{in}})$, much smaller than that of full fine-tuning ($d_{\text{out}} \times d_{\text{in}}$) as $r$ is usually small, e.g., $r = 128$.
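Returning to Algorithm 1, the magnitude-pruning step admits a very short implementation. The following sketch is our own illustration (names and the flattened-weight representation are assumptions, not the authors' released code):

```python
import torch

def keep_top_m_percent(task_vector: torch.Tensor, m: float) -> torch.Tensor:
    """Magnitude pruning used in Algorithm 1: keep the top-m% entries of a task
    vector by absolute value and zero out the rest (Equation 1)."""
    flat = task_vector.abs().flatten()
    k = max(1, int(flat.numel() * m / 100.0))
    threshold = flat.topk(k).values.min()
    return task_vector * (task_vector.abs() >= threshold)

# Post-Pruning computes the task vector against theta_0; PERU-FFT computes it
# against a merged model theta_star (e.g., from Task-Arithmetic).
theta_0, theta_t = torch.randn(100_000), torch.randn(100_000)   # stand-ins for flat weights
theta_star = theta_0 + 0.3 * (theta_t - theta_0)
u_hat = keep_top_m_percent(theta_t - theta_star, m=10.0)        # sparse task-specific vector
pruned_task_model = theta_star + u_hat                          # used at inference for task t
```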
Due to its efficiency, many task-specific LoRA fine-tuned models are available online for public use. Existing methods for merging fully fine-tuned models can be applied directly to merging LoRA fine-tuned models $\{\theta_t : t = 1, \ldots, T\}$. As shown in Figure 2, existing methods perform much worse than the Single-Task (LoRA fine-tuned) method. Hence, using a merged model for all tasks without task-specific parameters is undesirable. Different from reusing fully fine-tuned models, sparsifying $A_t B_t^\top$ is not parameter-efficient compared with storing $A_t$ and $B_t$ separately. In the following, we use singular value decomposition (SVD) to extract a small fraction of parameters from the task-specific LoRA matrix, which is then injected into the shared model.

Figure 2: Testing accuracy (averaged over eight tasks) by reusing LoRA fine-tuned models (PERU-LoRA with $q = 16$); panels (a) ViT-B/32, (b) ViT-B/16, (c) ViT-L/14.¹

¹Experimental setup is in Section 4.1.

We propose to approximate $A_t B_t^\top$ by a lower-rank matrix to save more parameters. Specifically, we first perform SVD for $A_t B_t^\top = U_t \Sigma_t V_t^\top$, where $U_t \in \mathbb{R}^{d_{\text{out}} \times r}$, $V_t \in \mathbb{R}^{d_{\text{in}} \times r}$, and $\Sigma_t \in \mathbb{R}^{r \times r}$ is a diagonal matrix with diagonal entries sorted from high to low. Let $U_t(q) \in \mathbb{R}^{d_{\text{out}} \times q}$ be the submatrix of the first $q$ columns of $U_t$, $V_t(q) \in \mathbb{R}^{d_{\text{in}} \times q}$ be the submatrix of the first $q$ columns of $V_t$, and $\Sigma_t(q) \in \mathbb{R}^{q \times q}$ be the submatrix of the first $q$ rows and columns of $\Sigma_t$ (corresponding to the $q$ largest singular values). The LoRA matrix $A_t B_t^\top$ is then approximated as $U_t(q) \Sigma_t(q) V_t(q)^\top$, where the number of parameters is reduced from $r \times (d_{\text{out}} + d_{\text{in}})$ to $q \times (d_{\text{out}} + d_{\text{in}} + 1)$. $q$ can be much smaller than $r$, e.g., $q = 16$ compared with $r = 128$, saving $8\times$ the additional parameters introduced by the LoRA matrices. At inference, $\theta_0 + U_t(q) \Sigma_t(q) V_t(q)^\top$ is used as the task model. The procedure, called PERU-LoRA, is shown in Algorithm 2.

Discussion. Unlike reusing fully fine-tuned models, merging models before extracting a task-specific lower-rank matrix is infeasible when reusing LoRA fine-tuned models. Specifically, let $\theta^*$ be a merged model; then $\theta_t - \theta^* = \theta_0 + A_t B_t^\top - \theta^*$ is not always a rank-$r$ matrix. For example, when using Task-Arithmetic (Ilharco et al., 2023) as the merging algorithm, $\theta_t - \theta^* = A_t B_t^\top - \lambda \sum_{s=1}^{T} A_s B_s^\top$, whose rank can be as large as $rT$.

Algorithm 2 PERU-LoRA.

Require: $\theta_0$; LoRA matrices $\{(A_t, B_t)\}_{t=1}^{T}$; rank $q$;
1: for $t = 1, \ldots, T$ do
2: compute $U_t(q), V_t(q), \Sigma_t(q)$ from $A_t B_t^\top$ by SVD;
3: evaluate $\theta_0 + U_t(q) \Sigma_t(q) V_t(q)^\top$ on task $t$'s testing set;
4: end for

4 EXPERIMENTS

4.1 EXPERIMENTS ON COMPUTER VISION TASKS

Datasets and models. Experiments are conducted on eight image classification tasks: MNIST (denoted MNI) (LeCun et al., 2010), GTSRB (denoted GTS) (Stallkamp et al., 2011), SVHN (denoted SVH) (Netzer et al., 2011), RESISC45 (denoted RES) (Cheng et al., 2017), SUN397 (denoted SUN) (Xiao et al., 2016), EuroSAT (denoted EUR) (Helber et al., 2019), DTD (Cimpoi et al., 2014), and Cars (denoted CAR) (Krause et al., 2013).
### 4 Experiments

#### 4.1 Experiments on Computer Vision Tasks

**Datasets and models.** Experiments are conducted on eight image classification tasks: MNIST (denoted MNI) (LeCun et al., 2010), GTSRB (denoted GTS) (Stallkamp et al., 2011), SVHN (denoted SVH) (Netzer et al., 2011), RESISC45 (denoted RES) (Cheng et al., 2017), SUN397 (denoted SUN) (Xiao et al., 2016), EuroSAT (denoted EUR) (Helber et al., 2019), DTD (Cimpoi et al., 2014), and Cars (denoted CAR) (Krause et al., 2013). Following Ilharco et al. (2023), we adopt three variants of the CLIP model (Radford et al., 2021) with ViT models (Dosovitskiy et al., 2021), including ViT-B/32, ViT-B/16, and ViT-L/14, as image encoders. For PERU-FFT, we use Task-Arithmetic (Ilharco et al., 2023) as the merging algorithm $A_{\text{merging}}$.

**Baselines.** We compare with (i) the Pre-Trained model $\theta_0$; (ii) Single-Task fully fine-tuned models (Single-Task); (iii) Multi-Task Learning (MTL) (Zhang & Yang, 2021), which requires all task data for training a model; and the state-of-the-art merging methods, including (iv) Task-Arithmetic (Ilharco et al., 2023), which merges model parameters by uniform averaging; (v) Fisher-Merging (Matena & Raffel, 2022), which takes a weighted average based on the Fisher information matrix computed on the validation loss; (vi) RegMean (Jin et al., 2023), which merges linear layers by solving a local linear regression problem on the validation data; and (vii) TIES-Merging (Yadav et al., 2023), which trims the task vectors and resolves sign disagreements before aggregating parameters.

Table 1: Testing accuracy on eight tasks reusing fully/LoRA fine-tuned models using ViT-B/32.

| Method | #params (M) | MNI | GTS | SVH | RES | SUN | EUR | DTD | CAR | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Pre-Trained | 113 | 48.25 | 32.56 | 31.61 | 60.65 | 63.18 | 45.11 | 43.99 | 59.74 | 48.14 |
| Single-Task | 908 | 99.72 | 99.23 | 97.42 | 95.56 | 75.03 | 99.00 | 79.47 | 78.73 | 90.52 |
| MTL | 113 | 99.45 | 98.91 | 95.80 | 93.90 | 72.85 | 98.22 | 77.87 | 74.44 | 88.93 |
| Task-Arithmetic | 113 | 93.27 | 65.99 | 71.62 | 71.57 | 63.63 | 78.41 | 51.76 | 61.50 | 69.72 |
| Fisher-Merging | 113 | 80.71 | 75.15 | 74.08 | 70.24 | 65.25 | 81.48 | 49.84 | 62.90 | 69.96 |
| RegMean | 113 | 92.55 | 65.12 | 75.48 | 75.56 | 65.72 | 84.33 | 56.01 | 64.54 | 72.41 |
| TIES-Merging | 113 | 97.79 | 75.30 | 84.10 | 70.71 | 59.24 | 75.89 | 53.51 | 58.72 | 71.91 |
| Fully FT | | | | | | | | | | |
| Post-Pruning (1%) | 123 | 58.41 | 40.61 | 39.38 | 67.08 | 66.63 | 56.26 | 48.83 | 63.95 | 55.14 |
| Post-Pruning (5%) | 159 | 95.82 | 78.61 | 74.35 | 83.67 | 71.60 | 85.81 | 62.39 | 72.73 | 78.12 |
| Post-Pruning (10%) | 204 | 99.17 | 95.30 | 93.85 | 92.13 | 74.39 | 96.37 | 71.97 | 77.09 | 87.53 |
| PERU-FFT (1%) | 123 | 96.17 | 76.33 | 79.27 | 78.03 | 66.88 | 84.89 | 58.03 | 65.99 | 75.70 |
| PERU-FFT (5%) | 159 | 99.12 | 92.66 | 91.86 | 88.48 | 71.35 | 94.85 | 67.77 | 73.08 | 84.90 |
| PERU-FFT (10%) | 204 | 99.49 | 97.57 | 95.92 | 93.00 | 73.52 | 97.63 | 72.98 | 76.92 | 88.38 |
| LoRA FT | | | | | | | | | | |
| Single-Task | 194 | 99.61 | 98.71 | 97.34 | 95.57 | 73.42 | 98.63 | 76.91 | 77.25 | 89.68 |
| Task-Arithmetic | 113 | 86.90 | 51.44 | 66.50 | 68.16 | 62.32 | 76.19 | 48.62 | 56.85 | 64.62 |
| Fisher-Merging | 113 | 86.71 | 53.85 | 62.44 | 71.19 | 65.16 | 72.67 | 50.37 | 62.88 | 65.66 |
| RegMean | 113 | 94.45 | 60.10 | 81.11 | 74.57 | 65.10 | 88.15 | 53.72 | 63.97 | 72.65 |
| TIES-Merging | 113 | 82.48 | 45.89 | 58.95 | 70.67 | 65.20 | 71.11 | 49.15 | 62.44 | 63.24 |
| PERU-LoRA (4) | 116 | 99.16 | 92.04 | 93.98 | 86.48 | 68.61 | 95.37 | 65.37 | 62.74 | 82.97 |
| PERU-LoRA (8) | 118 | 99.54 | 96.23 | 96.45 | 92.16 | 70.33 | 98.26 | 72.55 | 67.35 | 86.61 |
| PERU-LoRA (16) | 123 | 99.62 | 97.99 | 97.08 | 94.56 | 72.29 | 98.37 | 76.44 | 71.31 | 88.46 |

**Results.** Tables 1, 2, and 3 show the testing accuracy on the eight datasets using ViT-B/32, ViT-B/16, and ViT-L/14, respectively.
As can be seen, for reusing fully fine-tuned models, keeping the top-10% values lets both PERU-FFT and Post-Pruning achieve performance comparable with Single-Task while being more parameter-efficient (4.5× fewer parameters). PERU-FFT (with 1% additional parameters per task) consistently outperforms the existing merging methods by a large margin, demonstrating the effectiveness of injecting sparse task-specific vectors into the shared model. Compared with Post-Pruning, PERU-FFT achieves higher accuracy (averaged over eight tasks), showing that merging the task-specific models before pruning the task vectors is effective. Even PERU-FFT keeping only the top-1% values of the task vectors performs substantially better than the existing merging methods.

Table 2: Testing accuracy on eight tasks reusing fully/LoRA fine-tuned models using ViT-B/16.

| Method | #params (M) | MNI | GTS | SVH | RES | SUN | EUR | DTD | CAR | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Pre-Trained | 112 | 51.79 | 43.34 | 51.98 | 65.76 | 65.50 | 55.22 | 45.11 | 64.57 | 55.41 |
| Single-Task | 894 | 99.72 | 99.15 | 97.86 | 96.57 | 78.71 | 99.33 | 82.29 | 87.20 | 92.60 |
| Task-Arithmetic | 112 | 97.35 | 71.39 | 80.50 | 75.71 | 67.88 | 82.63 | 52.34 | 70.74 | 74.82 |
| Fisher-Merging | 112 | 94.52 | 61.21 | 73.24 | 75.25 | 68.54 | 80.41 | 50.74 | 69.94 | 71.73 |
| RegMean | 112 | 96.93 | 70.26 | 83.79 | 77.60 | 69.10 | 88.85 | 54.63 | 71.67 | 76.60 |
| TIES-Merging | 112 | 98.75 | 74.43 | 88.84 | 78.48 | 66.21 | 85.93 | 57.13 | 73.15 | 77.86 |
| Fully FT | | | | | | | | | | |
| Post-Pruning (1%) | 121 | 60.94 | 47.66 | 60.54 | 73.97 | 68.52 | 66.15 | 49.63 | 69.29 | 62.09 |
| Post-Pruning (5%) | 157 | 96.06 | 77.36 | 82.08 | 88.70 | 74.42 | 94.22 | 64.89 | 79.28 | 82.13 |
| Post-Pruning (10%) | 201 | 99.32 | 94.83 | 94.43 | 94.62 | 77.00 | 98.44 | 76.01 | 84.62 | 89.91 |
| PERU-FFT (1%) | 121 | 98.32 | 79.85 | 85.12 | 82.89 | 71.22 | 89.30 | 59.79 | 75.33 | 80.23 |
| PERU-FFT (5%) | 157 | 99.38 | 92.91 | 93.90 | 92.60 | 74.99 | 97.11 | 71.12 | 81.72 | 87.97 |
| PERU-FFT (10%) | 201 | **99.56** | **97.34** | **96.91** | **95.30** | **77.11** | **98.67** | **77.77** | **85.04** | **90.96** |
| LoRA FT | | | | | | | | | | |
| Single-Task | 192 | 99.77 | 99.11 | 97.72 | 96.21 | 76.63 | 98.89 | 79.95 | 86.27 | 91.82 |
| Task-Arithmetic | 112 | 95.59 | 63.06 | 77.30 | 72.92 | 66.05 | 82.67 | 49.04 | 64.46 | 71.38 |
| Fisher-Merging | 112 | 94.51 | 61.19 | 73.22 | 75.24 | 68.57 | 80.41 | 50.74 | 69.93 | 71.73 |
| RegMean | 112 | 97.89 | 68.73 | 85.26 | 76.30 | 68.17 | 91.96 | 52.66 | 70.54 | 76.44 |
| TIES-Merging | 112 | 90.69 | 54.52 | 71.18 | 74.41 | 68.02 | 77.59 | 48.56 | 67.98 | 69.12 |
| PERU-LoRA (4) | 114 | 99.35 | 93.96 | 95.52 | 88.65 | 72.21 | 96.81 | 69.73 | 71.05 | 85.91 |
| PERU-LoRA (8) | 117 | 99.64 | 97.51 | 97.16 | 93.40 | 73.55 | 98.52 | 76.12 | 76.72 | 89.08 |
| PERU-LoRA (16) | 122 | **99.66** | **98.54** | **97.61** | **95.25** | **75.54** | **98.78** | **78.72** | **81.88** | **90.75** |

As for reusing LoRA fine-tuned models, we can see that PERU-LoRA (16) achieves performance comparable with Single-Task while being more parameter-efficient (1.6× fewer parameters). Furthermore, both PERU-LoRA (8) and PERU-LoRA (16) outperform the existing merging methods by a large margin, while PERU-LoRA (4) also achieves higher accuracy, demonstrating that extracting a lower-rank task-specific matrix from the LoRA matrix is effective.
Compared with PERU-FFT (10%) and Post-Pruning (10%), PERU-LoRA (16) performs better while using 1.7× fewer parameters. Moreover, PERU-LoRA (16) achieves performance comparable with Single-Task (Fully FT) with 7.4× fewer parameters, showing that reusing LoRA fine-tuned models is very effective and parameter-efficient. Furthermore, compared with the Pre-Trained model, PERU-LoRA (16) uses only 10M more parameters but almost doubles the accuracy for the ViT-B/32 and ViT-B/16 models. As for ViT-L/14, PERU-LoRA (16) uses only 26M more parameters but achieves 1.4× higher accuracy than the Pre-Trained model.

Table 3: Testing accuracy on eight tasks reusing fully/LoRA fine-tuned models using ViT-L/14.

| Method | #params (M) | MNI | GTS | SVH | RES | SUN | EUR | DTD | CAR | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Pre-Trained | 343 | 76.36 | 50.55 | 58.45 | 71.05 | 68.28 | 62.41 | 55.32 | 77.73 | 65.02 |
| Single-Task | 2,740 | 99.77 | 99.33 | 98.12 | 97.30 | 82.13 | 99.26 | 84.68 | 92.36 | 94.12 |
| MTL | 343 | 99.63 | 99.07 | 97.57 | 96.32 | 80.84 | 99.19 | 84.36 | 90.64 | 93.45 |
| Task-Arithmetic | 343 | 98.95 | 85.80 | 87.20 | 86.60 | 73.84 | 94.48 | 65.69 | 83.68 | 84.53 |
| Fisher-Merging | 343 | 96.98 | 69.43 | 78.20 | 82.33 | 72.18 | 91.04 | 62.07 | 82.43 | 79.33 |
| RegMean | 343 | 98.42 | 81.37 | 88.03 | 85.27 | 72.77 | 95.37 | 65.74 | 84.09 | 83.88 |
| TIES-Merging | 343 | 99.01 | 81.34 | 89.42 | 89.49 | 76.18 | 95.96 | 68.24 | 86.83 | 85.81 |
| Post-Pruning (1%) | 370 | 88.11 | 57.55 | 67.26 | 78.27 | 71.40 | 75.78 | 59.89 | 82.04 | 72.54 |
| Post-Pruning (5%) | 480 | 99.07 | 84.66 | 87.85 | 92.75 | 77.40 | 97.48 | 72.02 | 88.96 | 87.52 |
| Post-Pruning (10%) | 617 | 99.67 | 96.95 | 96.86 | 96.25 | 80.56 | 99.04 | 79.31 | 91.54 | 92.52 |
| PERU-FFT (1%) | 370 | 99.17 | 90.67 | 90.99 | 89.62 | 75.55 | 96.30 | 69.36 | 86.06 | 87.21 |
| PERU-FFT (5%) | 480 | 99.62 | 96.46 | 95.87 | 94.41 | 78.90 | 98.41 | 76.76 | 89.14 | 91.20 |
| PERU-FFT (10%) | 617 | **99.74** | **98.43** | **97.43** | **96.37** | **80.79** | **98.93** | **80.53** | **90.72** | **92.87** |

(a) TaskArith. (b) FisherMerg. (c) RegMean. (d) TiesMerg. (e) Post-Pruning. (f) PERU-FFT.
Figure 3: t-SNE of samples from EuroSAT for methods reusing fully fine-tuned ViT-B/32 models.

Figure 3 visualizes the t-SNE (Van der Maaten & Hinton, 2008) of embeddings extracted from 200 images (20 images per class) randomly sampled from EuroSAT for methods reusing fully fine-tuned ViT-B/32 models. As can be seen, both PERU-FFT (10%) and Post-Pruning (10%) produce more compact and separable structures than the existing merging methods, demonstrating that injecting sparse task-specific vectors into the shared model helps extract more discriminative features. Furthermore, the clusters of PERU-FFT are denser than those of Post-Pruning. Figure 4 visualizes the t-SNE of embeddings extracted from 200 images (20 images per class) randomly sampled from EuroSAT for methods reusing LoRA fine-tuned ViT-B/32 models. As can be seen, PERU-LoRA (16) has a more compact and separable structure than the existing merging methods, showing that using a lower rank to approximate the trained LoRA matrix (whose rank is 128) is effective in extracting discriminative features for classification.

#### 4.2 Experiments on Natural Language Processing Tasks

We conduct experiments on four standard text classification datasets: MRPC (Dolan et al., 2004), RTE (Wang et al., 2018), SST-2 (Socher et al., 2013), and QNLI (Wang et al., 2018).
We adopt Flan-T5-base (Chung et al., 2022) as the model for text classification.

Figure 4: t-SNE of samples from EuroSAT for methods reusing LoRA fine-tuned ViT-B/32 models.

Table 4 shows the testing accuracy. As can be seen, for reusing fully fine-tuned models, keeping the top-10% values lets both PERU-FFT and Post-Pruning achieve performance comparable with Single-Task while being much more parameter-efficient (2.8× fewer parameters). Furthermore, PERU-FFT outperforms Task-Arithmetic, showing that introducing sparse task-specific vectors to the merged model is better. Compared with Post-Pruning, PERU-FFT is better, demonstrating that merging models is effective in extracting shared knowledge before pruning task vectors. In particular, PERU-FFT with the top-5% values is better than Post-Pruning with the top-10%. Hence, merging models is useful before extracting sparse task-specific vectors. As for reusing LoRA fine-tuned models, PERU-LoRA with $q = 8$ or $16$ achieves almost the same performance as Single-Task (LoRA FT) but has fewer parameters. Furthermore, PERU-LoRA outperforms the existing merging methods by a large margin. Moreover, the performance of PERU-LoRA with $q = 8$ is close to that of Single-Task (Fully FT) but is much more parameter-efficient (3.9× fewer parameters).

Table 4: Testing accuracy on four tasks reusing fully/LoRA fine-tuned models using Flan-T5-base.

| Method | #params (M) | MRPC | RTE | SST-2 | QNLI | Avg |
|---|---|---|---|---|---|---|
| Pre-Trained | 225 | 75.33 | 57.04 | 52.64 | 66.59 | 62.90 |
| Single-Task | 894 | 89.30 | 79.06 | 94.72 | 93.00 | 89.02 |
| Task-Arithmetic | 225 | 82.29 | 73.29 | 93.23 | 88.16 | 84.24 |
| Fisher-Merging | 225 | 80.61 | 70.04 | 92.66 | 85.63 | 82.23 |
| RegMean | 225 | 84.52 | 76.53 | 92.55 | 91.16 | 86.19 |
| TIES-Merging | 225 | 86.70 | 74.73 | 93.23 | 84.13 | 84.70 |
| Fully FT | | | | | | |
| Post-Pruning (1%) | 234 | 75.52 | 62.45 | 69.72 | 81.90 | 72.40 |
| Post-Pruning (5%) | 270 | 81.23 | 68.23 | 92.66 | 90.28 | 83.10 |
| Post-Pruning (10%) | 314 | 86.26 | 77.62 | 94.04 | 91.69 | 87.40 |
| PERU-FFT (1%) | 234 | 83.62 | 75.81 | 93.81 | 89.86 | 85.77 |
| PERU-FFT (5%) | 270 | 86.63 | 78.34 | 94.04 | 91.43 | 87.61 |
| PERU-FFT (10%) | 314 | **87.58** | **78.70** | **94.27** | **91.84** | **88.10** |
| LoRA FT | | | | | | |
| Single-Task | 239 | 87.47 | 79.06 | 94.04 | 92.70 | 88.32 |
| Task-Arithmetic | 225 | 81.52 | 72.92 | 92.43 | 86.78 | 83.42 |
| Fisher-Merging | 225 | 80.92 | 72.92 | 92.09 | 85.28 | 82.80 |
| RegMean | 225 | 82.00 | 75.09 | 92.20 | 90.68 | 84.99 |
| TIES-Merging | 225 | 83.47 | 65.34 | 92.32 | 82.92 | 81.01 |
| PERU-LoRA (4) | 227 | 87.24 | 77.26 | 93.81 | 92.51 | 87.70 |
| PERU-LoRA (8) | 229 | **87.64** | **78.70** | **93.92** | **92.53** | **88.20** |
| PERU-LoRA (16) | 232 | 86.82 | 79.42 | 94.04 | 92.55 | 88.21 |

#### 4.3 Usefulness of Integrating PERU-FFT into Existing Merging Methods

The proposed PERU-FFT is general and can be combined with any existing merging method. In Section 4.1, we use Task-Arithmetic as $A_{\text{merging}}$ in Algorithm 1. We conduct additional experiments with the ViT-B/32 setting to verify the benefits of integrating PERU-FFT into other merging methods (Task-Arithmetic, Fisher-Merging, RegMean, TIES-Merging). Figure 5 shows the testing accuracy (detailed results are in Table 5 of Appendix B.1). As can be seen, PERU-FFT consistently boosts the performance of the existing methods by a large margin.
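To illustrate this generality, the sketch below plugs a rough TIES-Merging-style routine into the `peru_fft` function from the earlier sketch. It follows the trim/elect-sign/aggregate description given in Section 4.1, but the exact trimming ratio, sign election, and scaling are illustrative assumptions rather than a faithful reimplementation of Yadav et al. (2023).

```python
import torch

def ties_style_merge(theta0, task_models, keep_percent=20.0, lam=0.3):
    """A rough TIES-Merging-style A_merging: trim each task vector by magnitude,
    elect a per-coordinate sign, and average the entries agreeing with it."""
    vs = [topk_mask(t - theta0, keep_percent) for t in task_models]  # trim
    elected = torch.sign(sum(vs))                                    # elect sign
    agree = [v * (torch.sign(v) == elected) for v in vs]             # drop conflicts
    counts = sum((a != 0).float() for a in agree).clamp(min=1.0)
    return theta0 + lam * sum(agree) / counts                        # aggregate

# PERU-FFT composes with it unchanged:
# pruned = peru_fft(theta0, task_models, m_percent=1.0, merge_fn=ties_style_merge)
```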
#### 4.4 Effects of $q$ on PERU-LoRA

We perform experiments to study the effect of the rank $q$ on the testing accuracy of PERU-LoRA, using the settings in Section 4.1. Figure 6 shows the testing accuracy (averaged over eight tasks) w.r.t. $q$. As can be seen, increasing $q$ leads to better performance. Furthermore, PERU-LoRA with rank 40 achieves almost the same performance as Single-Task (LoRA fine-tuned). Hence, using a lower-rank matrix to approximate the trained LoRA matrix is effective and more parameter-efficient.

#### 4.5 Effects of $m\%$ on Post-Pruning and PERU-FFT

In this section, we conduct experiments to study the effect of $m\%$ on the performance of Post-Pruning and PERU-FFT, using the settings in Section 4.1. Figure 7 shows the testing accuracy (averaged over eight tasks) w.r.t. $m\% \in [0\%, 40\%]$ using ViT-B/32, ViT-B/16, and ViT-L/14. As can be seen, the accuracies of Post-Pruning and PERU-FFT increase as $m\%$ increases. When $m\%$ is larger than 20%, their accuracies reach the Single-Task performance and saturate. For $m\% \leq 10\%$, PERU-FFT always performs better than Post-Pruning, suggesting that merging models before pruning is important when most parameters are pruned.

### 5 Conclusion

In this paper, we studied the problem of reusing fine-tuned models. We proposed two parameter-efficient methods: (i) PERU-FFT for reusing fully fine-tuned models, which injects sparse task-specific vectors into the merged model; and (ii) PERU-LoRA for reusing LoRA fine-tuned models, which uses a lower-rank matrix to approximate the LoRA matrix. Extensive experiments on computer vision and natural language processing tasks demonstrate that PERU-FFT and PERU-LoRA significantly outperform existing merging methods. Additionally, the proposed methods achieve performance comparable to Single-Task fine-tuned models while being much more parameter-efficient. Moreover, PERU-FFT is general and can be combined with any existing merging algorithm to boost performance.

REFERENCES

Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. Meta-learning via language model in-context tuning. In Annual Meeting of the Association for Computational Linguistics, 2022.

Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE, 2017.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Preprint arXiv:2210.11416, 2022.

Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.

Bill Dolan, Chris Quirk, and Chris Brockett. Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In International Conference on Computational Linguistics, 2004.

Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. Multi-task learning for multiple language translation. In Annual Meeting of the Association for Computational Linguistics, 2015.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale.
In International Conference on Learning Representations, 2021.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural networks. In Neural Information Processing Systems, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Neural Information Processing Systems, 2020.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, 2019.

Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In International Conference on Learning Representations, 2023.

Weisen Jiang, Yu Zhang, and James Kwok. Effective structured-prompting by meta-learning and representative verbalizer. In International Conference on Machine Learning, 2023.

Xisen Jin, Xiang Ren, Daniel Preoţiuc-Pietro, and Pengxiang Cheng. Dataless knowledge fusion by merging weights of language models. In International Conference on Learning Representations, 2023.

Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
ufvwhR3XmN
The tradeoff study between temporal context and spectral context cannot support the conclusion that higher frequency-domain resolution provides more benefit than higher time-domain resolution, as the results of these two settings are very close on the test set (20.80 vs. 20.66).
A JOINT SPECTRO-TEMPORAL RELATIONAL THINKING BASED ACOUSTIC MODELING FRAMEWORK

Anonymous authors
Paper under double-blind review

ABSTRACT

Relational thinking refers to the inherent ability of humans to form mental impressions about relations between sensory signals and prior knowledge, and subsequently incorporate them into their model of the world. This ability plays a key role in human understanding of speech, yet it has not been a prominent feature of artificial speech recognition systems. Recently, there have been some attempts to correct this oversight, but these have been limited to coarse utterance-level models that operate exclusively in the time domain. In an attempt to narrow the gap between artificial systems and human abilities, this paper presents a novel spectro-temporal relational thinking based acoustic modeling framework. Specifically, it first generates numerous probabilistic graphs to model the relations among consecutive speech segments across both the time and frequency domains. These graphs are then coupled and transformed into latent representations for downstream tasks, during which meaningful spectro-temporal patterns formed by the co-occurrence of certain node pairs can be uncovered. Models built upon this framework outperform state-of-the-art systems with a 7.82% improvement in phoneme recognition tasks. In-depth analyses further reveal that our proposed relational thinking modeling mainly improves the model's ability to recognize vowel phonemes.

1 INTRODUCTION

Deep learning techniques have brought substantial advancements to automatic speech recognition (ASR), making it one of the most promising means of human-machine communication (Hinton et al., 2012). However, most deep neural network (DNN) based speech recognition systems (Vinyals et al., 2012; Abdel-Hamid et al., 2014; Chan et al., 2016; Passricha & Aggarwal, 2019; Wang et al., 2020; Baevski et al., 2020; Gulati et al., 2020) have drawn limited inspiration from the way speech is processed and recognized by the human brain (Bohnstingl et al., 2022), instead treating the process as a black box. As a consequence, the performance of these systems still lags behind that of the human brain (Malik et al., 2021). Recognizing the limitations inherent in current artificial systems, recent research has endeavored to integrate biologically inspired mechanisms into existing DNN based systems, seeking to enhance interpretability and narrow the gap between artificial systems and the human brain (Dong & Xu, 2020; Bohnstingl et al., 2022).

Human minds are constantly and unconsciously filled with innumerable mental impressions pertaining to relations between current sensory signals and prior knowledge while hearing, seeing, smelling, etc. (Peirce, 2012). These mental impressions (i.e., percepts) are subsequently coupled and transformed into generalized understandings (i.e., concepts) (Mandler, 2007). This process, termed relational thinking, is a fundamental human learning process that enables discerning meaningful patterns within the continuous flow of sensory data (Alexander, 2016). While humans rely on this inherent mechanism for speech comprehension (Birjandi & Sabah, 2012), artificial speech recognition systems have rarely employed it. The majority of state-of-the-art systems, e.g., wav2vec2 (Baevski et al., 2020), have been developed using transformer architectures (Vaswani et al., 2017), which employ attention mechanisms to capture dependencies between different parts of the sequence.
However, these systems do not explicitly comprehend the relational information inherent in the sequence in the way the human brain does. The attention mechanisms assess the significance of different parts of the sequential input when producing each entry of the output, allowing the model to focus only on pertinent information, as illustrated by Fig. 1 (a). In contrast, relational thinking captures the inherent relationships and interactions between pairs of elements or features within the input sequence and estimates each entry of the output by aggregating all the pair-wise information, as shown in Fig. 1 (b). The information captured through relational thinking thus places a greater emphasis on the implications rooted in the co-occurrence of pairs of informative elements. This proves particularly beneficial for speech recognition, as certain pairs tend to appear jointly, for instance, the phonemes /m/ and /ɪ/ ("me", "autonomy", etc.). However, such knowledge is not intrinsically captured by the attention mechanism prevalent in most current systems.

One of the few examples of the use of relational thinking models formulated this process in a conversational speech recognition scenario (Huang et al., 2020); the relational information acquired during the process was utilized as an additional input for the recognition task. However, Huang et al. (2020) only investigated utterance-level relational information in conversational scenarios. In a distinct yet highly relevant realm, natural language processing, Xue et al. (2021) proposed an approach to relation extraction that focused on extracting relations between words. Both Huang et al. (2020) and Xue et al. (2021) modeled relations either at the utterance level or at the word level. However, humans process speech and language at the more granular level of phonemes (Dusan & Rabiner, 2005; Wingfield et al., 2017). Furthermore, existing works have modeled the relations among elements of the input sequence separated in time only, whereas humans process speech by jointly considering multiple domains (e.g., time, frequency, semantics, etc.) rather than focusing exclusively on relationships in the time domain (Jurafsky & Martin, 2000).

In this paper, we propose a novel joint spectro-temporal relational thinking based acoustic modeling framework. The novelty lies in four aspects. 1) In contrast to previous approaches that focused solely on temporal patterns, the proposed framework captures relations across both the time and frequency domains of the sensory input (as illustrated by Fig. 1 (c)) using a collection of probabilistic graphs, and then transforms the relational information involved in the graphs into a form that can be used by downstream tasks. 2) Our approach tackles real-world scenarios where the input and output sequences differ in length. To facilitate the training of the proposed framework, we develop a tractable loss that optimizes the variational lower bound of the model log-likelihood. 3) Models built upon our proposed framework outperform the state-of-the-art baseline, demonstrating a performance gain of up to 7.82% in phoneme recognition tasks. Further analysis shows that the performance gain primarily originates from the model's enhanced ability to recognize vowels. This enhancement mirrors human proficiency in recognizing vowels more readily than consonants (Meyer et al., 2006).
We also uncover the relevance of the captured relations to phoneme groups, where the patterns involved in the relations exhibit more similarities for phoneme classes within the same group. Additionally, the generalizability of the proposed framework is validated by employing other types of acoustic features (e.g., MFCCs), where relational thinking modeling consistently benefits downstream tasks. 4) We theoretically analyze the differences between self-attention mechanisms and relational thinking. These details are provided in Appendix D for those interested in further exploration.

2 MODELING RELATIONAL THINKING

Previous relational thinking approaches have employed graphs to model relationships between entries (or time steps) of a sequence, where each entry is regarded as a node in the graphs. The goal of such approaches is to capture meaningful pair-wise patterns over time using these graphs (as illustrated by Fig. 1 (b)), and then to aggregate and transform the relational information involved in the graphs into a latent form that can be interpreted by subsequent layers of the model.

Consider a sensory input $H = [h_1, \ldots, h_T]$ corresponding to $T$ time steps. As illustrated by Fig. 2, the relational thinking process is carried out via the following three steps (Huang et al., 2020):

1) Perception: We first construct an infinite number of graphs $\{G^{(k)}\}_{k=1}^{\infty}$, where $G^{(k)}(V^{(k)}, E^{(k)})$ is the $k$-th percept graph, with $V^{(k)}$ and $E^{(k)}$ denoting the node set and edge set, respectively. Each $h_i \in \mathbb{R}^{D_h}$, $i = 1, \ldots, T$, corresponds to a node $v_i^{(k)}$ in every percept graph $G^{(k)}$, while each element $\alpha_{i,j}^{(k)}$ of the adjacency matrix $A^{(k)}$ is associated with an edge $e_{i,j}^{(k)} \in E^{(k)}$ between a pair of nodes $(v_i^{(k)}, v_j^{(k)})$ of $G^{(k)}$. The value of $\alpha_{i,j}^{(k)}$ indicates the significance of the co-occurrence of the node pair $(v_i^{(k)}, v_j^{(k)})$. Since the percepts form at an unconscious level of awareness (Rapp & Braasch, 2023), we assume that the probability of an edge's existence within the percept graphs is close to zero. To model this characteristic, we let the edge weights of the percept graphs follow a set of Bernoulli distributions, i.e., $\{\alpha_{i,j}^{(k)}\}_{k=1}^{\infty} \sim \text{Bern}(\lambda_{i,j})$, where the probability of edge existence $\lambda_{i,j} \to 0$.

2) Coupling: To aggregate the infinite number of percept graphs $\{G^{(k)}\}_{k=1}^{\infty}$, coupling is performed to derive an equivalent summary graph $\tilde{G}$. In this graph, the original nodes $h_1, \ldots, h_T$ are preserved, while each edge $\tilde{\alpha}_{i,j}$ is obtained by summing the corresponding edges over all percept graphs.

3) Transformation: Transformation converts the innumerable unconscious percepts into a recognizable notion of knowledge. Specifically, we first transform the summary graph $\tilde{G}$, which represents the infinite number of percept graphs, into a task-specific graph $G$ by introducing a transformation variable $s_{i,j}$ for each edge $\tilde{\alpha}_{i,j}$, such that the task-specific edge is $\hat{\alpha}_{i,j} = s_{i,j}\, \tilde{\alpha}_{i,j}$. Next, from $G$ we abstract a graph embedding $r$ by summing the embeddings of all node pairs weighted by $\hat{\alpha}_{i,j}$, i.e., $r = \sum_{(i,j) \in \{(i,j) \,|\, i<j,\, (i,j) \in \mathcal{E}\}} \hat{\alpha}_{i,j} f_\theta(h_i, h_j)$. $r$ is then ready for use in a specific downstream task.
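The full parameterization of these three steps (Eqs. (7)–(12)) lives in Appendix C and is not reproduced here. As a rough illustration only, the PyTorch sketch below replaces the sampled Bernoulli/Binomial edge variables with deterministic sigmoid relaxations; `edge_net`, `trans_net`, and `f_theta` are assumed network names, and the O(T²) double loop is kept for readability.

```python
import torch
import torch.nn as nn

class RelationalThinkingSketch(nn.Module):
    """Summary-graph edges -> task-specific edges -> graph embedding r."""

    def __init__(self, d_h: int, d_r: int):
        super().__init__()
        self.edge_net = nn.Linear(2 * d_h, 1)   # stands in for the summary edge posterior
        self.trans_net = nn.Linear(2 * d_h, 1)  # stands in for the transformation variable
        self.f_theta = nn.Sequential(nn.Linear(2 * d_h, d_r), nn.Tanh())
        self.d_r = d_r

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        """H: (T, d_h) node features; returns the graph embedding r of size d_r."""
        T = H.size(0)
        r = H.new_zeros(self.d_r)
        for i in range(T):
            for j in range(i + 1, T):
                pair = torch.cat([H[i], H[j]])
                alpha_sum = torch.sigmoid(self.edge_net(pair))  # ~ summary edge
                s = torch.sigmoid(self.trans_net(pair))         # ~ transformation s_ij
                r = r + (s * alpha_sum) * self.f_theta(pair)    # alpha^_ij f_theta(h_i, h_j)
        return r
```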
The above described relational thinking modeling offers the unique ability to capture the co-occurrence of entries within the sensory input. The additional knowledge acquired during this process, which is not available from attention mechanisms (Vaswani et al., 2017), leads to further enhancement of the downstream task. More details about the modeling of relational thinking and the differences between relational thinking and the attention mechanism are provided in Appendix C and Appendix D, respectively.

3 Proposed Spectro-Temporal Relational Thinking Framework

To exploit the range of information that is more readily accessible from different domains (e.g., the time domain, the frequency domain, etc.), we propose a framework that models this process jointly across both the time and frequency domains (and, more generally, across the dimensions of any acoustic representation). This enables a more comprehensive description of speech signals.

3.1 Spectro-Temporal Relational Thinking Based Acoustic Modeling

The structure of the proposed acoustic modeling framework is depicted in Fig. 3. Given the raw waveform of a speech utterance, we first employ the feature extraction module to calculate the acoustic feature vectors $c_t \in \mathbb{R}^{D_c}$, $t = 1, \ldots, T$, corresponding to each of the time steps. Then, we re-organize them into a set of feature maps $C = \{C_1, \ldots, C_T\}$ by forming each feature map with the current and the previous $w - 1$ time steps as $C_t = [c_{t-w+1}, \ldots, c_t]$, ensuring causality.¹

¹In a slight abuse of terminology, we refer to the feature space, in which $c_1, \ldots, c_T$ exist, as a frequency domain, although $c_t$ can be an arbitrary type of acoustic feature.

$C_t$ is subsequently used as the sensory input for relational thinking modeling at time step $t$. For time steps with $t < w$, specifically, $C_t$ is padded with $0 \in \mathbb{R}^{D_c}$ such that all feature maps $C_t, \forall t$, have the identical dimension of $D_c \times w$. For the relational thinking module, every $C_t$ is first smoothed and sub-sampled as

$$\tilde{C}_t = \Xi(C_t), \tag{1}$$

where $\Xi$ denotes a filtering operator. The function of $\Xi$ is to adjust the dimension of the original feature map $C_t$, such that the resultant $\tilde{C}_t \in \mathbb{R}^{D_c \times \tilde{w}}$ has a dimension suitable for the subsequent spectro-temporal relational thinking modeling. Next, $\tilde{C}_t$ is divided into a number of sub-feature maps as

$$\tilde{C}_t = \begin{bmatrix} \Lambda_{t,1,1} & \cdots & \Lambda_{t,1,D^{(t)}} \\ \vdots & \ddots & \vdots \\ \Lambda_{t,D^{(f)},1} & \cdots & \Lambda_{t,D^{(f)},D^{(t)}} \end{bmatrix}, \tag{2}$$

where every one of the total $u = D^{(f)} \times D^{(t)}$ sub-feature maps $\Lambda_{t,d^{(f)},d^{(t)}} \in \mathbb{R}^{D_s \times \tilde{w}_s}$ is ready to be mapped to a node within the percept graphs $G_t^{(k)}$.

As for the filtering $\Xi$ in (1), we explain its necessity with the example in Fig. 4. For the perception step of time domain modeling illustrated by Fig. 4 (a), each $c_i$ from a time step can be directly mapped to a node in the percept graphs, with the number of nodes in a graph corresponding to the number of time steps ($w = 7$) included in $C_t$. However, in the spectro-temporal modeling, each node in the percept graphs encompasses information in both the time and frequency domains, spanning over $\tilde{w}_s$ and $D_s$, respectively.
As illustrated by Fig. 4 (b), given $D_c = 6$, $w = 7$, and $u = 6$, it is not possible to evenly divide the $6 \times 7$ feature map $C_t$ into 2 rows and 3 columns, or 3 rows and 2 columns, of sub-feature maps, shown by the two red blocks in the figure. As a result, adjusting the dimension of the original feature map $C_t$ is necessary. We implement $\Xi$ with a temporal convolution in the proposed framework. Furthermore, we can define $\Delta_t := \tilde{w}/\tilde{w}_s$ and $\Delta_f := D_c/D_s$, with $\Delta_t, \Delta_f \in \mathbb{N}^+$, as the resolutions of relational thinking modeling in the time and frequency domains, respectively. A higher resolution $\Delta_t$ or $\Delta_f$ indicates a more fine-grained capture of relations across the corresponding domain.

By sequentially performing the perception, coupling, and transformation steps (as described by (7)–(12) in Appendix C) on $\tilde{C}_t$ for each time step $t$, we obtain a sequence of graph embeddings $r_1, \ldots, r_T$. By concatenating each $r_t$ with the corresponding acoustic feature vector $c_t$, we then obtain a more comprehensive speech representation

$$\tilde{c}_t = [c_t^\top, r_t^\top]^\top \tag{3}$$

for each time step. The sequence of concatenated representations $\tilde{c}_1, \ldots, \tilde{c}_T$ is finally fed into a prediction network (e.g., a linear projection) for the ultimate recognition task.

In the proposed spectro-temporal relational thinking modeling, a node pair refers to a spectro-temporal pattern formed by the co-occurrence of two sub-feature maps within an interval (i.e., the temporal span covered by a sub-feature map $\Lambda_{t,d^{(f)},d^{(t)}}$) or across intervals. Therefore, by incorporating both the time and frequency domains, we are able to capture not only the relations between time intervals, but also the relations across different frequency bands within an interval or across intervals.

When modeling relational thinking for each time step $t$, it is essential to work with a feature map $C_t$ that has a sufficiently wide context (i.e., a sufficiently large $w$). This ensures that enough local context is available for effective relational thinking modeling. Therefore, in the proposed relational thinking framework, we take into account a context spanning at least 3 consecutive phonemes. This is in line with the tri-phone models employed in HMM based acoustic models in the past (Jurafsky & Martin). Also note that for a given number of nodes $u$ to be included in the graphs, there exist multiple choices for the resolutions $(\Delta_t, \Delta_f)$. As illustrated by the two solutions for the example in Fig. 4 (b), given $u = 6$, we can obtain either $(\Delta_t, \Delta_f) = (3, 2)$ or $(\Delta_t, \Delta_f) = (2, 3)$ for the spectro-temporal perception. Additionally, we can obtain two more solutions for the resolutions, i.e., $(6, 1)$ and $(1, 6)$, which in fact correspond to temporal-only modeling and spectral-only modeling within a single domain. Variations in the resolution settings can have different effects on the performance of the downstream task; this aspect is discussed in detail in Section 5.1.
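As a sanity check on Eq. (2), the following sketch divides a filtered feature map $\tilde{C}_t$ into its $u = D^{(f)} \times D^{(t)}$ sub-feature maps; it assumes the divisibility that the temporal convolution $\Xi$ in Eq. (1) is there to guarantee (recall the indivisible $6 \times 7$ example above).

```python
import torch

def to_nodes(C_tilde: torch.Tensor, delta_f: int, delta_t: int) -> torch.Tensor:
    """Divide C~_t (D_c x w~) into u = delta_f * delta_t sub-feature maps (Eq. (2)),
    one per percept-graph node; requires D_c % delta_f == 0 and w~ % delta_t == 0."""
    D_c, w = C_tilde.shape
    D_s, w_s = D_c // delta_f, w // delta_t
    blocks = C_tilde.reshape(delta_f, D_s, delta_t, w_s).permute(0, 2, 1, 3)
    return blocks.reshape(delta_f * delta_t, D_s * w_s)  # u flattened nodes

# e.g., an 8 x 8 map with (delta_t, delta_f) = (2, 4) yields u = 8 nodes of size 2 x 4
nodes = to_nodes(torch.randn(8, 8), delta_f=4, delta_t=2)
assert nodes.shape == (8, 8)
```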
### 3.2 Training Relational Thinking Based Models

For sequence modeling tasks like speech recognition, a common challenge arises from the varying lengths of the input and output sequences. This requires a loss function capable of managing such variations in sequence length. While Huang et al. (2020) and Chung et al. (2015) only considered scenarios where the input and output sequences have equal lengths, our proposed spectro-temporal relational thinking framework is designed to handle more general scenarios where the input and output sequences can have different lengths. However, a tractable loss function is required to enable the training of our proposed framework.

Given the complexity introduced by the random processes governing the generation of the graph edges (as detailed by (7)–(12) in Appendix C), direct optimization of the model log-likelihood $\log p(y|C)$ is infeasible. Instead, we employ the variational lower bound $L$ (Sohn et al., 2015), by optimizing which the log-likelihood can also be maximized:

$$\log p(y|C) \geq \mathbb{E}_{q(\tilde{A}, S|C)}[\log p(y|C, \tilde{A}, S)] - \text{div}(q(\tilde{A}, S|C) \,\|\, p(\tilde{A}, S|C)) = L, \tag{4}$$

where $\text{div}(\cdot\|\cdot)$ represents the KL divergence. In our proposed framework, we have two sets of variational latent variables that require optimization: $\tilde{A} = \{\tilde{A}_1, \ldots, \tilde{A}_T\}$ and $S = \{S_1, \ldots, S_T\}$, representing the adjacency matrices of the summary graphs and the graph transformation variable matrices for all time steps, respectively. $q(\tilde{A}, S|C)$ denotes the approximate posterior for $p(\tilde{A}, S|C, y)$, while $p(\tilde{A}, S|C)$ represents the prior.

For the case where the input and output sequences have equal lengths (Huang et al., 2020; Chung et al., 2015), the prediction term in (4) can be decomposed into a frame-wise form as $\mathbb{E}_{q(\tilde{A}, S|C)}[\log p(y|C, \tilde{A}, S)] = \sum_{t=1}^{T} \mathbb{E}_{q(\tilde{A}, S|C)}[\log p(y_t|C_t, \tilde{A}, S)]$. However, this does not generalize to the case where the input and output sequences are of different lengths, which forces us to recover $y$ using $C, \tilde{A}, S$ throughout all time steps (see (17) in Appendix E). On the other hand, according to Nan et al. (2023), since $p(\tilde{A}, S|C) = \prod_{t=1}^{T} p(\tilde{A}_t, S_t|C_t)$, the KL divergence term in (4) can be decomposed as $\sum_{t=1}^{T} \text{div}(q(\tilde{A}_t, S_t|C_t) \,\|\, p(\tilde{A}_t, S_t|C_t))$, where $q(\tilde{A}_t, S_t|C_t)$ and $p(\tilde{A}_t, S_t|C_t)$ denote the approximate posterior and prior for time step $t$, respectively. Given that each element $s_{i,j}^{(t)}$ of $S_t$ is conditioned on the Binomial variable $\tilde{\alpha}_{i,j}^{(t)}$ for the same edge of the $t$-th summary graph $\tilde{G}_t$ (as indicated by (11) in Appendix C), we can further derive the KL divergence term for each time step $t$ as

$$\text{div}(q(\tilde{A}_t, S_t|C_t) \,\|\, p(\tilde{A}_t, S_t|C_t)) = \sum_{(i,j) \in \tilde{E}_t} \Big\{ \text{div}(q(\tilde{\alpha}_{i,j}^{(t)}|C_t) \,\|\, p(\tilde{\alpha}_{i,j}^{(t)}|C_t)) + \mathbb{E}_{q(\tilde{\alpha}_{i,j}^{(t)}|C_t)}\big[\text{div}(q(s_{i,j}^{(t)}|\tilde{\alpha}_{i,j}^{(t)}, C_t) \,\|\, p(s_{i,j}^{(t)}|\tilde{\alpha}_{i,j}^{(t)}, C_t))\big] \Big\}. \tag{5}$$

More details on the training of the proposed framework are provided in Appendix E, where we obtain a computationally tractable form of the loss function that optimizes (4).
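A heavily simplified sketch of assembling the negative of the bound (4) is given below. The reconstruction term is realized with CTC, consistent with the best-path decoding used in the experiments; the KL term treats the edge posteriors and priors as plain Bernoulli probabilities and omits the nested $s_{i,j}^{(t)}$ term of (5), so the tensors `q_edges`/`p_edges` and the exact distributional forms are assumptions, with the tractable form deferred to Appendix E.

```python
import torch
import torch.nn.functional as F

def bernoulli_kl(q_prob: torch.Tensor, p_prob: torch.Tensor, eps: float = 1e-8):
    """KL(Bern(q) || Bern(p)), summed over all edges and time steps."""
    q = q_prob.clamp(eps, 1 - eps)
    p = p_prob.clamp(eps, 1 - eps)
    return (q * (q / p).log() + (1 - q) * ((1 - q) / (1 - p)).log()).sum()

def neg_elbo(log_probs, targets, input_lens, target_lens, q_edges, p_edges):
    """Negative of the lower bound L in Eq. (4), sketched.

    log_probs: (T, batch, n_labels) log-posteriors from the prediction network;
    CTC handles the length mismatch between input frames and phoneme targets.
    """
    recon = F.ctc_loss(log_probs, targets, input_lens, target_lens)  # -E_q[log p(y|.)]
    return recon + bernoulli_kl(q_edges, p_edges)
```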
### 4 Experimental Settings

**Goals** To gain insights into how the proposed model aids downstream tasks, we aim to answer the following five questions: **Q1:** Does the proposed joint spectro-temporal modeling provide additional information that further benefits downstream tasks when compared to pure temporal or spectral modeling? **Q2:** Is it more beneficial to model a larger context in the time domain or in the frequency domain? **Q3:** Does the temporal span for relational thinking modeling affect the model's performance in downstream tasks? **Q4:** Does relational thinking provide additional benefits beyond what the attention mechanism achieves for downstream tasks? **Q5:** Does the proposed framework consistently offer advantages across different types of acoustic features?

**Dataset** We evaluate our proposed acoustic modeling framework on a general phoneme recognition downstream task. The TIMIT dataset (Garofolo et al., 1993) is used for training and evaluation, since it provides precise annotations for the start and end instants of each phoneme within an utterance, allowing for comprehensive analyses that lead to an in-depth understanding of the proposed models. To recover the target phoneme sequence $y$, we use the best path decoding method (Graves et al., 2006). The phoneme error rate (PER) is employed for system evaluation.

**Settings** We employ the pre-trained wav2vec2 BASE (Baevski et al., 2020) for feature extraction and apply the proposed relational thinking modeling on top of it. Given that the majority (over 96%) of 3 consecutive phonemes in TIMIT have a duration shorter than 400 ms, we let $C_t$ consist of 20 consecutive frames (i.e., $w = 20$, spanning a duration of 405 ms), such that the relations associated with 3 consecutive phonemes can be modeled. We include 8 nodes in each percept graph, i.e., $u = 8$. To answer Q1, we explore the time-only model w20-t8f1, the frequency-only model w20-t1f8, and the joint spectro-temporal models w20-t2f4 and w20-t4f2, by manipulating the resolutions for the time and frequency domains as described in Section 3, with the naming of the models following the format w$w$-t$\Delta_t$f$\Delta_f$. To address Q3, we include another model, w8-t2f4, with a different temporal span ($w = 8$) for relational thinking modeling. For Q5, the effectiveness of the proposed model is further validated using MFCCs. Additional implementation details are available in Appendix F.

5 EXPERIMENTAL RESULTS AND ANALYSES

5.1 PHONEME RECOGNITION PERFORMANCE

**A1: Temporal vs. Spectral vs. Spectro-temporal** The performances of different models are compared in Table 1. We first fix the pre-trained parameters within the wav2vec2 module to eliminate the impact of variations in acoustic features. As shown in Table 1, the two joint spectro-temporal models, w20-t4f2 and w20-t2f4, outperform the temporal and spectral models, w20-t8f1 and w20-t1f8. This comparison clearly demonstrates the advantage of joint spectro-temporal modeling over temporal or spectral modeling within a single domain. It is also evident that all the proposed relational thinking models (with 100.8M parameters in total) outperform the wav2vec2 baseline (with 94.4M parameters in total), with relative reductions in PER ranging from 11.17% to 19.61%.

Table 1: Phoneme recognition performances of the baseline and proposed models in terms of PER (%) over the TIMIT dataset, with wav2vec2 parameters fixed.

| | Model | PER (%) dev | PER (%) test |
|---|---|---|---|
| baseline | wav2vec2 | 17.92 | 25.70 |
| proposed | w20-t8f1 | 19.32 | 22.83 |
| | w20-t1f8 | 16.14 | 21.76 |
| | w20-t4f2 | 17.31 | 20.80 |
| | w20-t2f4 | 14.02 | 20.66 |
| | w8-t2f4 | 18.89 | 22.93 |

**A2: Trading off Temporal Context against Spectral Context** We compare the models with a higher resolution in the frequency domain to those with a higher resolution in the time domain.
Specifically, two sets of models, {w20-t1f8, w20-t8f1}, which model relations within a single (time or frequency) domain, and {w20-t2f4, w20-t4f2}, which model relations in both time and frequency domains, are respectively compared. As shown in Table 1, in both sets, the model with the higher frequency domain resolution (w20-t1f8 or w20-t2f4) exhibits superiority over its counterpart with the higher time domain resolution (w20-t8f1 or w20-t4f2). This suggests that there might be potential benefits in modeling relations across frequency bands in greater detail by setting a higher frequency domain resolution, compared to focusing more on time domain relations.

Figure 5: Relations learned by spectro-temporal relational thinking. Relational thinking evaluates the importance of the co-occurrence of a pair of nodes, representing a novel type of information beyond the assessment of individual nodes as typically done by the attention mechanism. A pair of nodes is of more importance when the edge connecting them attains a larger value of $\hat{\alpha}_{i,j}^{(t)}$, as indicated by the arrows.

**A3: Impact of Temporal Span** To further understand the impact of the temporal span for relational thinking modeling, i.e., the value of $w$ for every $C_t$, on the performance of the downstream task, we compare two proposed models with relational thinking modeled throughout 20 and 8 consecutive time steps, respectively. In other words, relational thinking is performed throughout temporal spans corresponding to triphones and monophones in the two models, respectively. We set the time and frequency resolutions to (2, 4) for both models. As shown in Table 1, the w8-t2f4 model, which incorporates relational information associated solely with the current phoneme at each time step, outperforms the wav2vec2 baseline with a 10.78% reduction in PER. However, when compared to the w20-t2f4 model, which incorporates relational information associated with the current and 2 preceding phonemes, it suffers a 10.99% drop in performance. This suggests that certain spectro-temporal patterns associated with consecutive phonemes contribute to further improving the prediction performance for the current phoneme.

**A4: Comparison with SOTA** The proposed models are compared with the transformer (more essentially, self-attention mechanism) based wav2vec2 baseline (Baevski et al., 2020) and other state-of-the-art systems (Zeghidour et al., 2018; Ravanelli et al., 2020; 2018; Schneider et al., 2019; Baevski et al., 2019) in Table 2. For the proposed models and the baseline, the (wav2vec2) feature extraction module is jointly optimized (i.e., fine-tuned) during training. Our proposed spectro-temporal models, w20-t4f2 and w20-t2f4, significantly outperform all the counterparts, specifically yielding 7.21% and 7.82% relative improvements in PER over the wav2vec2 baseline on the test dataset, respectively, revealing the additional advantages offered by relational thinking modeling compared to the self-attention mechanism in enhancing speech representation.

**A5: Generalization to Other Acoustic Features** We also train a relational thinking based model using MFCCs (referred to as MFCC-RT-w20-t2f4) and compare it with an MFCC baseline implemented with a simple linear projection. Detailed configurations of the two models can be found in Appendix F.2. As shown in Table 3, the proposed MFCC-RT-w20-t2f4 model significantly outperforms the MFCC baseline, achieving a 14.36% reduction in PER over the test set.
This validates that our proposed relational thinking modeling can generalize to sequential inputs composed of various types of acoustic features, providing additional relational information that consistently benefits downstream tasks.

5.2 Learned Relational Information

Table 2: Phoneme recognition performances of baselines and proposed models in terms of PER (%) over the TIMIT dataset.

| | dev | test |
|---|---|---|
| CNN + TD-filterbanks (Zeghidour et al., 2018) | 15.6 | 18.0 |
| PASE+ (Ravanelli et al., 2020) | – | 17.2 |
| Li-GRU + fMLLR (Ravanelli et al., 2018) | – | 14.9 |
| wav2vec (Schneider et al., 2019) | 12.9 | 14.7 |
| vq-wav2vec (Baevski et al., 2019) | 9.6 | 11.6 |
| wav2vec2 (Baevski et al., 2020) | 7.26 | 9.98 |
| proposed w20-t4f2 | 6.18 | 9.26 |
| proposed w20-t2f4 | 6.23 | 9.20 |

Table 3: Phoneme recognition performances of the baseline and proposed model using MFCCs.

| | | dev | test |
|---|---|---|---|
| baseline | MFCC | 39.80 | 47.90 |
| proposed | MFCC-RT-w20-t2f4 | 39.58 | 41.02 |

Figure 6: Proportions of recognized phoneme classes by the baseline and the proposed w20-t2f4 model. Ground truth reveals the actual proportions of all phoneme classes in the TIMIT test set. The proportions of vowel classes recognized by the proposed model align more closely with the ground truth proportions, suggesting the proposed model's better performance in recognizing vowels.

The proposed relational thinking modeling enables us to infer relationships amongst different regions of the feature map without using any prior relational annotations during training. To illustrate the learned relational information, we randomly select a sample from the TIMIT test set, feed it into a proposed relational thinking model, and visualize the inferred task-specific graphs for 4 consecutive time steps out of the total $T$ time steps in Fig. 5. In each graph, a red curve represents an edge $\hat{\alpha}_{i,j}^{(t)}$ that connects a specific pair of sub-feature maps from $\tilde{C}_t$. The intensity of an edge's color corresponds to the normalized value of $\hat{\alpha}_{i,j}^{(t)}$, ranging from 0 to 1. For each time step, the respective task-specific graph clearly reveals the intricate relations among different sub-feature maps of $\tilde{C}_t$. As can be observed, all the task-specific graphs are relatively sparse, with only a few edges having large values of $\hat{\alpha}_{i,j}^{(t)}$ (as indicated in the figure). This observation aligns with our discussion in Section 3.1, indicating that certain spectro-temporal patterns (i.e., the co-occurrence of certain sub-feature maps) are more important to the ultimate task than many others which are less meaningful.

We further explore the captured relational information by analyzing the edges $\hat{\alpha}_{i,j}^{(t)}$ of the learned task-specific graphs across different phoneme sub-groups (e.g., vowels, fricatives, nasals, etc.) in the frame-wise phoneme classification tasks. As shown in Fig. 12, the captured relations between different regions of the feature map, i.e., the edges $\hat{\alpha}_{i,j}^{(t)}$ of the task-specific graph, exhibit more similarities for phoneme classes within the same sub-group. However, the captured relations for phoneme classes from different sub-groups vary significantly. This suggests that the proposed model can discern and learn the intrinsic characteristics of various phoneme classes.
More details can be found in Appendix G.2.

5.3 Analysis of Performance of Different Phoneme Groups

We carry out additional analyses to gain a deeper understanding of how the proposed models enhance phoneme recognition performance. We expect the proposed relational thinking based models to show their greatest advantage over the baseline in recognizing vowel phonemes: vowels tend to have longer durations than non-vowel phonemes, allowing the relational thinking module to capture more significant relational information, which in turn benefits the downstream task. To this end, we separately investigate the performances of the models in recognizing vowels and non-vowels.

Intuitively, a good recognizer should produce a distribution of recognized phoneme classes that closely matches the ground truth distribution. To assess this, we calculate the proportion of each phoneme class among all the recognized phonemes in the test set for both the wav2vec2 baseline and the proposed w20-t2f4 model. These proportions are depicted in Fig. 6, where the ground truth proportions of phoneme classes in the test set are also provided. As can be observed, there are significant differences in the distributions of vowel classes (e.g., /ah/, /aw/, /er/, /ey/, /ih/) recognized by the two models, especially for the classes circled in Fig. 6. In particular, the proportions of vowel classes recognized by the proposed model are more consistent with the ground truth proportions, with an average absolute difference of 0.23 pp, while the baseline shows a much higher average absolute difference of 0.35 pp (refer to Appendix G.1.1 for more details). This indicates that the proposed model exhibits superior performance in recognizing vowels.

Next, we conduct separate analyses of the errors made by both models in recognizing vowels and non-vowels. To be specific, given the recognition result of each model for a test sample, i.e., a sequence of recognized phonemes, we extract all the vowels/non-vowels from it and create a new sequence by combining the extracted phonemes with the original order preserved. This allows us to formulate recognized vowel/non-vowel sequences. For example, we can obtain a vowel sequence [/i/, /a/, /i/, /æ/, /i/, /u/, /ux/, /iy/, /ux/] from [/w/, /i/, /dcl/, /s/, /ah/, /tcl/, /ch/, /ixl/, /n/, /æl/, /kcl/, /t/, /ixl/, /v/, /tl/, /f/, /ly/, /ux/, /zh/, /el/, /bcl/, /bl/, /iy/, /yl/, /ux/, /sl/, /fl/, /el/]. The ground truth vowel/non-vowel sequences can be derived from the reference target sequence in the same way. To approximate the errors made by each model in recognizing vowels/non-vowels, we calculate the edit distance (Navarro, 2001) between the recognized vowel/non-vowel sequence and the corresponding ground truth counterpart for all test samples.
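For concreteness, a short Python sketch of this evaluation step is given below. The dynamic-programming routine is the standard Levenshtein distance; the vowel inventory is an illustrative subset of TIMIT phone labels and may differ from the exact grouping used in the paper.

```python
def edit_distance(hyp, ref):
    """Levenshtein distance between two phoneme sequences (Navarro, 2001)."""
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        curr = [i]
        for j, r in enumerate(ref, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (h != r)))   # substitution
        prev = curr
    return prev[-1]

# Illustrative TIMIT vowel labels (an assumption, not the paper's exact grouping).
VOWELS = {"iy", "ih", "eh", "ae", "ah", "aa", "ao", "uh", "uw", "ux",
          "er", "ey", "ay", "oy", "aw", "ow"}

def vowel_error(hyp_phones, ref_phones):
    """Edit distance restricted to the vowel subsequences, order preserved."""
    keep = lambda seq: [p for p in seq if p in VOWELS]
    return edit_distance(keep(hyp_phones), keep(ref_phones))
```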
Fig. 7 illustrates the distributions of edit distances between the recognized sequences and the ground truth sequences over all test samples. In Fig. 7 (a), which pertains to the performances of the two models in recognizing vowels, it is evident that the proposed model outperforms the baseline: the distribution of edit distances for the proposed model is significantly skewed towards the left compared to that of the baseline, with the average edit distance for the proposed model (3.6488) being much smaller than that for the baseline (4.2238). For the performances of the two models in recognizing non-vowels, depicted in Fig. 7 (b), the proposed model shows only a slight improvement over the baseline. In this case, the distributions of errors made by the two models are closer than in the case of recognizing vowels, with average edit distances of 3.9435 and 4.2030 for the two models, respectively. When the biologically inspired relational thinking process is incorporated, the model's performance in recognizing vowels thus shows a more noticeable improvement than its performance in recognizing non-vowels. This finding also coincides with the results of a speech intelligibility test conducted with human listeners, as reported in Meyer et al. (2006), which suggested that vowel identification is a relatively easier task for humans than consonant identification. Additional analyses related to the phoneme recognition tasks can be found in Appendix G.1.

5.4 Speech Recognition with Proposed Framework

The proposed spectro-temporal relational thinking modeling is further validated in speech recognition tasks and evaluated using the word error rate (WER), aiming to demonstrate the generalizability of our proposed framework to other prevalent tasks. A word-level relational thinking model built upon the proposed framework (as detailed in Appendix G.3) yields a 2.55% reduction in WER against the wav2vec2 baseline (Baevski et al., 2020) when no language model is applied. The incorporation of a 4-gram language model increases this reduction in WER to 3.23% (refer to Table 6 for details). These improvements imply that comprehending and utilizing the spectro-temporal relations associated with words also benefits downstream speech recognition tasks, in that certain words tend to coherently and frequently appear together, such as "I am".

6 Conclusion

We propose a novel spectro-temporal relational thinking based acoustic modeling framework, whose core module is inspired by a fundamental human learning process. This framework is capable of capturing a unique form of pair-wise information, distinct from the assessment of individual nodes as performed by the attention mechanism. Models constructed using this framework show state-of-the-art performance in phoneme recognition tasks. Further analysis reveals connections between the captured relations and phoneme groups: the patterns involved in the relations exhibit more similarities for phoneme classes within the same group, while showing significant variations between phoneme classes from different groups. Our analysis also reveals that relational thinking modeling primarily enhances the model's ability to recognize vowels. Additionally, we demonstrate the generalizability of the proposed framework by applying other types of acoustic features and employing it for different downstream tasks, where relational thinking modeling consistently benefits the downstream tasks. This study aims to pave a new pathway for integrating biologically inspired human learning processes into deep learning approaches, improving models' capability in speech recognition and potentially their interpretability.

REFERENCES

Ossama Abdel-Hamid, Abdel-rahman Mohamed, Hui Jiang, Li Deng, Gerald Penn, and Dong Yu.
Convolutional neural networks for speech recognition. *IEEE/ACM Transactions on audio, speech, and language processing*, 22(10):1533–1545, 2014. Patricia A Alexander. Relational thinking and relational reasoning: harnessing the power of patterning. *NPJ science of learning*, 1(1):1–7, 2016. Alexei Baevski, Steffen Schneider, and Michael Auli. vq-wav2vec: Self-supervised learning of discrete speech representations. *arXiv preprint arXiv:1910.05453*, 2019. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. *Advances in Neural Information Processing Systems*, 33:12449–12460, 2020. Parviz Birjandi and Somayyeh Sabah. A review of the language-thought debate: Multivariant perspectives. *BRAIN. Broad Research in Artificial Intelligence and Neuroscience*, 3(1):50–62, 2012. Thomas Bohnstingl, Ayush Garg, Stanisław Woźniak, George Saon, Evangelos Eleftheriou, and Angeliki Pantazi. Speech recognition using biologically-inspired neural networks. In *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 6992–6996. IEEE, 2022. William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In *2016 IEEE international conference on acoustics, speech and signal processing (ICASSP)*, pp. 4960–4964. IEEE, 2016. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. *Advances in neural information processing systems*, 28, 2015. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. *Advances in neural information processing systems*, 28, 2015. George E Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. *IEEE Transactions on audio, speech, and language processing*, 20(1):30–42, 2011. Linhao Dong and Bo Xu. Cif: Continuous integrate-and-fire for end-to-end speech recognition. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 6079–6083. IEEE, 2020. Sorin Dusan and Larry R Rabiner. On integrating insights from human speech perception into automatic speech recognition. In *Ninth European Conference on Speech Communication and Technology*, 2005. John S Garofolo, Lori F Lamel, William M Fisher, Jonathan G Fiscus, and David S Pallett. Darpa timit acoustic-phonetic continous speech corpus cd-rom. nist speech disc 1-1.1. *NASA STI/Recon technical report n*, 93:27403, 1993. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of the 23rd international conference on Machine learning*, pp. 369–376, 2006. Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. *arXiv preprint arXiv:2005.08100*, 2020. Xiaodong He, Li Deng, and Wu Chou. Discriminative learning in sequential pattern recognition. *IEEE Signal Processing Magazine*, 25(5):14–36, 2008.
IhD1rBHhDy
The paper assumes that chemical structures with similar functionality should cluster close to each other in the structural space based on molecular fingerprints; however, this doesn’t necessarily have to be the case - you can have stereoisomers with different functional properties, and you can have chemical compounds with different chemical structure and similar labels. Some of this is visible in the cluster analyses, where molecules belonging to the same functional class are not clustered together. Perhaps a different metric to assess congruence of the two spaces is needed.
MINING PATENTS WITH LARGE LANGUAGE MODELS ELUCIDATES THE CHEMICAL FUNCTION LANDSCAPE Anonymous authors Paper under double-blind review ABSTRACT The fundamental goal of small molecule discovery is to generate chemicals with target functionality. While this often proceeds through structure-based methods, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. We hypothesize that a sufficiently large text-derived chemical function dataset would mirror the actual landscape of chemical functionality. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners. To evaluate this hypothesis, we built a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. We carry out a series of analyses demonstrating that the CheF dataset contains a semantically coherent textual representation of the functional landscape congruent with chemical structural relationships, thus approximating the actual chemical function landscape. We then demonstrate that this text-based functional landscape can be leveraged to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules. 1 INTRODUCTION The overarching goal of drug discovery is to generate chemicals with specific functionality through the design of chemical structure (Li & Kang, 2020). Functionality, often in the context of drug discovery, refers to the specific effects a chemical exhibits on biological systems (e.g., vasodilator, analgesic, protease inhibitor), but it is applicable to materials as well (e.g., electroluminescent polymer). Computational methods often approach molecular discovery through structural and empirical methods such as protein-ligand docking, receptor binding affinity prediction, and pharmacophore design (Corso et al., 2022; Trott & Olson, 2010; Wu et al., 2018; Yang, 2010). These methods are powerful for designing molecules that bind to specific protein targets, but at present they are unable to explicitly design for specific organism-wide effects. This is largely because biological complexity increases with scale, and many whole-body effects are only weakly associated with specific protein inhibition or biomolecular treatment (Drachman, 2014). Humans have long been documenting chemicals and their effects, and it is reasonable to assume functional relationships are embedded in language itself. Text-based functional analysis has been paramount for our understanding of the genome through Gene Ontology terms (Consortium, 2004). Despite its potential, text-based functional analysis for chemicals has been largely underexplored. This is in part due to the lack of high-quality chemical function datasets but is more fundamentally due to the high multi-functionality of molecules, which is less problematic for genes and proteins.
High-quality chemical function datasets have been challenging to generate due to the sparsity and irregularity of functional information in chemical descriptions, patents, and literature. Recent efforts at creating such datasets tend to involve consolidation of existing curated descriptive datasets (Wishart et al., 2023; Degtyarenko et al., 2007). Similarly, keyword-based function extraction partially solves the function extraction problem by confining its scope to singular predetermined functionality, but it fails at broadly extracting all relevant functions for a given molecule (Subramanian et al., 2023). Given their profound success in text summarization, Large Language Models (LLMs) may be ideal candidates to broadly extract functional information of molecules from patents and literature, a task that remains unsolved (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023). This is especially promising for making use of the chemical patent literature, an abundant and highly specific source of implicit chemical knowledge that has been largely inaccessible due to excessive legal terminology (Senger, 2017; Ashenden et al., 2017). This may allow for the creation of a large-scale dataset that effectively captures the text-based chemical function landscape. We hypothesize that a sufficiently large chemical function dataset would contain a text-based chemical function landscape congruent with chemical structure space, effectively approximating the actual chemical function landscape. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners (Martin et al., 2002). This hypothesis is further based on the observation that function is reported frequently enough in patents and scientific articles for most functional relationships to be contained in the corpus of chemical literature (Papadatos et al., 2016). To evaluate this hypothesis, we set out to create a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. The CheF dataset was found to be of high quality, demonstrating the effectiveness of LLMs for extracting functional information from chemical patents despite not being explicitly trained to do so. Using this dataset, we carry out a series of experiments alluding to the notion that the CheF dataset contains a text-based functional landscape that simulates the actual chemical function landscape due to its congruence with chemical structure space. We then demonstrate that this text-based functional landscape can be harnessed to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules. 2 RELATED WORK Labeled chemical datasets. Chemicals are complex interacting entities, and there are many labels that can be associated with a given chemical. One class is specific protein binding, commonly used to train chemical representation models (Mysinger et al., 2012; Wu et al., 2018).
Datasets linking chemicals to their functionality have emerged in recent years (Edwards et al., 2021; Huang et al., 2023; Degtyarenko et al., 2007; Wishart et al., 2023). These datasets were largely compiled from existing databases of well-studied chemicals, limiting their generalizability (Li et al., 2016; Fu et al., 2015). The CheF dataset developed here aims to improve upon these existing datasets by automatically sourcing molecular function from patents to create a high-quality molecular function dataset, ultimately capable of scaling to the entire SureChEMBL database of 32M+ patent-associated molecules (Papadatos et al., 2016). To our knowledge, the full scale-up would create not just the largest chemical function dataset, but rather the largest labeled chemical dataset of any kind. Its high coverage of chemical space means that the CheF dataset, in its current and future iterations, may serve as a benchmark for the global evaluation of chemical representation models. Patent-based molecular data mining and prediction. Building chemical datasets often involves extracting chemical identities, reaction schemes, quantitative drug properties, and chemical-disease relationships (Senger et al., 2015; Papadatos et al., 2016; He et al., 2021; Sun et al., 2021; Magariños et al., 2023; Zhai et al., 2021; Li et al., 2016). We recently used an LLM to extract patent-derived information to help evaluate the functional relevance of results from a machine learning-based chemical similarity search (Anonymous et al., 2023). We expand upon previous works through the large-scale LLM-based extraction of broad chemical functionality from a corpus of patent literature. This is a task that LLMs were not explicitly trained to do, and we provide validation results for this approach. Recent work also focused on molecular generation from chemical subspaces derived from patents containing specific functional keywords, for example, all molecules relating to tyrosine kinase inhibitor activity (Subramanian et al., 2023). This allows for a model that can generate potential tyrosine kinase inhibitors but would need to be retrained to predict molecules of a different functional label. In our work, we focus on label classification rather than molecular generation. Further, we integrate multiple functional labels for any given molecule, allowing us to broadly infer molecular functionality given structure. Generative models could be trained on the described dataset, allowing for label-guided molecular generation without re-training for each label. **Chemical-to-textual translation.** Recent work investigated the translation of molecules to descriptive definitions and vice versa (Edwards et al., 2021; 2022; Su et al., 2022). The translation between language and chemical representations is promising as it utilizes chemical relationships implicit in text descriptions. However, decoder-based molecule-text translation models seem to us unlikely to be adopted for novel drug discovery tasks, as experimentalists desire strongly deterministic results, reported prediction confidences, and alternative prediction hypotheses. To satisfy these constraints, we opted for a discriminative structure-to-function model. Many existing chemical-to-text translation models have been trained on datasets containing structural nomenclature and irrelevant words mixed with desirable functional information (Edwards et al., 2021; Degtyarenko et al., 2007).
Inclusion of structural nomenclature causes inflated prediction metrics for functional annotation or molecular generation tasks, as structure-to-name and name-to-structure are simpler than structure-to-function and function-to-structure. The irrelevant words may cause artifacts during the decoding process depending on the prompt, skewing results in ways irrelevant to the task. In our work, we ensured our model utilized only chemical structure, and not structural nomenclature, when predicting molecular function to avoid data leakage. ### 3 RESULTS Patents are an abundant source of highly specific chemical knowledge. It is plausible that a large dataset of patent-derived molecular function would capture most known functional relationships and could approximate the chemical function landscape. High-fidelity approximation of the chemical function landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners. This would allow for the prediction of functional labels for chemicals, which is, to our knowledge, a novel task. Figure 1: **Chemical function dataset creation.** (a) An LLM extracts molecular functional information present in patents into brief labels; an example is shown in Figure S2. (b) Chemical functional labels were cleaned with algorithmic-, embedding-, and LLM-based methods. **Chemical function dataset creation.** We set out to create a large-scale database of chemicals and their patent-derived molecular functionality. To do so, a random 100K molecules and their associated patents were chosen from the SureChEMBL database to create a Chemical Function (CheF) dataset (Fig. S1) (Papadatos et al., 2016). To ensure that patents were highly relevant to their respective molecule, only molecules with fewer than 10 patents were included in the random selection, reducing the number of available molecules by 12%. This was done to exclude over-patented molecules like penicillin, with over 40,000 patents, most of which are irrelevant to its functionality. For each molecule-associated patent in the CheF dataset, the patent title, abstract, and description were scraped from Google Scholar and cleaned. ChatGPT (gpt-3.5-turbo) was used to generate 1–3 functional labels describing the patented molecule given its unstructured patent data (Fig. 1a). The LLM-assisted function extraction method’s success was validated manually across 1,738 labels generated from a random 200 CheF molecules. Of these labels, 99.6% had correct syntax and 99.8% were relevant to their respective patent (Table S1). 77.9% of the labels directly described the labeled molecule’s function. However, this increased to 98.2% when considering the function of the primary patented molecule, of which the labeled molecule is an intermediate (Table S1). The LLM-assisted method resulted in 104,607 functional labels for the 100K molecules. These were too many labels to yield any predictive power, so measures were taken to consolidate these labels into a concise vocabulary. The labels were cleaned, reducing the number of labels to 39,854, and further consolidated by embedding each label with a language model (OpenAI’s text-embedding-ada-002) to group grammatically dissimilar yet semantically similar labels together.
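For concreteness, this extraction-and-consolidation pipeline might look like the following sketch; the prompt wording, the use of the current OpenAI client interface, and the cosine metric for DBSCAN are assumptions rather than the paper's exact setup (the clustering cutoff is discussed next and detailed in the Methods).

```python
# Illustrative sketch of the labeling pipeline: LLM extraction, embedding,
# and density-based grouping. Prompt text, client usage, and the DBSCAN
# metric are assumptions; the paper reports an optimal epsilon of 0.34.
import numpy as np
from openai import OpenAI
from sklearn.cluster import DBSCAN

client = OpenAI()

def extract_labels(patent_text: str) -> list[str]:
    """Ask the LLM for 1-3 brief functional labels (hypothetical prompt)."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Summarize the patented molecule's function in 1-3 "
                       "brief lowercase labels, separated by semicolons:\n\n"
                       + patent_text,
        }],
    )
    raw = resp.choices[0].message.content
    return [t.strip().lower() for t in raw.split(";") if t.strip()]

def embed(labels: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=labels)
    return np.array([d.embedding for d in resp.data])

def cluster(labels: list[str], eps: float = 0.34) -> np.ndarray:
    """Group semantically similar labels; cosine distance is an assumption."""
    return DBSCAN(eps=eps, metric="cosine", min_samples=1).fit_predict(embed(labels))
```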
The embeddings were clustered with DBSCAN using a cutoff that minimized the number of clusters without cluster quality deterioration (e.g., avoiding the grouping of antiviral, antibacterial, and antifungal) (Fig. S4). Each cluster was summarized with ChatGPT to obtain a single representative cluster label. The embedding-based clustering and summarization process was validated across the 500 largest clusters. Of these, 99.2% contained semantically common elements and 97.6% of the cluster summarizations were accurate and representative of their constituent labels (Table S2). These labels were mapped back to the CheF dataset, resulting in 19,616 labels (Fig. 1b). To ensure adequate predictive power, labels appearing in fewer than 50 molecules were dropped. The final CheF dataset consisted of 99,454 molecules and their 1,543 descriptive functional labels (Fig. 1, Table S3). Functional labels map to natural clusters in chemical structure space. Molecular function nominally arises directly from structure, and thus any successful dataset of functional labels should cluster in structural space. This hypothesis was based in part on the observation that chemical function is often retained despite minor structural modifications (Maggiora et al., 2014; Patterson et al., 1996). And because molecules and their derivatives are frequently patented together, structurally similar molecules should be annotated with similar patent-derived functions. This rationale generally holds, but exceptions include stereoisomers with different functions (e.g., thalidomide) and distinct structures sharing the same function (e.g., beta-lactam antibiotics and tetracyclines). To evaluate this hypothesis, we embedded the CheF dataset in structure space by converting the molecules to molecular fingerprints (binary vectors representing a molecule’s substructures), visualized with t-distributed Stochastic Neighbor Embedding (t-SNE) (Fig. 2). Then, to determine whether the CheF functional labels clustered in this structural space, the maximum fingerprint Tanimoto similarity was computed between the fingerprint vectors of each molecule containing a given label; this approach provides a measure of structural similarity between molecules that have the same functional label (Fig. 2) (Bajusz et al., 2015). This value was compared to the maximum similarity computed from a random equal-sized set of molecules to determine significance. Remarkably, 1,192 of the 1,543 labels were found to cluster significantly in structural space (independent t-tests per label, false-discovery rate of 5%). To give an idea of the meaning of this correlation, inherent clustering was visualized for the labels ‘hcv’ (hepatitis C virus), ‘electroluminescence’, ‘serotonin’, and ‘5-ht’ (5-hydroxytryptamine, the chemical name for serotonin) (Fig. 2). For the label ‘electroluminescence’ there was one large cluster containing almost only highly conjugated molecules (Fig. 2c). For ‘hcv’, there were multiple distinct communities representing antivirals targeting different mechanisms of HCV replication. Clusters were observed for NS5A inhibitors, NS3 macrocyclic and peptidomimetic protease inhibitors, and nucleoside NS5B polymerase inhibitors (Fig. 2a, S5). The observed clustering of functional labels in structure space provided evidence that the CheF dataset labels had accurately captured structure-function relationships, validating our initial hypothesis. Label co-occurrences reveal the text-based chemical function landscape.
Patents contain joint contextual information on the application, structure, and mechanism of a given compound. We attempted to determine the extent to which the CheF dataset implicitly captured this joint semantic context by assessing the graph of co-occurring functional labels (Fig. 3). Each node in the graph represents a CheF functional label, and their relative positioning indicates the frequency of co-occurrence between labels, with labels that co-occur more frequently placed closer together. To prevent the visual overrepresentation of extremely common labels (e.g., inhibitor, cancer, kinase), each node’s size was scaled based on its connectivity instead of the frequency of co-occurrence. Modularity-based community detection isolates tightly interconnected groups within a graph, distinguishing them from the rest of the graph. This method was applied to the label co-occurrence graph, with the resulting clusters summarized with GPT-4 into representative labels for unbiased semantic categorization (Tables S4, S5, S6). The authors curated the summarized labels for validity and found them representative of the constituent labels; these were then further consolidated for succinct representation of the semantic categorization (Table S4). This revealed a semantic structure in the co-occurrence graph, where distinct communities such as ‘Electronic, Photochemical, & Stability’ and ‘Antiviral & Cancer’ could be observed (Fig. 3, Tables S4, S5, S6). Within communities, the fine-grained semantic structure also appeared to be coherent. For example, in the local neighborhood around ‘hcv’ the labels ‘antiviral’, ‘ns’ (nonstructural), ‘hbv’ (hepatitis B virus), ‘hepatitis’, ‘replication’, and ‘protease’ were found, all of which are known to be semantically relevant to hepatitis C virus (Fig. 3). The graph of patent-derived molecular functions is a visual representation of the text-based chemical function landscape, and represents a potentially valuable resource for linguistic evaluation of chemical function and ultimately drug discovery. Coherence of the text-based chemical function landscape in chemical structure space. To assess how well text-based functional relationships align with structural relationships, the overlap between the molecules of a given label and those of its 10 most commonly co-occurring labels was calculated (Fig. 4). This was achieved by computing the maximum fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of the 10 most commonly co-occurring labels (with <1,000 total abundance). This value was compared to the maximum similarity computed from each molecule containing a given label to a random equal-sized set of molecules to determine significance. This comparison indicated that molecules containing the 10 most commonly co-occurring labels were closer to the given label’s molecules in structure space than a random set for 1,540 of the 1,543 labels (independent t-tests per label, false-discovery rate of 5%), meaning that text-based functional relationships align with structural relationships (Fig. 4). Figure 3: **Label co-occurrences reveal the text-based chemical function landscape.** Node sizes correspond to number of connections, and edge sizes correspond to co-occurrence frequency in the CheF dataset. Modularity-based community detection was used to obtain 19 distinct communities. The communities broadly coincided with the semantic meaning of the contained labels, the largest 10 of which were summarized to representative categorical labels (Tables S4, S5, S6).
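For illustration, the co-occurrence graph and its communities could be constructed as in the following sketch; the use of networkx and its greedy modularity routine is an assumption (the paper used Gephi with a modularity resolution of 0.5).

```python
# Sketch of building the label co-occurrence graph and detecting
# communities; function names and the networkx routines are assumptions.
from itertools import combinations
import networkx as nx

def build_cooccurrence_graph(molecule_labels):
    """molecule_labels: iterable of label sets, one set per molecule."""
    G = nx.Graph()
    for labels in molecule_labels:
        for a, b in combinations(sorted(labels), 2):
            w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)  # edge weight = co-occurrence count
    return G

G = build_cooccurrence_graph([
    {"hcv", "antiviral", "inhibitor"},
    {"hcv", "protease", "inhibitor"},
    {"electroluminescence", "oled"},
])
# Greedy modularity maximization approximates the modularity-based
# community detection performed in Gephi.
communities = nx.community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```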
Together with the semantically structured communities identified above, this suggests that users can move between labels to identify new compounds and, conversely, to assess a compound’s function. Functional label-guided drug discovery. To employ the text-based chemical function landscape for drug discovery, multi-label classification models were trained on CheF to predict functional labels from molecular fingerprints (Table S7). The best-performing model was a logistic regression model on molecular fingerprints with positive predictive power for 1,532/1,543 labels and >0.90 ROC-AUC for 458/1,543 labels (Fig. 5a). This model can thus be used to comprehensively annotate chemical function, even when existing annotations are fragmented or incomplete. As an example, for a known hepatitis C antiviral the model strongly predicted ‘antiviral’, ‘hcv’, and ‘ns’ (nonstructural) (94%, 93%, and 70%, respectively) while predicting ‘protease’ and ‘polymerase’ with low confidence (0.02% and 0.00%, respectively) (Fig. 5b). The low-confidence ‘protease’ and ‘polymerase’ predictions suggested that the likely target of this drug was the nonstructural NS5A protein, rather than the NS2/3 proteases or NS5B polymerase, a hypothesis that has been validated outside of patents in the scientific literature (Ascher et al., 2014). The ability to comprehensively predict functional profiles allows for the discovery of new drugs. For example, the label ‘serotonin’ was used to query the test set predictions, and a ranked list of the 10 molecules most highly predicted for ‘serotonin’ was obtained (Fig. 5c). All ten of these were patented in relation to serotonin: 8 were serotonin receptor ligands (5-HT1, 5-HT2, 5-HT6) and 2 were serotonin reuptake inhibitors. Similarly, the synonymous label ‘5-ht’ was used as the query and the top 10 molecules were again obtained (Fig. 5d). Of these, seven were patented in relation to serotonin (5-HT1, 5-HT2, 5-HT6), four of which were also found in the aforementioned ‘serotonin’ search. The remaining three molecules were patented without reference to the serotonin receptor, but were instead patented for depressant, anti-anxiety, and memory dysfunction relieving effects, all of which have associations with serotonin and its receptor. The identification of known serotonin receptor ligands, together with the overlapping results across synonymous labels, provides an internal validation of the model. Additionally, these search results suggest experiments in which the “mispredicted” molecules may bind to serotonin receptors or otherwise be synergistic with the function of serotonin, thereby demonstrating the practical utility of moving with facility between chemicals and their functions. Figure 4: **Coherence of the text-based chemical function landscape in structure space.** To assess the alignment of text-based functional relationships with structural relationships, the max fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of its 10 most frequently co-occurring labels (<1,000 total abundance) was compared against the max fingerprint Tanimoto similarity to a random subset of molecules of the same size. (a) ‘hcv’ neighboring labels’ molecules. (b) Degree of coincidence between ‘hcv’ and neighboring labels. (c) ‘electroluminescence’ neighboring labels’ molecules. (d) Degree of coincidence between ‘electroluminescence’ and neighboring labels.
(e) ‘serotonin’ neighboring labels’ molecules. (f) Degree of coincidence between ‘serotonin’ and neighboring labels. (g) ‘5-ht’ neighboring labels’ molecules. (h) Degree of coincidence between ‘5-ht’ and neighboring labels. See Fig. S5 for more labels. To examine the best model’s capability in drug repurposing, functional labels were predicted for 3,242 Stage-4 FDA-approved drugs (Fig. S7) (Ochoa et al., 2021). Of the 16 drugs most highly predicted for ‘hcv’, 15 were approved Hepatitis C Virus (HCV) antivirals. Many of the mispredictions in the top 50 were directly relevant to HCV treatment, including 8 antivirals and 8 polymerase inhibitors. The remaining mispredictions included 3 ACE inhibitors and 2 BTK inhibitors, both of which are peripherally associated with HCV through liver fibrosis mitigation and HCV reactivation, respectively (Corey et al., 2009; Mustafayev & Torres, 2022). Beyond showing its power, this example suggests that functional label-guided drug discovery may serve as a useful paradigm for rapid antiviral repurposing to mitigate future pandemics. 4 DISCUSSION While *in silico* drug discovery often proceeds through structural and empirical methods such as protein-ligand docking, receptor binding affinity prediction, and pharmacophore design, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. To do so, we developed an LLM- and embedding-based method to create a Chemical Function (CheF) dataset of 100K molecules and their 631K patent-derived functional labels. Over 77% of the functional labels (1,192 of 1,543) corresponded to distinct clusters in chemical structure space, indicating congruence between chemical structures and individual text-derived functional labels. Moreover, there was a semantically coherent text-based chemical function landscape intrinsic to the dataset that was found to correspond with broad fields of functionality. Finally, it was found that the relationships in the text-based chemical function landscape mapped with high fidelity to chemical structure space (99.8% of labels), indicating approximation to the actual chemical function landscape. To leverage the chemical function landscape for drug discovery, several models were trained and benchmarked on the CheF dataset to predict functional labels from molecular fingerprints (Table S7). The top-performing model was utilized for practical applications such as unveiling an undisclosed drug mechanism, identifying novel drug candidates, and mining FDA-approved drugs for repurposing and combination therapy uses. Figure 5: **Functional label-guided drug discovery.** (a) Test set results from the best-performing model that predicts functional labels from molecular fingerprints. Labels sorted by ROC-AUC, showing every 20 labels for clarity. Black line indicates the ROC-AUC random threshold. Average test ROC-AUC and PR-AUC were 0.84 and 0.20, respectively. (b) Model-based comprehensive annotation of chemical function. Shown is a test set molecule patented for hepatitis C antiviral treatment. The highly predicted ‘hcv’, ‘ns’, and ‘inhibitor’ together with the low-predicted ‘protease’ and ‘polymerase’ can be used to infer that the drug acts on NS5A to inhibit HCV replication, revealing a mechanism undisclosed in the patent. (c-d) Functional label-based drug candidate identification, showcasing the top 10 test set molecules for ‘serotonin’ or ‘5-ht’; true positives in green, false positives in red. The false positives offer potential for drug discovery and repurposing, especially when considering that these have patents for related neurological uses (e.g., anti-anxiety and memory dysfunction).
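A minimal sketch of such a fingerprint-based multi-label classifier is given below; the logistic-regression settings (C=0.001, max_iter=1000) follow the Methods, while the fingerprint parameters and toy data are illustrative assumptions.

```python
# Minimal sketch of the fingerprint -> functional-label classifier; the
# Morgan fingerprint settings and toy molecules/labels are assumptions.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def fingerprint(smiles: str, n_bits: int = 2048) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(fp)

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]  # toy molecules
X = np.stack([fingerprint(s) for s in smiles])
Y = np.array([[1, 0], [0, 1], [0, 1]])  # toy multi-hot label matrix

# One binary logistic regression per label, as in multi-label OvR training.
clf = OneVsRestClassifier(LogisticRegression(C=0.001, max_iter=1000))
clf.fit(X, Y)
print(clf.predict_proba(X))  # per-label probabilities, usable for ranking
```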
Since the CheF dataset is scalable to the entire 32M+ molecule database, we anticipate that many of these predictions will only improve in the future. The CheF dataset inherently exhibits a bias towards patented molecules. This implies sparse representation of chemicals with high utility but low patentability, and allows for false functional relationships to arise from prophetic claims. Additionally, by restricting the dataset to chemicals with <10 patents, it neglects important well-studied molecules like penicillin. The inclusion of over-patented chemicals could be accomplished by using only the most abundant k terms for a given molecule, using a fine-tuned LLM to only summarize patents relevant to molecular function (ignoring irrelevant patents on applications like medical devices), or employing other data sources like PubChem or PubMed to fill in these gaps. Increasing label quality and ignoring extraneous claims might be achieved through an LLM fine-tuned on high-quality examples. Further quality increases may result from integration of well-documented chemical-gene and chemical-disease relationships into CheF. The analysis herein suggests that a sufficiently large chemical function dataset contains a text-based function landscape that approximates the actual chemical function landscape. Further, we demonstrate one of the first examples of functional label-guided drug discovery, made possible utilizing state-of-the-art advances in machine learning. Models in this paradigm have the potential to automatically annotate chemical function, examine non-obvious features of drugs such as side effects, and down-select candidates for high-throughput screening. Moving between textual and physical spaces represents a promising paradigm for drug discovery in the age of machine learning. 5 METHODS Database creation. The SureChEMBL database was shuffled and converted to chiral RDKit-canonicalized SMILES strings, removing malformed strings (Weininger, 1988; Papadatos et al., 2016; Landrum et al., 2013). SMILES strings were converted to InChI keys and used to obtain PubChem CIDs (Kim et al., 2023). To minimize costs and prevent label dilution, only molecules with fewer than 10 patents were included. This reduced the dataset from 32M to 28.2M molecules, a 12% decrease. A random 100K molecules were selected as the dataset. For each associated patent, the title, abstract, and description were scraped from Google Scholar and cleaned. The patent title, abstract, and first 3500 characters of the description were summarized into brief functional labels using ChatGPT (gpt-3.5-turbo) from July 15th, 2023, chosen for low cost and high speed. Cost per molecule was $0.005 using gpt-3.5-turbo. Responses from ChatGPT were converted into sets of labels and linked to their associated molecules. Summarizations were cleaned, split into individual words, converted to lowercase, and converted to singular if plural. The cleaned dataset resulted in 29,854 unique labels for 99,454 molecules. Fetching patent information and summarizing with ChatGPT, this method’s bottleneck, took 6 seconds per molecule with 16 CPUs in parallel. This could be sped up to 3.9 seconds by summarizing per patent rather than per molecule to avoid redundant summarizations, and even further to 2.6 seconds by using only US and WO patents.
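The structure-handling steps above might be implemented along these lines; this is a sketch, and the Morgan fingerprint radius and bit size are assumptions, as the exact fingerprinting settings are not specified here.

```python
# Sketch of the structure-handling steps: canonicalization, InChI keys,
# and the max-Tanimoto computation used in the significance tests.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def canonicalize(smiles: str):
    """Chiral RDKit-canonical SMILES; returns None for malformed strings."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol, isomericSmiles=True) if mol else None

def inchi_key(smiles: str) -> str:
    """InChI key, used to look up PubChem CIDs."""
    return Chem.MolToInchiKey(Chem.MolFromSmiles(smiles))

def max_tanimoto(query_smiles, pool_smiles):
    """Max fingerprint Tanimoto similarity from a query molecule to a pool."""
    to_fp = lambda s: AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(s), 2, nBits=2048)  # radius/size are assumptions
    q = to_fp(query_smiles)
    return max(DataStructs.TanimotoSimilarity(q, to_fp(s)) for s in pool_smiles)

print(canonicalize("OC(=O)c1ccccc1"))        # canonical form of benzoic acid
print(max_tanimoto("CCO", ["CCN", "CCCO"]))  # similarity to the closest pool member
```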
To consolidate labels by semantic meaning, the vocabulary was embedded with OpenAI’s text-embedding-ada-002 and clustered to group labels by embedding similarity. DBSCAN clustering was performed on the embeddings with a sweeping epsilon (Ester et al., 1996). The authors chose the epsilon for optimal clustering, set to be at the minimum number of clusters without quality degradation (e.g., avoiding the merging of antiviral, antibacterial, and antifungal). The optimal epsilon was 0.34 for the dataset herein, consolidating down from 29,854 to 20,030 labels. Representative labels for each cluster were created using gpt-3.5-turbo. The labels from a very large cluster of only IUPAC structural terms were removed to reduce non-generalizable labels. Labels appearing in <50 molecules were dropped to ensure sufficient predictive power. This resulted in a 99,454-molecule dataset with 1,543 unique functional labels, deemed the Chemical Function (CheF) dataset. Text-based functional landscape graph. Per-molecule label co-occurrence was counted across CheF. Counts were used as edge weights between label nodes to create a graph, visualized in Gephi using the force atlas, nooverlap, and label adjust methods (default parameters) (Bastian et al., 2009). Modularity-based community detection with 0.5 resolution resulted in 19 communities. Coincidence of labels and their neighbors in structure space. The 100K molecular fingerprints were t-SNE projected using scikit-learn, setting the perplexity parameter to 500. Molecules were colored if they contained a given label; see chefdb.app. The max fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of the 10 most commonly co-occurring labels was computed. The null co-occurrence was calculated by computing the max similarity from each molecule containing a given label to a random equal-sized set. Significance for each label was computed with an independent 2-sided t-test. The computed P values were then subjected to a false-discovery-rate (FDR) correction, and labels with P < 0.05 after FDR correction were considered significantly clustered (Benjamini & Hochberg, 1995). Limiting max co-occurring label abundance to 1K molecules was necessary to avoid polluting the analysis, as hyper-abundant labels would force the Tanimoto similarity to 1.0. Model training. Several multi-label classification models were trained to predict the CheF labels from molecular representations. These models included logistic regression (C=0.001, max_iter=1000), a random forest classifier (n_estimators=100, max_depth=10), and a feedforward neural network (BCEWithLogitsLoss, layer sizes (512, 256), 5 epochs, 0.2 dropout, batch size 32, learning rate 0.001; 5-fold CV to determine params). A random 10% test set was held out from all model training. Macro-average and individual label ROC-AUC and PR-AUC were calculated. ETHICS STATEMENT Consideration of ML chemistry dual use often focuses on the identification of toxic chemicals and drugs of abuse. As patents typically describe the beneficial applications of molecules, it is unlikely that a model trained on CheF labels will be able to identify novel toxic compounds. Functional labels for the chemical weapons VX and mustard gas were predicted with our model and were found to contain no obvious indications of malicious properties. On the contrary, drugs of abuse were more easily identifiable, as the development of neurological compounds remains a lucrative objective.
5-MeO-DMT, LSD, fentanyl, and morphine all had functional labels of their primary mechanism predicted with moderate confidence. However, these same labels were also predicted for benign molecules, indicating that it may be quite challenging to intentionally discover novel drugs of abuse using the methods contained herein. REPRODUCIBILITY STATEMENT The CheF dataset has been made publicly available under the MIT license at https://doi.org/10.5281/zenodo.8350193. An interactive visualization of the dataset can be found at chefdb.app. REFERENCES David B Ascher, Jerome Wielens, Tracy L Nero, Larissa Doughty, Craig J Morton, and Michael W Parker. Potent hepatitis c inhibitors bind directly to ns5a and reduce its affinity for rna. Scientific reports, 4(1):4765, 2014. Stephanie K Ashenden, Thierry Kogej, Ola Engkvist, and Andreas Bender. Innovation in small-molecule-druggable chemical space: Where are the initial modulators of new targets published? Journal of chemical information and modeling, 57(11):2741–2753, 2017. Dávid Bajusz, Anita Rácz, and Károly Héberger. Why is tanimoto index an appropriate choice for fingerprint-based similarity calculations? Journal of cheminformatics, 7(1):1–13, 2015. Mathieu Bastian, Sébastien Heymann, and Mathieu Jacomy. Gephi: an open source software for exploring and manipulating networks. In Proceedings of the international AAAI conference on web and social media, volume 3, pp. 361–362, 2009. Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological), 57(1):289–300, 1995. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Gene Ontology Consortium. The gene ontology (go) database and informatics resource. Nucleic acids research, 32(suppl_1):D258–D261, 2004. Kathleen E Corey, Nirali Shah, Joseph Misdraji, Barham K Abu Dayyeh, Hui Zheng, Atul K Bhan, and Raymond T Chung. The effect of angiotensin-blocking agents on liver fibrosis in patients with hepatitis c. Liver International, 29(5):748–753, 2009. Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola. Diffdock: Diffusion steps, twists, and turns for molecular docking. arXiv preprint arXiv:2210.01776, 2022. Kirill Degtyarenko, Paula De Matos, Marcus Ennis, Janna Hastings, Martin Zbinden, Alan McNaught, Rafael Alcántara, Michael Darsow, Mickaël Guedj, and Michael Ashburner. Chebi: a database and ontology for chemical entities of biological interest. Nucleic acids research, 36(suppl_1):D344–D350, 2007. David A Drachman. The amyloid hypothesis, time to move on: Amyloid is the downstream result, not cause, of alzheimer’s disease. Alzheimer’s & Dementia, 10(3):372–380, 2014.
K7l94Z81bH
Given the heterogeneity of agents in your setup, how does RLD3 ensure fair allocation of rewards and prevent potential domination by certain groups or agents, which could lead to suboptimal overall system performance?
Sparsity-Aware Grouped Reinforcement Learning for Designated Driver Dispatch Anonymous authors Paper under double-blind review Abstract Designated driving service is a fast-growing market that provides drivers to transport customers in their own cars. The main technical challenge in this business is the design of driver dispatch due to slow driver movement and sparse orders. To address these challenges, this paper proposes Reinforcement Learning for Designated Driver Dispatch (RLD3). Our algorithm considers group-sharing structures and frequent rewards with heterogeneous costs to achieve a trade-off between heterogeneity, sparsity, and scalability. Additionally, our algorithm addresses long-term agent cross-effects through window-lasting policy ensembles. We also implement an environment simulator to train and evaluate our algorithm using real-world data. Extensive experiments demonstrate that our algorithm achieves superior performance compared to existing Deep Reinforcement Learning (DRL) and optimization methods. 1 Introduction Designated driving, also known as chauffeur service and substitute driving, is an emerging business in the field of mobility service platforms. These platforms offer professional drivers to transport customers who are unable to drive, such as drunk drivers, rookie drivers, and tired drivers. The designated driver arrives with an electric scooter and drives the customer to their destination, as shown in Figure 1. The platform controller manages dispatching behaviors to improve customers’ experience and drivers’ income. Designated driving has become a significant and promising industry, with a market size of over 4 billion in China (BusinessGrowthReport, 2022). One of the critical challenges in this industry is the design of driver dispatch, also known as the fleet management problem. While typical ride-hailing platforms focus on improving the matching quality between drivers and customers, designated driving platforms still struggle to find a driver for each order. This is due to the sparsity of designated drivers and their slow movement. Besides, designated orders have “hub-and-spoke” structures, with origins concentrated in specific hotspots (e.g., bars, restaurants) and destinations primarily being residential areas, which often result in drivers being far away from potential customers. Optimization methods are commonly used to address fleet management problems (Zhang et al., 2017; Robbennolt & Levin, 2023), but they require a certain level of modeling for the supply and demand dynamics, which is complex in the real world. Recently, many Deep Reinforcement Learning (DRL) approaches have been proposed to solve fleet management problems in ride-hailing services (Oda & Joe-Wong, 2018; Al-Kanj et al., 2020; Zhang et al., 2020; Liu et al., 2020; Shou & Di, 2020; Qin et al., 2021; Eshkevari et al., 2022; Liu et al., 2022; Zheng et al., 2022). However, designated drivers present unique challenges compared to traditional taxi ride-hailing systems. The challenges stem mainly from the sparsity, which can be attributed to three key factors. Firstly, the dataset itself exhibits sparsity. In the case of designated driving, the number of drivers is considerably smaller compared to taxi drivers, resulting in a sparser spatial-temporal distribution. To illustrate, our dataset collected from Hangzhou, a Chinese city with a population of approximately 10 million, has only around 3,000 designated drivers and nearly 13,000 order requests per day.
Secondly, individual drivers experience sparse feedback on the direct matching of orders. As designated drivers move slowly and are often located far away from available orders, each driver, on average, completes only 3 to 4 orders per day. Additionally, after matching with an order, the driver also spends a significant amount of time on the way to pick up the client. Thirdly, the cross-effect of agents is sparse and long-lasting. This is due to the slow and continuous impact of driver movements on their distribution, which is crucial in fleet management. Before each driver is matched with an order, they typically engage in continuous movement for several quarters of an hour. Therefore, considering the lasting impact becomes more crucial than focusing solely on the transient movements of other agents. Moreover, the heterogeneity and scalability of agents pose additional challenges for traditional MARL algorithms. Factors such as varying speeds and mileage limitations among different drivers, as well as the fluctuating number of drivers commuting to work each day, further contribute to these challenges. To address these challenges, this paper proposes a group-sharing window-lasting Reinforcement Learning framework for Designated Driver Dispatch problems, RLD3. We model the problem as a Decentralized Partially Observed Markov Decision Process (Dec-POMDP), capturing the fact that drivers usually have local observations. RLD3 incorporates several novel designs. Firstly, we introduce a group-sharing structure, where agents are classified into several groups. Agents within the same group share the same network parameters and experience data. This design strikes a balance among sparsity, heterogeneity, and scalability. Secondly, we design a reward structure for the DRL algorithm. This specially designed reward estimates the potential of the neighborhood around the driver by considering the distances of all unmatched orders in that area, addressing the issue of sparse feedback. It also incorporates complicated movement constraints by applying heterogeneous moving costs. Thirdly, we design a time window to calculate the cumulative actions of agents during consecutive execution periods, allowing estimation of other agents’ policies and making it suitable for sparse and lasting multi-agent interactions. Finally, we implement an environment simulator using real-world designated driving datasets and conduct extensive experiments to train and evaluate different algorithms. The results demonstrate that RLD3 outperforms existing DRL benchmarks and optimization policies in terms of completed order numbers and adherence to moving constraints. The main contributions of this paper are summarized as follows: i) We are the first to formulate a general Dec-POMDP framework for designated driver dispatch problems in designated driving markets. ii) We propose a novel MARL algorithm, RLD3, to address the challenges of designated driver dispatch and achieve a trade-off among scalability, heterogeneity, and sparsity. This algorithm builds upon group-sharing structures and window-lasting agent interactions with a potential/cost-aware reward. iii) We design a designated driving simulator using real-world datasets and conduct extensive experiments. The results show that RLD3 efficiently learns system dynamics and outperforms existing DRL and optimization methods. 2 RELATED WORK Driver Dispatch. As mentioned in Section 1, the driver dispatch problem has been extensively investigated in the existing literature.
Two prominent methodologies have garnered significant attention: optimization algorithms (Zhang et al., 2016; Robbennolt & Levin, 2023) and DRL-based algorithms (Oda & Joe-Wong, 2018; Al-Kanj et al., 2020; Zhang et al., 2020; Liu et al., 2020; Shou & Di, 2020; Qin et al., 2021; Eshkevari et al., 2022; Liu et al., 2022; Zheng et al., 2022). Optimization algorithms leverage historical driver and order distributions to formulate dispatch policies, but they require precise knowledge of demand-supply dynamics, which is challenging to obtain in the real world. DRL-based algorithms are powerful in solving driver dispatch problems as they can learn a parametric model without relying on strong problem-based assumptions and can optimize long-term effects through sequential decision-making. However, taxi drivers move at a faster speed, and taxi orders are much denser and more balanced. These features significantly reduce the sparsity challenges faced by traditional DRL-based dispatch algorithms. Thus, it is difficult to directly transfer the models and algorithms to the designated driving platform. **Reinforcement Learning.** Reinforcement learning (RL) techniques have shown promise in addressing complex multi-agent problems. The Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG) (Lowe et al., 2017) extends the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2016) and Deterministic Policy Gradient algorithms (Silver et al., 2014) by using deep neural networks to approximate action values and handle agent interactions. Such algorithms within the traditional CTDE paradigm (Claus & Boutilier, 1998) often allow agents to achieve good overall performance by utilizing heterogeneous strategies. However, due to the independent nature of each agent’s policy, they encounter the challenge of sparse feedback in the designated driving problem, leading to lower efficiency in exploration and policy learning. To address sparsity, Random Network Distillation (RND) (Burda et al., 2019) uses an additional value function to estimate an intrinsic reward in order to enhance exploration. In the designated driving platform, due to the unique “hub-and-spoke” structure of orders, the hotspots of orders are more concentrated. Exploring non-semantic information would result in excessive driver movement costs. Curriculum Learning approaches, such as Curriculum Deep Reinforcement Learning (Hacohen & Weinshall, 2019) and Relevant Curriculum Reinforcement Learning (Flet-Berliac & Preux, 2020), help in learning from sparse feedback by planning the neural network’s learning path. However, planning learning paths in multi-agent scenarios is challenging due to the complex dynamics of cooperation and competition among drivers. Mean-Field Reinforcement Learning (MFRL) techniques, such as Mean Field Multi-Agent Reinforcement Learning (MFMARL) (Yang et al., 2018) and Multi-Agent Mean Field Q-Learning (Ganapathi Subramanian et al., 2020), model agent interactions as the interaction between a single agent and a field effect. Mean-field methods can address the issue of sparse agent distributions but lack consideration for the lasting interaction of different drivers, which should be taken into account since designated drivers have slow movement and complex constraints.
To address scalability and heterogeneity, Hierarchical Reinforcement Learning (HRL) approaches, such as Feudal HRL (Vezhnevets et al., 2017), Data-Efficient HRL (Nachum et al., 2018), and Model-Free HRL (Rafati & Noelle, 2019), decompose large-scale problems into sub-agents. However, in the context of designated driver dispatch, additional attention should be paid to the complex interactions among agents and the various sparsity issues mentioned before. ### 3 RLD3: Reinforcement Learning for Designated Driver Dispatch In this section, we present the formulation of the Decentralized Partially Observed Markov Decision Process (Dec-POMDP) for the designated driver dispatch problem. We introduce three unique designs in our algorithm: the grouped structure, the potential reward, and the lasting agent interaction. #### 3.1 Formulation We consider the designated driving service in one metropolis. Each day, there are $N$ drivers with random initialization. Orders appear in the system at specific times and locations. Unmatched orders have limited patience and will be canceled after a waiting period following a Poisson distribution. Drivers that have completed their corresponding orders leave the system after off-duty time. For simplicity, we assume that time in the system is slotted, with each time step corresponding to 30 seconds. At each time step, the platform decides the dispatch movement for every idling driver. We assume that drivers fully comply with movement instructions. The statuses of drivers and orders are then updated before the next time step, reflecting matches between idling drivers and unmatched orders as well as order generation and completion. The Dec-POMDP formulation $\langle N, S, O, A, P, R, \gamma \rangle$ is presented as follows: **Agent** $i \in [N]$: Each driver is considered an agent, resulting in a total of $N$ unique agents. The platform can only dispatch idling drivers, as each agent can be in one of three statuses: offline, idle, or serving orders at any given time $t$. State \( s \in S \): At each time \( t \), a global state is maintained, taking into account the status of all drivers and orders. This includes coordinates, moving distance, working status, serving targets, and moving targets for drivers. The state also includes calling time, patience, origin, destination, and serving status for orders. Observation \( s \mapsto o_i \in O \): Drivers have partial observations of the state \( s \). In our implementation, each agent’s observation is represented by a 22-dimensional vector: \[ ([\#\text{order}], [\#\text{driver}], [\min \text{dist}], t, \text{lat}, \text{lng}, \text{move}), \] where the first three terms denote the number of orders to be matched, the number of idling drivers, and the distance to the closest order in each of the six directional segments of the neighborhood, as shown in Figure 12. The last four terms represent time, latitude, longitude, and the distance the driver has already moved. Action \( a_1 \times \cdots \times a_N \in A \): The platform proposes a joint action instructing the movement policy for all available drivers based on their observations \( o_t \) at time \( t \). The action space for an individual agent consists of seven discrete actions, including the six neighboring directions and staying at the current location, as shown in Figure 11. Agents located at the boundary and corners have a smaller action space.
State Transition \( P : s \times a[N] \mapsto s' \): The movement of drivers, along with order updates and matches between drivers and orders, induces state transitions in the environment. Reward \( r_i \in \mathbb{R} \): After executing an action, each agent receives its distinct instant reward \( r_i \). The instant reward \( r_i^t \) is defined as the sum of the immediate match reward, neighborhood potential reward, and move cost: \[ r_i^t = mt_i^t + nb_i^t + mv_i^t. \] The immediate match reward \( mt_i^t \) directly relates to the gross merchandise volume of the platform, which is the objective of our algorithm. To optimize volume without using discriminatory personal information, the immediate match reward is set to a fixed number: \[ mt_i^t = \begin{cases} 50, & \text{if agent } i \text{ is matched with an order at } t; \\ 0, & \text{otherwise.} \end{cases} \] The move cost and neighborhood potential reward will be introduced in Sections 3.2 and 3.3. 3.2 Towards Dataset Sparsity through Group Sharing We introduce the concept of group sharing to address dataset sparsity issues in our approach. Meanwhile, we estimate the influence between these groups using the mean-field effect to ensure heterogeneity and scalability. In real-world scenarios, drivers can be classified into several types based on their cost conditions. These endogenously heterogeneous agents are naturally partitioned into several groups. Agents within the same group share the same network along with their experience data in the training process. Specifically, we divide the \( N \) agents into \( M \) classes, where \( M \) is a fixed number. To control grouped drivers’ moving distance, we include the move cost \( mv_i^t \) in the reward as a regularizer that influences the behavior of agents. The move cost for agent \( i \) at time \( t \) is set as follows: \[ mv_i^t = \begin{cases} -c_j, & \text{if agent } i \text{ moves;} \\ 0, & \text{if agent } i \text{ stays;} \end{cases} \] where \( j \) is the group index of agent \( i \). RLD3 utilizes double critic-networks and double actor-networks, with delayed copies used for soft updates. During the training stage, a group network can access the experience data of all agents belonging to that group, stored in a replay buffer. Therefore, a network can efficiently explore different individuals of the same category in the metropolis and gather more experiences. During the execution stage, each agent calls its corresponding group network to perform policy execution independently. The policy input for each agent is based on its current observation while the output is its deterministic action. To transform the continuous seven-dimensional output vector into a deterministic action, the last layer uses Gumbel-Softmax (Jang et al., 2017). Such a mixed strategy ensures that even agents of the same group at the same location may execute different discrete actions, avoiding competition among agents. The information flow during the execution stage is illustrated in Figure 2. 3.3 Towards Feedback Sparsity through Space Potential Since the immediate match reward is highly sparse for the DRL method in designated driving platforms (i.e., it only occurs at the time step with a successful order match, which is rare), we introduce a dense neighborhood potential reward $nb_i^t$ to reflect the potential value of the current area. The intuition is that the distance to an order in the neighborhood reflects how fast an agent can pick up the order.
Almost all orders in the neighborhood are attractive to the driver, although the closest ones are especially attractive. Specifically, we assign potential values to nearby unmatched orders, with higher feedback given to closer orders. We then sum up all potential values to represent the total potential value of the driver's current position. This provides reward feedback to the driver at every time step, compensating for the sparse immediate match reward. The potential reward is defined as follows:

$$nb_i^t = (d^* + 0.1)^{-0.5} + 0.1 \times \sum_{\text{neighbor order } j} (d_{ij} + 0.1)^{-0.5},$$

where $d_{ij}$ denotes the distance from driver $i$ to order $j$, and $d^*$ denotes the distance to the closest order. The exponent is set to $-0.5$ so that the potential reward increases as the distance decreases and is convex in the distance, which encourages designated drivers to approach a specific order rather than maintain an equal distance from all orders.

#### 3.4 Towards Interaction Sparsity through Window Lasting

In designated driving platforms, agents are often far away from each other, resulting in sparse and long-term agent interactions instead of single-step actions. For example, a driver's income is not directly influenced by the short-term actions of drivers located far away, but rather by the accumulated distribution changes caused by the lasting movements of drivers. Therefore, we use the average action over a time window, instead of a single-step action, when considering other agents' policies.

To achieve this, in addition to recording regular tuples $(s, a_{[N]}, r_{[N]}, s')$, the buffer calculates and stores the window-lasting actions for all agents. The window-lasting action $\hat{a}_i^t$ represents the average of sequential idling actions over the last $W$ time steps:

$$\hat{a}_i^t = \mathbb{E}_{s \sim [t - W, t] \cap T_{\text{last idle}}} \left[ a_i^s \right], \quad (6)$$

where $T_{\text{last idle}}$ refers to the most recent period in which the driver was idling, since different idling periods may result in diverse moving directions. The mean-field effect for group $j$ is then defined as:

$$g_j^t = \mathbb{E}_{i \in \text{group } j} \left[ \hat{a}_i^t \right]. \quad (7)$$

Additionally, we use an encoder at the input of the critic to handle complex state representations and their varying dimensions. This encoder summarizes the distributions of the current unmatched orders and idling agents, respectively. We employ the K-Means algorithm (Hartigan & Wong, 1979) for this encoder. The input structure of the critic network is therefore $Q_i(o_i, \text{encode}(s), a_i, g_{[M]})$, as shown in Figure 3.

All networks utilize two fully connected layers and the GELU activation function (Hendrycks & Gimpel, 2016).

Figure 3: Network structure.

#### 3.5 Network Update

The network update follows the gradient-based actor-critic paradigm. To ensure smoother driver trajectories, we add the temporal difference of adjacent actions, $H(a, a') = \|a - a'\|_2$, to the Bellman loss to form the critic loss.
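Before the full loss is assembled below, the two quantities just defined can be sketched directly from their formulas; treating an empty neighborhood as zero reward is our assumption for the edge case.

```python
import numpy as np

def potential_reward(driver_xy, order_xy):
    """Neighborhood potential: nb = (d* + 0.1)^-0.5 + 0.1 * sum_j (d_ij + 0.1)^-0.5."""
    order_xy = np.asarray(order_xy).reshape(-1, 2)
    if len(order_xy) == 0:
        return 0.0  # no neighborhood order (assumption)
    d = np.hypot(*(order_xy - np.asarray(driver_xy)).T)
    return (d.min() + 0.1) ** -0.5 + 0.1 * np.sum((d + 0.1) ** -0.5)

def window_lasting_action(idle_actions, W=60):
    """Eq. (6): average the one-hot actions from the most recent idling
    period, truncated to the last W time steps."""
    recent = np.asarray(idle_actions)[-W:]
    return recent.mean(axis=0)
```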
After incorporating the above techniques, the loss function for the value network becomes:

$$L(\theta_i) = \mathbb{E}_{\text{sample } t} \left[ (Q^\pi_i(o_i, \text{encode}(s), a_i, g_{[M]}) - y)^2 + \lambda H(a_i, a'_i) \right],$$
$$y = r_i + \gamma Q^{\pi'}_i(o'_i, \text{encode}(s'), a'_i, g_{[M]}). \quad (8)$$

Similarly, the gradient of the policy network is now:

$$\nabla_{\theta_i} J(\pi_i) = \mathbb{E}_{\text{sample } t} \left[ \nabla_{\theta_i} \pi_i(a_i | o_i) \nabla_{a_i} Q^\pi_i(o_i, \text{encode}(s), a_i, g_{[M]}) \big|_{a_i = \pi_i(o_i)} \right]. \quad (9)$$

The complete algorithm framework is summarized in Algorithm 1.

Algorithm 1 RLD3.
Require: order data, driver pool $[N]$, episode number $MAX$, episode length $T$, learning rate $\alpha$, update rate $\tau$, batch size $S$, group number $M$, window size $W$.
1: for episode from 1 to $MAX$ do
2: Initialize environment and receive an initial state $s$.
3: for $t$ from 1 to $T$ and not all drivers are off-line do
4: Generate actions $a_i = \pi_i(o_i)$ for all idling drivers.
5: Execute the joint action $(a_1, a_2, \cdots, a_N)$ and observe reward $r$ and next state $s'$.
6: Push $(s, a, r, s', \hat{a})$ into the buffer.
7: $s = s'$.
8: end for
9: for group $j$ from 1 to $M$ do
10: Sample a batch of $S$ samples $(s, o_i, a_i, r_i, s', \hat{a}_i)$ $(i \in \text{group } j)$ from the replay buffer.
11: Update the critic by minimizing $L(\theta_j)$.
12: Update the actor using the sampled policy gradient $\nabla_{\theta_j} J$.
13: end for
14: Update the target network parameters for each group $j$ by $\theta'_j = \tau \theta_j + (1 - \tau) \theta'_j$.
15: end for

### 4 Simulator & Experiment

We design and implement a simulator based on real-world datasets to train and evaluate RL algorithms for the designated driver dispatch problem. We then conduct experiments on our proposed model using the simulator and real-world data. We sample 50 drivers and 500 orders for the training stage. Each experiment is repeated with 4 different seeds, and the average results with confidence intervals are presented. To mitigate the sparsity issue in early training, we use the first 100 episodes for random exploration.

#### 4.1 Simulator

The simulator is built on real-world designated driver and order datasets from Hangzhou, a city in China. The datasets include over 3,000 drivers and nearly 13,000 orders per day. Each order's information consists of its coordinates and the times of generation, match, completion, and possible cancellation. Each driver's information includes their online time, offline time, and online coordinates. The simulator models the entire process of how the states of drivers and orders evolve. It includes a driver dispatch module that allows for the repositioning of any idling driver. The simulator serves as a training environment for RL algorithms and can also evaluate the performance of various dispatch policies. A detailed introduction of the simulator is in Appendix A.

#### 4.2 Performance Comparison

We compare the performance of our algorithm with existing DRL methods and optimization-based policies. The benchmark DRL algorithms include independent DDPG (Lillicrap et al., 2016), MADDPG (Lowe et al., 2017), MAMFRL (Yang et al., 2018), and a multi-agent version of RND (Burda et al., 2019). These algorithms are applied with the immediate match reward and move cost to achieve a trade-off between matching and movement. All DRL algorithms use the same two hidden layers of dimension 64 and batch size of 512.
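For reference, the critic update in Eq. (8) can be sketched as below. The discount $\gamma$ and smoothness weight $\lambda$ values are assumptions; the paper does not report them here.

```python
import torch

def critic_loss(q, q_next_target, r, a, a_next, gamma=0.95, lam=0.1):
    """Eq. (8): squared Bellman error plus the action-smoothness penalty
    H(a, a') = ||a - a'||_2. q, q_next_target, r: (B,) tensors from the
    critic and target critic; a, a_next: (B, 7) action vectors."""
    y = r + gamma * q_next_target.detach()   # TD target from the target networks
    smooth = torch.norm(a - a_next, dim=-1)  # temporal difference of adjacent actions
    return ((q - y) ** 2 + lam * smooth).mean()
```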
The soft-update rate is set to 0.01, and training uses the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.01. All DRL algorithms are trained for 1000 episodes. In RLD3, the lasting window size is set to 60 steps, and the group number is set to 5. The optimization-based policies include an order-oriented random walk, the max-throughput dispatch policy (Robbenolt & Levin, 2023), and model predictive control (MPC) (Zhang et al., 2016). To ensure fairness in comparison, the optimization-based methods estimate the current order and driver dynamics based on past history.

Figure 4: Training performance. The order of the legends in the figure is the same as the order of performances in the last episode.

Table 1: Testing performance. The testing performance is evaluated based on the model that has the best episodic performance, while the IID generalization performance is measured using an additional testing dataset of 10 episodes that are separated from the training dataset.

| Category | Algorithm | Testing Order | Testing Distance (km) | IID Order | IID Distance (km) |
|---|---|---|---|---|---|
| Our algorithm | RLD3 | 237.2 ± 3.4 | 7.0 ± 1.3 | 234.0 ± 3.9 | 7.0 ± 1.3 |
| Taxi-dispatch | Deep-dispatch | 233.3 ± 2.3 | 15.5 ± 2.4 | 229.0 ± 2.3 | 15.1 ± 2.3 |
| DRL-based | DDPG | 186.9 ± 5.2 | 27.8 ± 3.0 | 183.1 ± 5.5 | 27.9 ± 3.1 |
| | MADDPG | 215.7 ± 3.7 | 29.6 ± 0.6 | 212.0 ± 4.4 | 30.2 ± 0.5 |
| | MADDPG-RND | 228.6 ± 3.5 | 65.3 ± 0.7 | 224.0 ± 3.7 | 66.3 ± 0.9 |
| | MAMFRL | 224.3 ± 3.7 | 34.3 ± 5.3 | 221.1 ± 4.8 | 34.9 ± 5.5 |
| Optimization | Random | 180.1 | 35.3 | 178.3 | 34.4 |
| | Max-throughput | 229.8 | 73.2 | 228.8 | 73.1 |
| | MPC | 228.1 | 1.7 | 228.2 | 1.5 |

As shown in Figure 4 and Table 1, our model outperforms all other algorithms in terms of the number of completed orders, and it moves a shorter distance than the methods with comparable completed-order performance. As mentioned in Section 2, RND falls into semantically empty exploration because its agents keep moving; MADDPG and MAMFRL fail to differentiate the value of different directions when there are no nearby orders, resulting in a significant amount of random walking. Among the optimization baselines, the max-throughput policy optimizes the Lyapunov drift by treating the drivers as servers, which in turn leads to intense competition among drivers for orders. As one of the most popular algorithms in control theory, MPC outperforms all DRL baselines except our proposed algorithm RLD3.

#### 4.3 Independent and Identically Distributed (IID) Generalization

We conducted IID generalization experiments to assess the robustness and generalization of our algorithm. In IID generalization, the data points in both the training and testing datasets are assumed to be drawn independently and identically from the same underlying distribution (Kirk et al., 2023). The generalization performance is then synonymous with the test-time performance on IID samples. We sampled another 500 orders per episode from the real-world data that were not seen during training. As shown in Table 1, our algorithm does not decline significantly in IID performance and still outperforms the other methods. An interesting phenomenon is that all algorithms demonstrate good IID generalization performance. This is because the designated driving platform itself exhibits sparsity, and the hotspots of orders are concentrated.
Since we maintain the same initial state for all drivers and the same underlying order distribution in the IID generalization test, drivers are still able to effectively transfer the learned hotspot information from previous experiences when moving.

#### 4.4 Ablation Study

We conducted an ablation study on the group-sharing structure, the agent interaction design, the state encoder, and the reward design to gain insights into our model's settings and behavior.

Table 2: Ablation study.

| Algorithm | Order | Distance (km) |
|---|---|---|
| RLD3 | 237.2 ± 3.4 | 7.0 ± 1.3 |
| RLD3 for 1 group | 150.5 ± 9.2 | 21.5 ± 1.2 |
| RLD3 for 50 groups | 231.7 ± 3.6 | 9.1 ± 0.4 |
| MADDPG for 5 groups | 223.0 ± 3.6 | 24.5 ± 1.2 |
| MADDPG-RND for 5 groups | 211.5 ± 7.4 | 56.1 ± 1.8 |
| MAMFRL for 5 groups | 227.0 ± 10.2 | 6.1 ± 1.7 |
| RLD3 without window-lasting | 229.5 ± 3.2 | 27.1 ± 3.4 |
| RLD3 without state encoder | 232.2 ± 3.7 | 27.2 ± 1.6 |
| RLD3 without potential reward | 223.8 ± 3.8 | 6.7 ± 1.2 |
| RLD3 without move cost | 210.7 ± 6.7 | 48.3 ± 2.6 |

**Group Number.** The group number is a typical hyperparameter that determines the number of agent types. A larger group number can better represent the heterogeneity of drivers, but it also increases the storage pressure and training time. Additionally, a large group number may not learn well in sparse feedback situations. The results in Table 2 show that the group-sharing structure helps improve the performance of both MADDPG and our proposed algorithm RLD3.

**Window-lasting Agent Interaction.** Our algorithm uses a window-lasting policy ensemble in the updating stage to better learn the cross-effects of other agents' policies. We evaluated the algorithm without the window average. As shown in Table 2, the model without the window-lasting interaction cannot learn others' policies well. This could be due to high-frequency fluctuations in agent actions that are difficult to learn, as well as the fact that single-step actions may not be executed for agents that are not idling. Consequently, the value function underfits when other agents' policies are ensembled without the window average.

**State Encoder.** To capture the distribution information of orders and drivers during the training stage, we employ an encoder to encode the system's state. It is worth noting that, due to the varying numbers of orders and drivers, the dimensions of the state vector constantly change, making it difficult for the value function to use directly. Therefore, we extract the distribution information of orders and drivers separately using the K-Means method. As shown in Table 2, such a state encoder can assist DRL algorithms in better understanding the state of the designated driving platform, particularly in extracting driver-order distribution information. Additionally, when comparing our algorithm without the state encoder against traditional DRL baselines that only utilize observation information, our algorithm still outperforms them owing to the group-sharing and window-lasting interaction techniques.

**Reward Design.** We compared different reward components by removing the neighborhood potential reward and the move cost, as shown in Table 2. All reward settings were tested with our proposed group-sharing structure and training process. The dense potential reward not only increases performance but also stabilizes the training process, as indicated by the much smaller value function loss.
In contrast, the model without the move cost falls into a suboptimal regime in which only the number of completed orders is optimized and the distance constraint is ignored.

### 5 Conclusion

In this paper, we addressed the problem of driver dispatch on designated driving platforms, a complex scenario with sparsity issues and strict constraints. To capture the spatiotemporal dynamics of imbalanced demand-supply relations, we proposed a novel multi-agent deep reinforcement learning (DRL) algorithm based on the decentralized partially observed Markov decision process (Dec-POMDP) formulation. Our algorithm leverages a group-sharing structure and a specially designed reward to address the trade-off between sparsity, scalability, and heterogeneity. The window-lasting agent interaction technique enables our algorithm to handle the long-lasting cross-effects of agents. Through extensive experiments on a simulator based on real-world data, we demonstrated that our algorithm outperforms traditional optimization-based policies and existing DRL algorithms in terms of completed order numbers and moving constraints. The results highlight the effectiveness of our approach in addressing the challenges of the designated driver dispatch problem.

In future work, we aim to make the grouping process trainable by incorporating self-supervised algorithms such as clustering. This would enable us to better model the interactions between agents and enhance the performance of our algorithm. Additionally, we are interested in studying the impact of non-compliance on the performance of driver dispatch, as the existing literature often assumes drivers' full compliance. Understanding and addressing non-compliance issues can further enhance the effectiveness of our algorithm in real-world scenarios.

ETHICS STATEMENT

During the data collection process, we filtered out all personal information regarding designated drivers and orders and used virtual IDs to prevent the leakage of behavior patterns. In the experimental design, we did not employ any discriminatory strategies towards any specific driver or order. Our optimization objective is to maximize the gross merchandise volume of the entire platform, thereby improving service quality while increasing workers' income.

REPRODUCIBILITY STATEMENT

To facilitate reproducibility, we provide a detailed description of the models and training details in the main text. We also list all relevant parameters in the appendix. If the paper is accepted, we will provide an open-source link in the camera-ready version.

REFERENCES

Lina Al-Kanj, Juliana Nascimento, and Warren B Powell. Approximate dynamic programming for planning a ride-hailing system using autonomous fleets of electric vehicles. *European Journal of Operational Research*, 284(3):1088–1106, 2020.

Yuri Burda, Harrison Edwards, Amos J. Storkey, and Oleg Klimov. Exploration by random network distillation. In *7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019*. OpenReview.net, 2019.

BusinessGrowthReport. Global designated driving service market research report 2022. [https://www.businessgrowthreports.com/TOC/22043825](https://www.businessgrowthreports.com/TOC/22043825), 2022.

Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In *Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, AAAI '98/IAAI '98*, pp. 746–752, USA, 1998.
Soheil Sadeghi Eshkevari, Xiaocheng Tang, Zhiwei Qin, Jinhan Mei, Cheng Zhang, Qianying Meng, and Jia Xu. Reinforcement learning in the wild: Scalable RL dispatching algorithm deployed in ridehailing marketplace. In *KDD ’22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022*, pp. 3838–3848, 2022. Yannis Flet-Berliac and Philippe Preux. Only relevant information matters: Filtering out noisy samples to boost RL. In Christian Bessiere (ed.), *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020*. ijcai.org, 2020. Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, and Nidhi Hegde. Multi type mean field reinforcement learning. In *Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems*, AAMAS ’20, pp. 411–419, Richland, SC, 2020. Guy Hacohen and Daphna Weinshall. On the power of curriculum learning in training deep networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA*, volume 97 of *Proceedings of Machine Learning Research*, pp. 2535–2544. PMLR, 2019. J. A. Hartigan and M. A. Wong. A k-means clustering algorithm. *Journal of the Royal Statistical Society: Series C (Applied Statistics)*, 28(1):100–108, 1979. Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. *CoRR*, abs/1606.08415, 2016. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In *5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings*. OpenReview.net, 2017.
uvFhCUPjtI
The timespan of edges is a natural attribute of a temporal graph. Some recent works [1] [2] show that embedding the timespan of edges is important, especially in sequential recommendation. I am wondering whether EFT could embed the timespan of edges, and how. Such a discussion may help the audience develop better ideas on applying or extending EFT to solve their own problems in different application domains.
Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs

Anson Bastos\textsuperscript{1,2}, Kuldeep Singh\textsuperscript{4}, Abhishek Nadgeri\textsuperscript{3}, Manish Singh\textsuperscript{2}, Toyotaro Suzumura\textsuperscript{5} \textsuperscript{1}HERE Technologies, India \textsuperscript{2}Indian Institute of Technology Hyderabad, India \textsuperscript{3}RWTH Aachen, Germany \textsuperscript{4}Cerence Gmbh, Germany \textsuperscript{5}The University of Tokyo, Japan ansonbastos@gmail.com, kuldeep.singh1@cerence.com, abhishek.nadgeri@rwth-aachen.de msingh@csce.iith.ac.in, suzumura@acm.org

Abstract

We present the Evolving Graph Fourier Transform (EFT), the first invertible spectral transform that captures evolving representations on temporal graphs. We motivate our work by the inadequacy of existing methods for capturing the evolving graph spectra, which are also computationally expensive due to the temporal aspect along with the graph vertex domain. We view the problem as an optimization over the Laplacian of the continuous time dynamic graph. Additionally, we propose pseudo-spectrum relaxations that decompose the transformation process, making it highly computationally efficient. The EFT method adeptly captures the evolving graph's structural and positional properties, making it effective for downstream tasks on evolving graphs. Hence, as a reference implementation, we develop a simple neural model induced with EFT for capturing evolving graph spectra. We empirically validate our theoretical findings on a number of large-scale and standard temporal graph benchmarks and demonstrate that our model achieves state-of-the-art performance.

1 Introduction

In numerous practical situations, graphs exhibit temporal characteristics, as seen in applications like social networks, citation graphs, and bank transactions, among others (Kazemi et al., 2020). These temporal graphs can be divided into two types: 1) temporal graphs with constant graph structure (Grassi et al., 2017; Cao et al., 2020), and 2) temporal graphs with dynamic structures (Zhou et al., 2022; Bastos et al., 2023; da Xu et al., 2020). Our focus in this work is the latter case. Evolving graphs have been comprehensively studied from the spatio-temporal graph neural network (GNN) perspective, with a focus on propagating local information (Pareja et al., 2020; Shi et al., 2021; Xiang et al., 2022; da Xu et al., 2020). Despite the success of spectral GNNs at capturing non-local dependencies in graph signals on static graphs (Wang & Zhang, 2022), they have not been applied to temporal graphs with evolving structure. To make spectral GNNs work effectively and efficiently for temporal graphs, an invertible transform is needed that collectively captures the evolving spectra along the graph vertex and time domains. To the best of our knowledge, no such spectral-domain transform exists for temporal graphs with evolving structures.

In the present literature, the Graph Fourier Transform (GFT), a generalization of the Fourier Transform, exists for static graphs but cannot capture the spectra of an evolving graph structure (Shuman et al., 2013). Hence, it cannot be applied to temporal graphs due to the additional temporal aspect. One naïve extension would be to treat the time direction as a temporal edge, construct a directed graph with newly added nodes at each timestep, and find the Eigenvalue Decomposition (EVD) of the joint graph.
However, this would lose the distinction between variation along the temporal and vertex domains. Moreover, such an approach would incur an added computational cost by a multiplicative factor of $\mathcal{O}(T^3)$, which would be prohibitively high for temporal settings with a large number of timesteps. Thus, in this paper, we attempt to find an approximation to the dynamic graph transform that captures its evolving spectra and is efficient to compute.

We aim to propose a novel transform from a temporal graph to its frequency domain. For this, we consider the Laplacian of the dynamic graph and find the orthogonal basis of maximum variation to obtain the spectral transform (Hammond et al., 2011). We view this as an optimization of the variational form of the Laplacian such that the optimal value is within the $\epsilon$-pseudospectrum (Tao, 2008). We then show that such an optimization gives us a solution that is simple and efficient to compute, while also being close to the exact solution of the variational form under certain conditions of Lipschitz continuous dynamic graphs. Effectively, we propose a method to simultaneously perform a spectral transform along both the time and vertex dimensions of a dynamic graph. This addresses the following challenges of the natural EVD extension to dynamic graphs: 1) The proposed transformation is computationally efficient compared to the direct eigendecomposition of the joint Laplacian. 2) The distinction between time- and vertex-domain frequency components under the proposed transform lends interpretability to the transformed spectral domain. We term the proposed concept the "Evolving Graph Fourier Transform" (EFT).

In summary, we make the following key contributions:

• We propose EFT (grounded in theoretical foundations), which transforms a temporal graph to its frequency domain for capturing evolving spectra.

• We provide theoretical bounds on the difference between EFT and the exact solution to the variational form and analyze its properties.

• As a reference implementation, we develop a simple neural model induced with the proposed transform to process and filter the signals on dynamic graphs for downstream tasks. We perform extensive experimentation on large-scale and standard datasets for dynamic graphs to show that our method can effectively filter out noise signals and enhance task performance against baselines.

2 Related Work

Spectral Graph Transforms: The work of (Hammond et al., 2011) was among the first to propose a computationally efficient algorithm to compute the Fourier Transform for static graphs. Loukas et al. (Loukas & Foucard, 2016) conceptualized the Joint Fourier Transform (JFT) over graphs on which signals change with time. JFT has been generalized in (Kartal et al., 2022) with the proposed Joint Fractional Fourier Transform (JFRT). However, JFT and JFRT do not consider graph structures evolving with time. (Cao et al., 2021) apply JFT and propose a model for time series forecasting. (Villafane-Delgado & Aviyente, 2017) summarized graphs over time by applying Tucker decomposition to the dynamic graph Laplacian to obtain an orthogonal matrix, and further apply it to a cognitive control experiment. However, this method does not fully capture the varying graph information in a lossless sense. Researchers have also proposed spectral methods for spatio-temporal applications such as action recognition (Yan et al., 2018; Pan et al., 2020), traffic forecasting (Yu et al., 2017), etc.
Other works such as (Mahyari & Aviyente, 2014; Chen et al., 2022; Sarkar et al., 2012; Kurokawa et al., 2017; Jiang et al., 2021; Cheng et al., 2023) also consider temporal graphs, but ignore the evolving structure. We position our work as a novel spectral graph transform for temporal graphs, which is currently a gap in the existing literature.

Temporal Graph Representation Learning: Since static graph methods do not work well on dynamic graphs (Pareja et al., 2020), researchers have proposed a slew of methods (Pareja et al., 2020; Goyal et al., 2020; Xiang et al., 2022) for learning on dynamic graphs for problems such as link prediction and node classification. One elementary way to adapt methods developed for static graphs to dynamic graphs is to use RNN modules in conjunction with GNN modules to capture the evolving graph dynamics. Researchers (Seo et al., 2016; Narayan & Roe, 2018; Manessi et al., 2020) have explored this idea extensively. Some other recent approaches model several real-world phenomena; however, these methods rely on an RNN for encoding temporal information (Bastas et al., 2019; da Xu et al., 2020; Ma et al., 2020). The most generic among these works is TGN (Temporal Graph Networks) (Rossi et al., 2020), which remembers nodes and connections seen in the past and uses that memory when updating new nodes and connections. However, the memory updater uses a GRU, which may suffer from issues such as vanishing gradients that limit the ability to capture long-term information. Also, these models have been studied on small graphs spread over a limited time duration (e.g., one month). Considering large-scale temporal graphs with evolving structures, one such application is sequential recommendation (SR) with decades of temporal information (1996–2018) (Zhang et al., 2022). Researchers (Li et al., 2020; Zhang et al., 2022; Jing et al., 2022) have attempted to model the sequential recommendation task as link prediction over dynamic graphs. DGSR (Zhang et al., 2022) is a work that considers generic dynamic graphs over user-item interactions. However, the GNN-based methods described in this section, including DGSR, mostly employ low-pass GNNs, which limits their ability to model complex relations and fundamentally restricts them to local neighborhood interactions (Balcilar et al., 2020).

3 Preliminaries

The Discrete Fourier Transform (DFT) (Sundararajan, 2023) is employed to obtain the frequency representation of a sequence of signal values sampled at equal intervals of time. Consider a signal $x$ sampled at $N$ intervals of time $t \in [0, N - 1]$ to obtain the sequence $\{x_t\}$. The DFT of $x_t$ is then given by $X_k = \sum_{t=0}^{N-1} x_t e^{-i\omega t k}$ with $\omega = \frac{2\pi}{N}$. The transformed sequence $X_k$ gives the values of the signal in the frequency domain. Representing the signal in vector form as $x$, we can define the DFT matrix $\Psi_T$ such that $\hat{x} = \Psi_T x$, where $\hat{x}$ collects the coefficients $X_k$.

The Graph Fourier Transform (GFT) (Ortega et al., 2018) is a generalization of the Discrete Fourier Transform (DFT) to graphs. We represent a graph as $(V, E)$, where $V$ is the set of $N$ nodes and $E$ represents the edges between them. Denote the adjacency matrix by $A$. $D$ is the diagonal degree matrix, defined by $(D)_{ii} = \sum_j (A)_{ij}$. The graph Laplacian is given by $\hat{L} = D - A$, and the normalized Laplacian $L$ is defined as $L = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$.
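A minimal numpy sketch of the DFT matrix $\Psi_T$ just defined; the round-trip check against `np.fft.fft` confirms the convention.

```python
import numpy as np

# DFT matrix Psi_T for N samples: (Psi_T)_{k,t} = exp(-i * (2*pi/N) * t * k),
# so Psi_T @ x matches X_k = sum_t x_t e^{-i w t k} with w = 2*pi/N.
N = 8
idx = np.arange(N)
Psi_T = np.exp(-2j * np.pi * np.outer(idx, idx) / N)

x = np.random.randn(N)
assert np.allclose(Psi_T @ x, np.fft.fft(x))  # Psi_T is exactly the FFT matrix
```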
The Laplacian $L$ has the eigendecomposition $L = \Psi_G^* \Lambda \Psi_G$. Let $X \in \mathbb{R}^{N \times d}$ be the signal on the nodes of the graph. The Graph Fourier Transform $\hat{X}$ of $X$ is then given as $\hat{X} = \Psi_G X$.

Pseudospectrum: The spectrum of a graph (of $N$ nodes) is a finite set consisting of the $N$ points $\lambda$ that form the eigenvalues of the graph's matrix representation $M$, i.e., $\{\lambda \in \mathbb{C} \mid \| (M - \lambda I)^{-1} \| = \infty \}$. Similarly, we can think of the ($\epsilon$-)pseudospectrum of a graph as the larger set (containing these $N$ points) such that $M - \lambda I$ has least singular value at most $\epsilon$. Formally, the pseudospectrum is the set $\{\lambda \in \mathbb{C} \mid \| (M - \lambda I)^{-1} \| \geq \frac{1}{\epsilon} \}$.

Common Notations: We denote by $\oplus$, $\otimes$ the Kronecker sum and product, respectively. $(M)_i^j$ refers to the $i$-th row and $j$-th column of matrix $M$. $\{\cdot\}$ refers to a sequence of elements in time. $\boxtimes$, $\boxplus$ refer to the Kronecker product and sum, respectively, applied timestep-wise.

4 Theoretical Framework: An Optimization Perspective

We begin by striving for a physical interpretation of frequency for dynamic graph systems. For this, we draw inspiration from energy diffusion processes and establish similarities with the variation of signals on static graphs. Consider the graph $G_t$ at time $t$ with node $n_i \in V_t$, and let $n_j \sim_G n_i$ denote the neighbors of $n_i$ at time $t$. We define a directed graph $J_D$ with the graphs at all timesteps taken as is and a directed edge added from a node at time $t - 1$ (modulo $T$) to its corresponding node at time $t$. For a continuous-time dynamic graph, the previous time is represented by $t - dt$ (modulo $T$). Let $X_{n_i,t}$ represent the energy of the signal on node $n_i$ at time $t$. The flow of energy to node $n_i$ at time $t$ can be represented by the divergence of the gradient ($\Delta_{n_i,t}X$) of the energy. We define the variation of the signals at time $t$ and node $n_i$ as follows:
\[
\| \Delta_{n_i,t}X \|_2 = \left[ \sum_{n_j \sim n_i} \left( \frac{\partial X}{\partial e_{n_i,n_j}} \right)^2 \right]^{\frac{1}{2}} = \left[ \sum_{n_j \sim n_i} \left( X_{n_j,t} - X_{n_i,t} \right)^2 + \left( \frac{\partial X_{n_i,t}}{\partial t} \right)^2 \right]^{\frac{1}{2}},
\]
where $\frac{\partial X}{\partial e_{n_i,n_j}}$ is the discrete edge derivative on the collective dynamic graph $J_D$. Taking $\Delta$ to be the finite difference between neighboring nodes in the joint graph, the global notion of variation ($S_p(X)$) can be given by the $p$-Dirichlet form as follows:
\[
S_p(X) = \frac{1}{p} \sum_{i=1}^{N} \int_{t=0}^{T} \| \Delta_{n_i,t}X \|_2^p \, dt = \frac{1}{p} \int_{t=0}^{T} \sum_{i=1}^{N} \left[ \sum_{n_j \sim n_i} \left( X_{n_j,t} - X_{n_i,t} \right)^2 + \left( \frac{\partial X_{n_i,t}}{\partial t} \right)^2 \right]^{\frac{p}{2}} dt.
\]
Define $L_T$ to be the Laplacian of the continuous ring graph representing the nodes at each timestep $t \in [0,T]$ and connecting consecutive nodes. Let $L_{G_t}$ be the Laplacian of the sampled graph at time $t$.
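Before assembling the joint Laplacian, the 2-Dirichlet form above can be sketched in discretized form; the forward difference with wrap-around (modulo $T$) is our discretization of the continuous time derivative, matching the ring construction of $J_D$.

```python
import numpy as np

def dirichlet_2(X, adjs):
    """Discretized S_2(X). X: (N, T) node signals; adjs: list of T adjacency
    matrices (one per timestep)."""
    N, T = X.shape
    s = 0.0
    for t in range(T):
        dX_dt = X[:, (t + 1) % T] - X[:, t]      # temporal edge term (modulo T)
        for i in range(N):
            nbrs = np.nonzero(adjs[t][i])[0]      # neighbors of n_i at time t
            s += np.sum((X[nbrs, t] - X[i, t]) ** 2) + dX_dt[i] ** 2
    return s / 2.0                                # the 1/p factor with p = 2
```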
In the discrete case, the Laplacian $L_{J_D}$ of $J_D$ can be shown to be
\[
(L_{J_D})_i^j = (L_T \otimes I_N)_i^j + (I_T \otimes \{ L_{G_t} \})_i^j = (L_T \oplus \{ L_{G_t} \})_i^j.
\]
For the case of continuous time, this can be generalized to
\[
L_{J_D} = L_T \otimes I_N + [I_T \otimes \{ L_{G_t} \}] = [L_T \oplus \{ L_{G_t} \}],
\]
where, with a slight overload of notation, $\otimes, \oplus$ applied to a sequence $\{\cdot\}$ denote the timestep-wise Kronecker product and sum ($\boxtimes, \boxplus$ above), and $[\cdot]$ refers to the matricization operation. In the discrete case, this operation converts $\mathbb{R}^{N \times N \times T} \rightarrow \mathbb{R}^{NT \times NT}$, ordering from the last dimension first. We can now characterize the variation of signals on $J_D$ similarly to static graphs via the following result:

**Lemma 1.** (Variational Characterization of $J_D$) The 2-Dirichlet form $S_2(X)$ of the signals $X$ on $J_D$ is the quadratic form of the Laplacian $L_{J_D}$ of $J_D$, i.e.,
\[
S_2(X) = \int_{i=0}^{NT} \text{vec}(X)(i) \int_{j=0}^{NT} L_{J_D}(i,j) \, \text{vec}(X)(j) \, di \, dj = \text{vec}(X)^T L_{J_D} \text{vec}(X).
\]

This implies that $L_{J_D} \succeq 0$ since $S_2(X) \geq 0$, which assures us of the existence of the eigenvalue decomposition. Additionally, the value of $S_2(X)$ is lower when the signal changes more slowly along the dynamic graph and higher when the signal changes faster. Hence, we can define a notion of signal variation on the dynamic graph that is similar to the variation of signals on static graphs. Consequently, the eigendecomposition of $L_{J_D}$ characterizes signals on the dynamic graph by projecting them onto the optimizers of $S_2(X)$. This means that high-frequency components of the evolving dynamic graph represent sharply varying signals, whereas smoother signals have a higher magnitude in the low-frequency components. From an optimization perspective, we can view the maximum frequency as the optimal value of the equation below:
\[
f_{\max} = \max_{x,\|x\|=1} \int_{i=0}^{NT} x(i) \int_{j=0}^{NT} L_{J_D}(i,j) x(j) \, di \, dj = \max_{x,\|x\|=1} x^T L_{J_D} x. \quad (4)
\]
The optimal solution $x$ provides the basis for transforming a dynamic graph signal to obtain its maximum frequency component, denoted by $f_{\max}$. We can obtain the next frequency values by optimizing equation 4 in orthogonal directions. However, this approach has an issue: the eigenvalue decomposition would have to be performed over a large number of nodes. In a real-world setting of temporal graphs with $T$ timesteps, this method would have a complexity of $O((NT)^3)$, which would be prohibitive for a large number of timesteps. To address this issue, we relax the objective in equation 4 to include solutions in the pseudospectrum. The solution is presented in the following result, upon which we can formulate a transformation method for temporal graphs.

**Lemma 2.** Consider the variational form $x^T L_{J_D} x = \int_{i=0}^{NT} x(i) \int_{j=0}^{NT} L_{J_D}(i,j) x(j) \, di \, dj$. The optimization problem $f = \min_{x,\|x\|=1} \left[ \| x^T L_{J_D} x - \lambda_s \| - \epsilon \right]_+$ has the optimal solution $y_\omega \otimes z_t^l$, where $\lambda_s$ is the optimal value of equation 4, $y_\omega$ is the $\omega$-th optimal solution of the variational form of the ring graph, $z_t^l$ is the $l$-th optimal solution to the variational form of the graph at time $t$, $[s]_+ = \max(s,0)$, and $\epsilon = O(\delta)$.
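A sketch of the discrete joint Laplacian above, assuming the columnwise vectorization (index $t \cdot N + n$) and $T \geq 3$; building the $NT \times NT$ matrix explicitly illustrates why the direct $O((NT)^3)$ eigendecomposition that Lemma 2 avoids is so costly.

```python
import numpy as np

def joint_laplacian(L_G_list):
    """Discrete L_{J_D}: the time-ring Laplacian Kronecker-multiplied with I_N,
    plus a timestep-wise block diagonal of the per-step graph Laplacians."""
    T, N = len(L_G_list), L_G_list[0].shape[0]
    # Laplacian of the ring graph over T timesteps (assumes T >= 3).
    ring = 2 * np.eye(T) - np.roll(np.eye(T), 1, axis=1) - np.roll(np.eye(T), -1, axis=1)
    L_JD = np.kron(ring, np.eye(N))
    for t, L_G in enumerate(L_G_list):
        L_JD[t * N:(t + 1) * N, t * N:(t + 1) * N] += L_G
    return L_JD

# Lemma 1 identifies vec(X)^T @ L_JD @ vec(X) with the 2-Dirichlet form S_2(X).
```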
5 Constructing an Evolving Graph Fourier Transform

In the previous section, we outlined the theoretical framework for the evolving graph Fourier transform. We also obtained a sketch of the transform as a solution to the optimization problem of the variational characterization with pseudospectrum relaxations, which yields a form that is simple and efficient to compute. In this section, building upon the theoretical framework, we propose our formulation of the Evolving Graph Fourier Transform (EFT).

From Lemma 2, we obtain the orthogonal basis vectors of the desired transform matrix in terms of the Kronecker product of the basis vectors of the Fourier Transform $\Psi_T$ and the Graph Fourier Transform $\Psi_G$. Thus, Lemma 2 lets us define the EFT in terms of the graph and time Fourier transforms:
\[
EFT(f_g, \omega) = \sum_n \Psi_G(f_g, n) \int_{t=0}^{T} f_s(n, t) e^{-j\omega t} \, dt, \quad (5)
\]
where $f_g, \omega$ are the graph and temporal frequency components, respectively, and $f_s(n, t)$ is the signal at node $n$ and time $t$. In terms of the matrix representation, the EFT can be expressed, using Einstein notation (Albert et al., 1916), as a Kronecker product of the DFT and GFT: $(\Psi_D)_i^j = (\Psi_T \otimes \{\Psi_{G_t}\})_i^j$, which, when applied to the columnwise vectorized signal $f_s$, gives the transform in the spectral space.

EFT is one of the solutions in the pseudospectrum of $L_{J_D}$, as shown in Lemma 2. Other solutions also exist; in particular, for $\epsilon = 0$ we obtain the solution given by the exact EVD of $L_{J_D}$. Let $\Psi_{AD}$ be the matrix whose rows form the right eigenvectors of $L_{J_D}$. Since $\Psi_{AD}$ is the absolute decomposition of $L_{J_D}$, we term it AD for brevity. We now give error bounds between $\Psi_D$ and $\Psi_{AD}$.

**Theorem 1.** Considering bounded changes in a graph $G$ with $N$ nodes over time $T$, the norm of the difference between EFT ($\Psi_D$) and AD ($\Psi_{AD}$) is bounded as follows:
\[
\| \Psi_D - \Psi_{AD} \| \leq O\!\left( \left( \frac{N}{T} \right)^{\frac{3}{2}} T \, \varepsilon\big(\omega_{max}, (\Delta \lambda_G)_{min}, (\Delta \lambda_J)_{min}\big) \left( \| \dot{L}_G \| \right)_{max} \right),
\]
where $(\Delta \lambda_G)_{min}$ and $(\Delta \lambda_J)_{min}$ refer to the minimum difference between the eigenvalues of the matrices $L_G$ and $L_{J_D}$, respectively, $\dot{L}_G$ is the rate of change of $L_G$, $\omega_{max} = 2\pi$, and $\varepsilon(\omega_{max}, \Delta \lambda_G, \Delta \lambda_J) = \frac{\omega_{max}}{\Delta \lambda_G} + \frac{\omega_{max}}{\Delta \lambda_J}$.

The above theorem states that as the structure of the graph evolves infinitesimally, the difference between $\Psi_D$ and $\Psi_{AD}$ is bounded from above by the change in the graph matrix representation (Laplacian/adjacency). This property is desirable since it allows us to approximate $\Psi_{AD}$, which is formed by the eigendecomposition of $L_{J_D}$ and has a physical interpretation, using the defined $\Psi_D$, which is easy to compute. The above bound is finite if 1) the rate of change of the graph with time is bounded, and 2) the eigenvalues have multiplicity 1. In such cases, EFT characterizes signals on the dynamic graph by their proximity (projection) to the optimizers of $S_2(X)$ defined in Lemma 1.
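A sketch of the dense $\Psi_D$ under the block interpretation of the timestep-wise Kronecker product: block $(\omega, t)$ of $\Psi_D$ is $\Psi_T[\omega, t] \cdot \Psi_{G_t}$. The explicit construction is for illustration only; a practical implementation never materializes this $NT \times NT$ matrix.

```python
import numpy as np

def eft_matrix(Psi_T, Psi_G_list):
    """Dense Psi_D with (Psi_D)_i^j = (Psi_T (x) {Psi_G_t})_i^j, acting on the
    columnwise vectorization of the signal (index t*N + n)."""
    T = len(Psi_G_list)
    N = Psi_G_list[0].shape[0]
    Psi_D = np.zeros((N * T, N * T), dtype=complex)
    for w in range(T):            # temporal frequency block row
        for t in range(T):        # timestep block column
            Psi_D[w * N:(w + 1) * N, t * N:(t + 1) * N] = Psi_T[w, t] * Psi_G_list[t]
    return Psi_D
```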
The physical implication is that, applying EFT, the high-frequency components correspond to sharply varying signals on the dynamic graph, while low-frequency components correspond to smoother signals. Hence, the norm of the difference between EFT and AD is bounded from above by the rate of evolution of the graphs. For computational purposes in real-world applications, the sampled form of EFT can be obtained by sampling $T$ snapshots of the dynamic graph signal at uniform time intervals. We thus get a dynamic graph $\{(V_t, E_t)\}, t \in \{0, \ldots, T\}$, whose edges $E_t$ by definition evolve with time. We consider the node set $V$ to be fixed, i.e., no new nodes are added. All the nodes ($|V| = N$) are known from the start, and the graph may contain isolated nodes. In the case of node additions, we could create dummy isolated nodes with varying node signals and edge connectivity information. Without loss of generality, consider a 1-dimensional temporal signal, uniformly sampled at $T$ intervals, residing on the graph nodes. Let $X \in \mathbb{R}^{N \times T}$ represent the temporal signal on the graph nodes. The Fourier transform (DFT) (with DFT matrix $\Psi_T$), taken independently for each node, is $DFT(X) = X \Psi_T^\top$. Further, the GFT for the graph $G_t \equiv (V_t, E_t)$ at time $t$ is given as $GFT(X_t) = \Psi_{G_t} X_t$, where $X_t \in \mathbb{R}^N$ is the signal on the graph nodes at time $t$. To compute the dynamic graph transform along the graph domain as well as the temporal dimension, we perform both operations collectively. Consider $\{\Psi_{G_t}\} \in \mathbb{R}^{N \times N \times T}$ as the tensor containing the graph Fourier basis at each timestep. Then, using Einstein notation (Albert et al., 1916), we write EFT as
\[
(EFT(\{G_t\}; X))_i^j = (\Psi_{G_k} X)_i^{\,k} \, (\Psi_T^\top)_k^{\,j},
\]
where $i, j, k$ are tensor indices. Next, we aim to define a transformation matrix for EFT, as in DFT and GFT. For this we make use of the Kronecker product ($\otimes$) between tensors. We then get the matrix form of EFT as the following expression:
\[
(EFT(\{G_t\}; X))_i^j = (\hat{X}_G)_i^j = (\Psi_{G_k} X)_i^{\,k} \, (\Psi_T^\top)_k^{\,j} = (\Psi_T \otimes \{\Psi_{G_t}\})_{(j*N+i)}^{\,k} \, x_k. \quad (6)
\]
Thus, we have $\hat{x}_{j*N+i} = (\Psi_T \otimes \{\Psi_{G_t}\})_{(j*N+i)}^{\,k} \, x_k$, or $\hat{x} = \Psi_D x$. In the above equations, $\hat{X}_G$ is the EFT of the signal $X$ over the dynamic graph $\{G_t\}$, $x, \hat{x} \in \mathbb{R}^{NT}$ are the columnwise vectorized forms of $X, \hat{X}_G \in \mathbb{R}^{N \times T}$, and the block index $m = \left\lfloor \frac{k}{N} \right\rfloor$ selects the timestep of the graph basis $\Psi_{G_m}$. $\Psi_D \in \mathbb{R}^{NT \times NT}$ is the EFT matrix over the dynamic graph $\{G_t\}$, with $(\Psi_D)_i^j = (\Psi_T \otimes \{\Psi_{G_t}\})_i^j$.

We remark from equation 6 of EFT that the following desirable properties (over the exact eigendecomposition of the joint Laplacian) are satisfied: 1) The equation imparts interpretability to the frequency components, i.e., whether they belong to the time or vertex domain, as compared to the exact eigendecomposition. This is possible because we are able to decompose the transform into the individual transforms of each domain. 2) The transform equation is computationally efficient compared to the exact eigendecomposition of the joint Laplacian. Specifically, EFT reduces the computational complexity for the dynamic graph ($T$ timesteps) from a factor of $O(T^3)$ to $O(T + T \log(T))$. Having derived the EFT transform, we state and prove its properties in Appendix C.
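A minimal sketch of the sampled transform and its inverse, assuming real, orthonormal per-timestep graph bases (true for symmetric Laplacians). The per-timestep GFT followed by an FFT along time is exactly the decomposition Eq. (6) exploits for efficiency.

```python
import numpy as np

def gft_bases(L_G_list):
    """Per-timestep GFT bases: rows of Psi_{G_t} are the eigenvectors of the
    (symmetric) Laplacian at time t, as in the preliminaries."""
    return [np.linalg.eigh(L)[1].T for L in L_G_list]

def eft(X, Psi_G_list):
    """Sampled EFT: per-timestep GFT along the vertex dimension, then an FFT
    along time. X: (N, T). Costs T graph transforms plus N length-T FFTs,
    instead of an O((NT)^3) joint eigendecomposition."""
    Xg = np.stack([Psi_G_list[t] @ X[:, t] for t in range(X.shape[1])], axis=1)
    return np.fft.fft(Xg, axis=1)

def ieft(X_hat, Psi_G_list):
    """Inverse EFT: IFFT along time, then Psi_{G_t}^T at each timestep."""
    Xg = np.fft.ifft(X_hat, axis=1)
    return np.stack([Psi_G_list[t].T @ Xg[:, t]
                     for t in range(X_hat.shape[1])], axis=1).real
```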
The relationship between EFT and the other transforms is illustrated in Figure 1. The figure shows the transforms (GFT, JFT, DFT, EFT) arranged in a circle, and an arrow from one transform to another indicates that the source transform can be obtained from the destination transform using the simulation annotated on the edge. For example, the GFT of a ring graph ($T$) gives the DFT, and thus the DFT can be simulated by the GFT using the graph $T$. Similarly, the DFT can be simulated by EFT when the number of nodes is $N = 1$. Also, the GFT of the temporal ring of a static graph (topologically equivalent to a torus), where the nodes and edges remain constant with time, gives the EFT, and vice versa (when $T = 1$). However, when the graph structure changes with time, the GFT cannot be used to simulate EFT. Thus, we can also view EFT as a generalization of the previous transforms. We briefly explain the task-specific implementation of these modules in the subsection below and focus on the representations and results in the following sections.

5.1 Implementation Details

Having obtained the representations using the proposed transform, we intend to perform filtering in the spectral space of dynamic graphs. Since our idea is to perform collective filtering along the vertex and temporal domains in EFT, we need two modules to compute $\Psi_{G_t}$ (vertex aspect) and $\Psi_T$ (temporal aspect), respectively, in equation 6 of EFT. We briefly explain these modules below, with details in Appendix D.2.

Filtering along the Vertex Domain: This module computes the convolution matrix $\Psi_{G_t}$ in equation 6. The frequency response of the desired filter is approximated as $\hat{\Lambda}_t = \sum_{k=0}^{O_f} c_k T_k(\hat{\Lambda})$, where $O_f$ is the polynomial/filter order, $T_k$ is the Chebyshev polynomial basis, $\hat{\Lambda} = \frac{2\Lambda}{\lambda_{max}} - I$, $\lambda_{max}$ is the maximum eigenvalue, and $c_k$ are the corresponding filter coefficients. The convolution of the graph signal $X$ with the filter ($X \ast \hat{\Lambda}_t$) gives the desired filter response in the vertex domain.

Filtering along the Temporal Domain: After performing filtering in the vertex domain, we filter the temporal signals using $\Psi_T$ as in equation 6. Formally, let $X_t \in \mathbb{R}^d$ be the signal of a node at time $t$, and let $X = \{X_t\} \in \mathbb{R}^{T \times d}$ be the time-ordered matrix of embeddings of the node. This is converted to the frequency domain ($\hat{X} \in \mathbb{R}^{T \times d}$) using the matrix $\Psi_T$ as $\hat{X} = \Psi_T X$. We then multiply $\hat{X}$ element-wise by a temporal filter $F_T \in \mathbb{R}^{T \times d}$ to obtain the filtered signal $\hat{X}_f = F_T \odot \hat{X}$, which is converted back to the temporal domain using the inverse transform $\Psi_T^*$ to get $X_f = \Psi_T^* \hat{X}_f$. $X_f$ is the filtered signal in the time-vertex domain of the dynamic graph.

6 Experimental Setup

Model Implementation and Datasets: Since EFT is a spectral transform, we need a base model in which to induce it. We select a transformer as the base model, inspired by (Zhou et al., 2022; Bastos et al., 2022), which induce learnable filters into a vanilla transformer for downstream tasks (our implementation follows (Zhou et al., 2022); details are in the appendix). To illustrate the efficacy of the representations obtained from EFT, we consider eight datasets. We name our model EFT-T.
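EFT-T's two filtering modules can be sketched as below; this is a minimal fixed-coefficient version, whereas in the actual model the responses are learnable, and the function names are ours.

```python
import numpy as np

def chebyshev_vertex_filter(L, X, coeffs, lam_max=2.0):
    """Vertex-domain filter of Section 5.1: sum_k c_k T_k(L_hat) X evaluated
    with the Chebyshev recurrence T_k = 2 L_hat T_{k-1} - T_{k-2}, so no
    eigendecomposition is needed. lam_max = 2 is the usual bound for the
    normalized Laplacian (an assumption)."""
    L_hat = (2.0 / lam_max) * L - np.eye(L.shape[0])
    Tk_prev, Tk = X, L_hat @ X                 # T_0 X and T_1 X
    out = coeffs[0] * Tk_prev + (coeffs[1] * Tk if len(coeffs) > 1 else 0)
    for c in coeffs[2:]:
        Tk_prev, Tk = Tk, 2.0 * (L_hat @ Tk) - Tk_prev
        out = out + c * Tk
    return out

def temporal_filter(X, F_T):
    """Temporal filter: Psi_T X, element-wise response F_T, then Psi_T^*.
    X and F_T have shape (T, d); the FFT realizes Psi_T."""
    return np.fft.ifft(F_T * np.fft.fft(X, axis=0), axis=0).real
```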
The first three (Amazon Beauty, Games, and CDs in Table 3) are large continuous-time dynamic graph datasets from the sequential recommendation (SR) setting (Huang et al., 2023), spread over two decades. We inherit these datasets, the dynamic graph construction process in the SR setting, and the metrics from (Zhang et al., 2022). The other datasets (Pareja et al., 2020) (UCI, AS, SBM, Elliptic, Brain) are standard (discrete) dynamic graph datasets used to assess the generalizability of our method and contain sequences of time-ordered graphs. Details on the datasets, metrics, and experiment settings are in the appendix (cf. Table 4). The experiment code and associated datasets are on GitHub: https://github.com/ansonb/EFT.

Baselines: We use baselines depending on the experiment setting for fairness. For SR link prediction, we use strong baselines from the previous best (Zhang et al., 2022): BPR-MF (Rendle et al., 2009), FPMC (Rendle et al., 2010), GRU4Rec+ (Hidasi & Karatzoglou, 2018), Caser (Tang & Wang, 2018), SASRec (Kang & McAuley, 2018), HGN (Ma et al., 2019), TiSASRec (Li et al., 2020a), SRGNN (Wu et al., 2019), HyperRec (Wang et al., 2020), FMLPRec (Zhou et al., 2022), and DGSR (Zhang et al., 2022). For link prediction and node classification on discrete dynamic graph datasets, we rely on state-of-the-art approaches for this setting (Xiang et al., 2022): GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GCN-GRU (Pareja et al., 2020), DynGEM (Goyal et al., 2017), GAEN (Shi et al., 2021), EvolveGCN (Pareja et al., 2020), and dyngraph2vec (dg2vec) (Goyal et al., 2020).

7 Results and Discussion

This section reports the various experimental results supporting our theoretical contributions.

Denoising and reconstruction on a synthetic dataset with perturbation: Here, we study whether EFT can filter noise out of a dynamic graph better than DFT (Sundararajan, 2023) and GFT (Ortega et al., 2018). The graphs are generated by sampling edge weights from a random normal distribution and evolved by perturbing the edge weights of the previous timestep. The graph signals are sampled from the eigenvectors of the graphs at each timestep, while the temporal signals are sampled from a sinusoid. To add an element of complexity and realism, noise is induced along both the graph vertex and time signals (details in Appendix D). As a result, the dynamic graph signals evolve with time while being corrupted by noise along both dimensions. We hypothesize that using EFT, which transforms collectively across the time and vertex dimensions, will result in better denoising and signal reconstruction than GFT or DFT, which filter along only one dimension. Our hypothesis is confirmed in Figure 2, which shows a decrease in error as the spectral energy of the signal is preserved while noise is filtered out. Moreover, EFT yields results comparable to the absolute transform (AD) while requiring fewer computational resources.

Compactness of EFT: Compaction refers to the ability of the transform to summarize the data compactly. A transform with good compaction is desirable, as it summarizes the signals well in the frequency components, which can then be processed efficiently by downstream models. In this experiment, we verify the compaction properties of the proposed transform for the time-vertex frequencies on the temporal mesh graphs (Grassi et al., 2017), relative to GFT and DFT. To test this, we remove a varying percentile of the frequency components from the transformed frequency domain of the signal $X$.
We then apply the inverse transform to obtain the signal $X_r$ and plot the error $\frac{\|X - X_r\|_p}{\|X\|_p}$ against the percentile of components removed. From Figures 3a and 3b, we can see that EFT has lower error and better compaction and is thus able to summarize the data better than the baselines, which transform along only a single dimension (vertex or time).

Table 1: For link prediction on large temporal graphs in the sequential recommendation setting, the table compares our model (EFT-T) on the Recall@10 and NDCG@10 metrics. The best results are shown in boldface and the second best in italics. The improvement of our method over the best-performing baseline is statistically significant with p < 0.05.

| Metric | Dataset | GRU4Rec+ | Caser | SASRec | HGN | TiSASRec | FMLPRec | SRGNN | HyperRec | DGSR | EFT-T |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Recall@10 | Beauty | 43.98 | 42.64 | 48.54 | 48.63 | 46.87 | 47.47 | 48.62 | 34.71 | *52.40* | **53.23** |
| | Games | 67.15 | 68.83 | 73.98 | 71.42 | 71.85 | 73.62 | 73.49 | 71.24 | *75.57* | **77.78** |
| | CDs | 67.84 | 61.65 | 71.32 | 71.42 | 71.00 | *72.41* | 69.63 | 71.02 | 72.33 | **75.42** |
| NDCG@10 | Beauty | 26.42 | 25.47 | 32.19 | 32.47 | 30.45 | 32.38 | 32.33 | 23.26 | *35.90* | **37.10** |
| | Games | 45.64 | 45.93 | 53.60 | 49.34 | 50.19 | 51.26 | 53.35 | 48.96 | *55.70* | **58.65** |
| | CDs | 44.32 | 45.85 | 49.23 | 49.34 | 48.97 | *53.31* | 48.93 | 47.16 | 51.22 | **54.99** |

Figure 3: Representations on dynamic mesh datasets. Left (a, b): Reconstruction error on the datasets, illustrating the compactness of EFT. Right (c): Illustration of filtering using EFT on the dynamic mesh of a dancer.

For the dancer mesh (Figure 3c), we apply two filters: one that jointly attenuates the high-frequency components of the dynamic graph, and a fluid filter whose frequency response is described in Eq. (19) of Grassi et al. (2017). The former filter gives us the frame of the mesh with stiff manoeuvres, whereas the fluid filter produces fluid movements. This experiment shows that EFT can enhance frequency components non-linearly, and it also hints at why EFT performs better on evolving temporal graphs in the subsequent experiments.

Performance comparison on (continuous) large-scale temporal graph datasets: The results on the large-scale SR datasets are in Table 1; EFT-T outperforms the baselines on all datasets. We note that our gains over the best baseline are highest on CDs, followed by Games and Beauty. We observe that as the density of the graph and the length of the sequences in the data increase (e.g., the CDs dataset), the performance of EFT-T improves. We believe that as graph density increases, higher-order connections may encompass noisy relations, a challenge conventional baselines struggle to filter out, whereas our method handles this noise effectively. EFT also captures global interactions effectively, as it considers the temporal aspect in the collective filtering module. Furthermore, compared to the FMLPRec model, which induces the DFT into a transformer, EFT-T performs significantly better, confirming the necessity of capturing the evolving spectra of temporal graphs. We also note that, among the graph-based methods, SRGNN only considers connectivity information from the sequence graph, whereas HyperRec uses higher-order connectivity information. This indicates that not using the graph information effectively hampers performance, but using higher-order connectivity without filtering out noise also degrades the results.
Table 2: Results for link prediction (UCI, SBM, AS) and node classification (Brn, Ell) tasks. The best values are in bold and the second best in italics.

| Method | SBM MAP | SBM MRR | UCI MAP | UCI MRR | AS MAP | AS MRR | Ell F1 | Brn F1 |
|---|---|---|---|---|---|---|---|---|
| GCN-GAT | 0.189 | 0.014 | 0.000 | 0.047 | 0.002 | 0.181 | 0.434 | 0.232 |
| DynGEM | 0.168 | 0.014 | 0.021 | 0.106 | 0.053 | 0.103 | 0.502 | 0.225 |
| GCN-GRU | 0.180 | 0.008 | 0.004 | 0.058 | 0.033 | 0.070 | 0.464 | 0.191 |
| dg2vec | 0.098 | 0.008 | 0.004 | 0.054 | 0.033 | 0.070 | 0.442 | 0.215 |
| dg2vec v2 | 0.159 | 0.012 | 0.020 | 0.071 | 0.071 | 0.049 | | |
| GAEN | 0.1828 | 0.008 | 0.000 | 0.049 | 0.130 | 0.051 | 0.492 | 0.205 |
| EvolveGCN-H | 0.195 | 0.014 | 0.013 | 0.090 | 0.153 | 0.363 | 0.391 | 0.225 |
| EvolveGCN-O | *0.200* | 0.014 | 0.027 | 0.138 | 0.114 | 0.275 | *0.544* | 0.192 |
| LED-GCN | 0.196 | *0.015* | *0.032* | *0.163* | 0.193 | *0.469* | 0.471 | *0.261* |
| LED-GAT | 0.182 | 0.012 | 0.026 | 0.149 | *0.233* | 0.384 | 0.503 | 0.150 |
| EFT-T | **0.250** | **0.024** | **0.055** | **0.181** | **0.672** | **0.689** | **0.616** | **0.308** |

Performance comparison on discrete temporal graph datasets: Table 2 summarizes the link prediction and node classification results. Across datasets, our model significantly outperforms all baselines, which focus on learning local dependencies. This illustrates our framework's effectiveness in filtering noise and amplifying useful signals in evolving temporal graphs.

Effectiveness of the filtering module (Figure 4): Our approach focuses on collectively capturing useful frequencies along the vertex and time dimensions while filtering out noise. Hence, in this experiment, we aim to understand the effectiveness of the filters along both the graph (vertex) and time dimensions in the presence of explicitly added noise. First, we induce semantic noise into the system by adding a random vector (sampled from a normal distribution) to the node embeddings. Then, we run experiments on our model with and without learnable collective graph-time filters. To ensure a fair comparison, we keep the parameters of both models the same and simulate the no-filter configuration by using a uniform frequency response (an all-pass filter). In the presence of noise, the performance of the configuration with filters is much better ($p < 0.01$) than that without any filtering. Next, we induce structural noise into the system by adding random nodes/edges. We observe that under structural noise, the performance of the configuration with graph filters is statistically better ($p < 0.01$ using a paired t-test) than the one without, confirming that collective filtering is needed to be robust to structural noise in dynamic graphs. Additionally, we plot the filter frequency responses of EFT on the Games and CDs datasets in Figure 5. The figure shows a dominant low-frequency response together with some higher-frequency components, indicating global aggregation for the long-range interactions.

8 Conclusion

In this paper, we introduced a novel, theoretically grounded approach to transform temporal graphs into the frequency domain. We propose pseudospectrum relaxations of the variational objective to obtain a simplified transformation, making it computationally efficient for real-world applications. We show that the error between the proposed transform and the exact solution to the variational objective is bounded from above, and we study its properties.
8 Conclusion

In this paper, we introduce a novel approach to transform temporal graphs into the frequency domain, grounded on theoretical foundations. We propose pseudospectrum relaxations of the variational objective to obtain a simplified transformation that is computationally efficient for real-world applications. We show that the error between the proposed transform and the exact solution of the variational objective is bounded from above, and we study its properties. We further demonstrate the practical effectiveness of the transform on temporal graphs. In the current scope, we do not consider generic signed and directed graphs; we suggest that future work explore generalizing the Laplacian and the resulting transform to such graphs, leveraging techniques proposed in (Mercado et al., 2016; Cucuringu et al., 2021). Our work opens up new possibilities for dynamic graph analysis and representation learning, and we encourage researchers to explore the potential of EFT as a spectral representation of the evolving graph in downstream graph representation learning models.

REFERENCES

Albert Einstein, W. Perrett, and G. Jeffery. The foundation of the general theory of relativity. *Ann. Der Phys*, 49:769–822, 1916.

Muhammet Balcilar, Guillaume Renton, Pierre Héroux, Benoit Gaüzère, Sébastien Adam, and Paul Honeine. Analyzing the expressive power of graph neural networks in a spectral perspective. In *International Conference on Learning Representations*, 2020.

Nikolaos Bastas, Theodoros Semertzidis, Apostolos Axenopoulos, and Petros Daras. evolve2vec: Learning network representations using temporal unfolding. In *MultiMedia Modeling: 25th International Conference, MMM 2019, Thessaloniki, Greece, January 8–11, 2019, Proceedings, Part I*, pp. 447–458. Springer, 2019.

Anson Bastos, Abhishek Nadgeri, Kuldeep Singh, Hiroki Kanezashi, Toyotaro Suzumura, and Isaiah Onando Mulang'. How expressive are transformers in spectral domain for graphs? *Transactions on Machine Learning Research*, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=aRsLetumx1.

Anson Bastos, Abhishek Nadgeri, Kuldeep Singh, Toyotaro Suzumura, and Manish Singh. Learnable spectral wavelets on dynamic graphs to capture global interactions. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 6779–6787, 2023.

Jonathan M. Blackledge. Chapter 2 - 2D Fourier theory. In *Digital Image Processing*, Woodhead Publishing Series in Electronic and Optical Materials, pp. 30–49. Woodhead Publishing, 2005.

Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, et al. Spectral temporal graph neural network for multivariate time-series forecasting. *Advances in Neural Information Processing Systems*, 33:17766–17778, 2020.

Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, and Qi Zhang. Spectral temporal graph neural network for multivariate time-series forecasting, 2021.

Jinyin Chen, Xueke Wang, and Xuanheng Xu. GC-LSTM: Graph convolution embedded LSTM for dynamic network link prediction. *Applied Intelligence*, pp. 1–16, 2022.

Cheng Cheng, Yang Chen, Yeon Ju Lee, and Qiyu Sun. SVD-based graph Fourier transforms on directed product graphs. *IEEE Transactions on Signal and Information Processing over Networks*, 9:531–541, 2023. doi: 10.1109/TSIPN.2023.3299511.

Mihai Cucuringu, Apoorv Vikram Singh, Déborah Sulem, and Hemant Tyagi. Regularized spectral methods for clustering signed networks. *The Journal of Machine Learning Research*, 22(1):12057–12135, 2021.

Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. In *International Conference on Learning Representations*, 2020.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering.
*Advances in neural information processing systems*, 29:3844–3852, 2016. Saul I. Gass and Michael C. Fu (eds.). *Karush-Kuhn-Tucker (KKT) Conditions*, pp. 833–834. Springer US, Boston, MA, 2013. ISBN 978-1-4419-1153-7. Palash Goyal, Nitin Kamra, Xinran He, and Yan Liu. Dyngem: Deep embedding method for dynamic graphs. *IJCAI Workshop on Representation Learning for Graphs*, 2017. Palash Goyal, Sujit Rokka Chhetri, and Arquimedes Canedo. dygraph2vec: Capturing network dynamics using dynamic graph representation learning. *Knowl. Based Syst.*, 187, 2020. Francesco Grassi, Andreas Loukas, Nathanaël Perraudin, and Benjamin Ricaud. A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs. *IEEE Transactions on Signal Processing*, 66(3):817–829, 2017.
vzvCaYFTLq
Currently, only the V100 is considered as the target device. However, newer GPU generations are developing rapidly and provide more effective support for lower-bit inference and reduced memory consumption (e.g., H100/A100).
SAPLING: SUCCESSIVE ADAPTATION AND COMPRESSION WITH LAYER DROPPING FOR LLMs

Anonymous authors
Paper under double-blind review

ABSTRACT

Specializing large language models (LLMs) for local deployment and domain-specific use can deliver state-of-the-art performance while meeting latency and privacy requirements. However, conventional task-specific adaptation yields neither memory savings nor inference speedup at deployment time, and practical compression techniques like quantization and pruning require hardware support or system optimization to achieve measured inference speedup. We propose Sapling, which retains LLMs' capacity in a specific knowledge domain and achieves inference speedup on any hardware and deep learning system by reducing model depth. Sapling is based on the knowledge localization phenomenon we empirically observe and verify on LLMs, and achieves model compression via successive layer dropping. We evaluated Sapling on LLaMA-7B. At inference time, models adapted on medical, legal, and financial datasets all demonstrate reliable performance and comparable memory savings, with 1.2 to $8.5\times$ inference speedup on consumer-level hardware compared to state-of-the-art quantization algorithms, depending on how well those algorithms are supported by efficient accelerator kernels.

1 INTRODUCTION

Large language models (LLMs) are gaining prominence, with growing interest in specializing them for specific domains like medicine (Thirunavukarasu et al., 2023), law (Yue et al., 2023), and finance (Wu et al., 2023b), and in deploying them locally to address latency and privacy concerns in sensitive-data use cases. For example, understaffed clinics can benefit from deploying medically specialized LLM-based chatbots on local devices. However, the sheer amount of memory and computation required for inference presents a significant barrier to deploying specialized LLMs in such resource-limited scenarios.

Post-training quantization (PTQ) is a primary technique for fitting LLMs into resource-limited inference environments, reducing the bit precision of LLMs' weights to as low as 4 or even 3 bits without significantly degrading model performance. However, to translate theoretical speedup into wall-clock speedup, most PTQ methods (Dettmers et al., 2022; Xiao et al., 2023; Frantar et al., 2022; Lin et al., 2023) require efficient kernels and even additional support from hardware vendors to provide the corresponding quantized computational operators, which, unfortunately, is not easily accessible. Consequently, incorporating the latest quantization techniques in practice often slows down model inference, as evidenced in Table 1, with the exception of AWQ (Lin et al., 2023), which is equipped with a decoding implementation that supports quantized weights. Similar results were observed for many post-training LLM pruning algorithms, such as Kwon et al. (2022), Frantar & Alistarh (2023a), and Sun et al. (2023), which require hardware support for unstructured and structured sparse tensor operations.

In light of these limitations, this paper explores a new way of compressing LLMs. We are motivated by recent findings about knowledge localization (Meng et al., 2022b; Li et al., 2023) in LLMs. In particular, knowledge localization shows that middle layers in LLMs contribute more to the domain-specific knowledge generation process (Meng et al., 2022a; Azaria & Mitchell, 2023).
Within each layer, attention modules are more likely to extract general semantic correlations, while MLP layers are more task-specific (Geva et al., 2020). Inspired by this phenomenon, we hypothesize that each decoder block, especially its MLP layer, carries different weight for different knowledge domains. By dropping less important layers during fine-tuning, we aim to strike a balance between memory footprint, inference speed, and domain-specific performance with a shallower specialized LLM.

To validate this hypothesis, we conducted extensive layer-dropping experiments on domain-specific datasets (Pal et al., 2022; Chalkidis et al., 2021; Maia et al., 2023), in which we drop one insignificant layer after each epoch of fine-tuning. The layer-dropping results in Figure 1a indicate that up to 60% of the parameters can be dropped without significant performance degradation. On the other hand, models specialized to one domain via layer dropping show significantly compromised performance on a different domain. This verifies our hypothesis that different layers of a pre-trained LLM store different domain knowledge.

Building on these findings, we introduce Sapling, a model compression framework employing successive layer dropping, capable of compressing LLMs to more than 50% of their original size while preserving their domain-specific performance. Sapling uses a calibration dataset to identify and drop the least significant layer after each iteration. We also developed a sparse update scheme that trains only the most important layers while ignoring those that may eventually be dropped. LLMs pruned via Sapling show ML performance on domain tasks comparable to the fine-tuned full model, with far fewer parameters, and hence significantly decreased memory and FLOP requirements at inference. Unlike PTQ or existing pruning methods, Sapling neither alters precision nor introduces sparse computation, so it does not depend on specialized kernels. Since Sapling is performed during fine-tuning, it is orthogonal to other model compression techniques.

The key contributions of this paper are: (1) We observe and empirically verify the layer-wise knowledge localization phenomenon in contemporary LLMs. (2) We design Sapling, a new approach for model compression that prunes LLMs during fine-tuning by discovering and removing unimportant layers. (3) We show Sapling achieves $>2\times$ memory saving and $>2\times$ inference speedup compared with the full-size model on medical, legal, and financial domain-specific datasets, and realizes $1.2-8.5\times$ inference speedup over baseline quantization and pruning approaches. As a side benefit, Sapling offers a flexible "continuum" of target model sizes compared to other compression methods.

2 RELATED WORK

Task-specific adaptation. A typical workflow for task-specific adaptation is to first fine-tune (Wu et al., 2023a; Yang et al., 2023; Huang et al., 2023b,a) or even pre-train (Wu et al., 2023b; Cui et al., 2023; Shah et al., 2023) LLMs on task-specific datasets before applying one of three model compression techniques for reliable performance during inference: quantization, distillation, or pruning. In our case, we adopt layer dropping to compress the model step by step during fine-tuning, i.e., we adapt LLMs to domain-specific tasks by identifying and retaining the layers important for the target domain.
**Quantization** effectively mitigates memory consumption by reducing the bit-widths of LLMs' weights and activations, and has demonstrated the ability to retain an LLM's zero-shot ability with measured memory savings and theoretical speedup.

Table 1. Deployment-time model inference overhead breakdown (LLaMA-7B, single V100 GPU, sequence length 512, batch size 1). The Overhead entry refers to the cost of running the corresponding model compression algorithm after fine-tuning. The Final Mem entry refers to the ratio of the final compressed model size to the original model size in memory.

| Techniques | Overhead (s) | Inference Throughput (tokens/s) | Final Mem |
|---------------------|--------------|----------------------------------|-----------|
| FP16 | N/A | 16.6 | 100% |
| LLM.int8() | 57.3 | 4.1 | ≥ 50% |
| GPTQ-int4 | 371.5 | 7.2 | > 25% |
| AWQ-int4 | 542.9 | 29.3 | > 25% |
| Sparse-GPT (2:4) | 215.4 | 21.2 | 100% |
| Masked Pruning | 253.2 | 17.7 | 100% |
| Activation-based Pruning | 0.54 | 16.1 | 100% |
| Sapling (40%) | N/A | 34.9 | ≥ 40% |

The state-of-the-art quantization algorithms (Dettmers et al., 2022; Xiao et al., 2023) require implementations of efficient kernels whose efficiency relies on hardware support. To realize measured inference speedup, a decoding implementation for the specific quantization format is required (Dettmers et al., 2023; Lin et al., 2023). Sapling, in contrast, does not depend on specialized kernels; it makes the model more efficient by reducing its depth, so the performance gain generalizes to any hardware.

**Pruning** aims to remove unimportant weights to reduce FLOPs. The latest post-training pruning algorithms for LLMs focus on unstructured sparsity at the neuron or attention-head level (Liu et al., 2023; Sun et al., 2023; Frantar & Alistarh, 2023b) and need efficient kernels and hardware support for the corresponding sparsity patterns, without which it is hard to achieve measured efficiency improvements. Sapling again requires none.

**Layer-dropping**, on the other hand, exploits the layer-wise memory-retrieval pattern that we call layer-wise specialization. Prior work examines layer-wise specialization by investigating the effect of layer dropping before fine-tuning a foundation model on downstream data (Sajjad et al., 2023) or during the pre-training stage (Zhang & He, 2020), which accelerates training with layer dropping. Sapling conducts layer dropping during fine-tuning, reducing model size and adapting the model to a specialized task simultaneously.

**Knowledge localization.** At layer-wise granularity, evidence (Meng et al., 2022b; Frantar & Alistarh, 2023a) shows that middle decoder blocks in LLMs contribute more to the domain-knowledge generation process, while initial blocks extract low-level information (shallow patterns) and the last few blocks capture semantic patterns for next-token generation (Azaria & Mitchell, 2023). Within each decoder block, experiments (Geva et al., 2020; Meng et al., 2022a) show that MLP layers are most responsible for task-specific memory retrieval and factual association. The attention layers, in contrast, capture semantic correlations among all input tokens and are therefore less specialized (Shaw et al., 2018). Sapling leverages the different roles MLP and self-attention layers play to localize and drop the least significant layer.
## 3 Method

In this section, we begin by presenting our hypothesis and empirical evidence concerning the existence of layer-wise specialization for various downstream tasks in §3.1, as well as evidence that LLMs retain task-specific performance during fine-tuning as long as the more important layers are trained and updated. These insights, inspired by knowledge localization, inform the overarching fine-tuning framework detailed in §3.2, which utilizes successive layer dropping to make specialized LLMs shallower and more efficient. §3.3 introduces two target selection algorithms; several metrics are discussed and analyzed as "importance" scores for choosing which attention or MLP layer to drop. The comprehensive algorithm is outlined in Algorithm 1.

### 3.1 Preliminaries and Layer-Wise Specialization

Auto-regressive language models are composed of a decoder-only architecture, where each decoder block consists of one multi-head attention (MHA) layer and one MLP layer. Based on observations and findings from previous studies on knowledge localization, as described in Section 2, there is increasing evidence of a task-dependent memory-retrieval pattern at layer-wise granularity, which we call layer-wise specialization.

Algorithm 1 Sapling.

1: **Input:** Training data \( x \in X \) for the domain-specific task; pre-trained LLM \( f(\cdot) \) with parameters \( \theta \); training function \( F(\cdot) \) that optimizes some objective \( \ell \); importance score metric \( s \); sparse update ratio \( r \); accuracy thresholding function \( C_a(a_i) \) or efficiency thresholding function \( C_e(M_i, T_i) \), where \( a_i \), \( M_i \), and \( T_i \) are the model's accuracy, memory consumption, and latency after the \( i \)-th layer is dropped; buffers for the sets \( A_X \) and \( M_X \) in Hypothesis 1.
2: \( i \leftarrow 0, A_X \leftarrow \emptyset, M_X \leftarrow \emptyset, U_X := A_X \cup M_X, \theta_0 \leftarrow \theta; \)
3: \( G_{U_X} = f(\cdot), n \leftarrow \text{total number of layers in } f(\cdot); \)
4: **Sparse update:** Calculate the initial \( s_i \) for each layer; freeze layers in accordance with \( r \);
5: Choose a thresholding function \( C(\cdot) \in \{C_a, C_e\} \) that decides whether to exit;
6: **while not** \( C(\cdot) \) **do**
7: Run the training function to update the set of all trainable parameters, \( F(\cdot) : \theta_i \rightarrow \theta'_i; \)
8: \( m \leftarrow 0, U \leftarrow \emptyset; \)
9: **while** \( m \neq n \) **do**
10: Calculate the layer-wise importance score \( s_m \) and append it to \( U \);
11: \( m \leftarrow m + 1; \)
12: **end while**
13: Choose the layer to drop with index \( m \) s.t. \( s_m = \min(U) \), and append layer \( m \) to \( U_X \);
14: Remove its parameters: \( \theta'_i \rightarrow \theta'_{i+1}; \)
15: Remove layer \( m \) and update the model: \( G_{U_{X_i}} \rightarrow G_{U_{X_{i+1}}}; \)
16: **end while**
17: **return** \( G_{U_X} \)
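A minimal Python sketch of Algorithm 1's main loop is given below. It is our illustration under simplifying assumptions (a hypothetical `model.layers` list of MHA/MLP sub-layers, a `train_one_epoch` routine, an `importance_scores` function implementing §3.3, and a stopping criterion `stop` implementing \( C(\cdot) \)), not the authors' released implementation.

```python
def sapling(model, train_one_epoch, importance_scores, stop):
    """Successive layer dropping (sketch of Algorithm 1)."""
    while not stop(model):
        train_one_epoch(model)                 # one epoch of (sparse) fine-tuning
        scores = importance_scores(model)      # one score per remaining layer (Sec. 3.3)
        worst = min(range(len(scores)), key=scores.__getitem__)
        del model.layers[worst]                # drop the least important sub-layer
    return model
```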
Formally, consider a pre-trained model \( f(x;\theta) \), where \( x \in \mathbb{R}^{s \times n} \) is an input sequence with sequence length \( s \) and embedding dimension \( n \), and \( \theta \in \mathbb{R}^D \) is a parameter vector that parameterizes \( f(\cdot) \) with a total parameter size of \( D \). We consider layernorm to be part of the MHA and MLP layers, along with the residual connections, with each layer indexed by \( i \in \{1,\ldots,N\} \), where \( N \) is the total number of layers in the model.

Let the input to decoder layer \( \text{DEC}_i \) be \( y_{i-1} \) at the current generation step; the corresponding output at layer \( i \) is

\[ y_i = \text{DEC}_i(y_{i-1}) := \text{MLP}_i(\text{MHA}_i(y_{i-1})) , \] (1)

At \( i = 1 \), the input is \( y_0 = (y_{0,1},\ldots,y_{0,T-1},y_{0,T}) \), where \( T \) is the current timestamp and \( y_{0,t} \) is the token generated at a previous timestamp \( t < T \). Let the feature space for inputs of a downstream task be \( X \), with input tokens \( y_{0,t} \in X \), and let the feature space for generated output tokens be \( Y \), with \( y_{N,t} \in Y \) in Equation 2:

\[ y_N = \text{DEC}_N \circ \text{DEC}_{N-1} \circ \cdots \circ \text{DEC}_1(y_0) = f(y_0;\theta) , \] (2)

Our basic assumption is that for each downstream task there exists a feature space \( X \), where \( X \) can be described as a random variable from a distribution \( D_X \), and \( Y \) is a random variable from \( D_Y \). Our hypothesis is:

**Hypothesis 1** Let the set of all attention layers in Equation 1 be \( A \) and the set of all MLP layers be \( M \). For all input sequences \( y_0 \) generated from \( X \), there exists a set of attention and MLP layers \( A_X \subset A \), \( M_X \subset M \) such that the function composition of \( U_X = A_X \cup M_X \) can be fine-tuned on the joint distribution \( D_{XY} \) for the downstream task to obtain a function \( G_{U_X}(y_0) = y'_N \). It suffices that the output of the model, \( y'_N \), is generated with random variable \( Y' \) from \( D_{Y'} \), and that \( D_{Y'} \) is a close approximation of \( D_Y \) for the full model.

Note that the order of the function composition in \( U_X \) follows the original layer order in Equation 1. To validate our hypothesis, we track the performance of successive layer dropping on a wide range of QA datasets with different domain specializations, as our measure of the resemblance between \( D_{Y'} \) and \( D_Y \). Experiments are conducted on a set of widely adopted QA datasets, and performance change is tracked during fine-tuning with a small calibration dataset. Figure 1a indicates that the set \( U_X \) and layer-wise specialization exist, as Sapling gives competitive performance compared with the full fine-tuning baseline with as few as 40% of the layers.
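To make the notation in Equations (1)-(2) concrete, the following is a generic decoder-block sketch in PyTorch. This is our simplified illustration: layernorm and residual connections are folded into each sub-layer as assumed above, and the actual LLaMA block differs in details (RMSNorm, rotary embeddings, causal masking, which we omit here for brevity).

```python
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One layer DEC_i = MLP_i(MHA_i(.)) as in Eq. (1), with pre-norm residuals."""
    def __init__(self, d, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, y):
        h = self.ln1(y)
        y = y + self.attn(h, h, h, need_weights=False)[0]  # MHA_i with residual
        return y + self.mlp(self.ln2(y))                   # MLP_i with residual
```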
### 3.2 Fine-Tuning with Successive Layer Dropping

In addition to the ordinary fine-tuning procedure for language models, Sapling iteratively picks a layer to drop after each epoch of training and gradually reduces the model depth. This gives Sapling the advantage of reduced memory consumption and inference latency at deployment time. The empirical results in Figure 1b indicate that, among different layer-dropping schemes, successive layer dropping during fine-tuning performs much better than batched layer dropping before or after fine-tuning. In other words, drastically changing the model from \( f(y_0; \theta_0) \rightarrow G_{U_X}(y_0; \theta_f) \) by dropping many parameters at a time generally gives poor results (Syed et al., 2023). The function \( G_{U_X}(y_0; \theta_f) \) maps the generated outputs to a distribution \( D_{Y_f} \) that is very distinct from \( D_Y \), resulting in poor domain-specific performance. Note that \( \theta_f \) is the parameter vector and \( D_Y \) the output distribution of the full model after fine-tuning.

Successive layer dropping, on the other hand, allows domain-specific specialization to proceed step by step, \( f(y_0; \theta_0) \rightarrow G_{U_{X_1}}(y_0; \theta'_1) \rightarrow G_{U_{X_2}}(y_0; \theta'_2) \rightarrow \cdots \rightarrow G_{U_{X_f}}(y_0; \theta'_f) \), where \( \theta'_i \) is the parameter vector after \( i \) epochs and \( G_{U_{X_i}}(\cdot) \) is the model right after the \( i \)-th epoch, with \( U_{X_i} \) the corresponding set of remaining layers. This observation aligns with the intuition that gradually changing the function's parameterization, with the most important layers retained, allows the generated outputs to transition more smoothly from \( D_{Y_0} \rightarrow D_{Y_1} \rightarrow \cdots \rightarrow D_{Y_f} \), such that \( D_{Y_f} \) is a close approximation of \( D_Y \) for the full model after fine-tuning. It thereby provides more evidence for our hypothesis in Section 3.1, with an additional constraint:

**Proposition 1** The functional \( R : f(\cdot) \rightarrow G_{U_{X_f}}(\cdot) \) needs to be decomposed into successive layer-dropping operators \( \{r_0, \ldots, r_f\} \) such that the dimensionality of the parameter vector \( \theta'_i \) changes only by a small decrement at a time, gradually adapting to a downstream task with the most representative parameters.

Due to the iterative nature of the layer-dropping algorithm, the time complexity of fine-tuning increases from \( O(1) \) to \( O(N) \) epochs, where \( N \) is the number of layers to be dropped. In practical scenarios, this approach lets users efficiently trade a longer model adaptation time for improved inference-time performance, which aligns with the typical development-deployment cycle of many real-world applications: developers often have the flexibility to accommodate longer development periods but place high demands on deployment-time performance. For instance, in situations characterized by labor shortages, smaller specialized LLMs designed for medical and financial QA tasks, with low latency, become the preferred choice for large-scale deployment in clinical and banking services.

### 3.3 Target Selection Algorithms

One important aspect of Sapling is choosing the right layer from \( U_{X_i} \) to drop after the \( i \)-th epoch, thereby satisfying the successive distribution-shift condition (Proposition 1). We introduce two techniques to assign each layer an importance score, where a lower importance score means the layer contributes less to the model's performance on a downstream task.

The first method is a performance scan based on a small calibration dataset. Before each layer drop, a small subset of the fine-tuning dataset's validation set is sampled as the calibration dataset. For each layer, its importance score is inversely related to the model's performance after dropping that layer. Calibration scanning gives the importance score of any layer \( i \) via Equation (3), where \( a_i \in [0, 100] \) is the accuracy of the model after dropping the \( i \)-th layer and \( \delta \) is a small positive number such that \( \frac{100}{1+\delta^2} \) is the maximum importance score when \( a_i = 0 \).

\[ s_{i,\text{scan}} = \frac{100 - a_i}{(1 + \delta^2) + (1 + \delta) a_i} \] (3)
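A sketch of calibration scanning, under the same simplifying assumptions as before (hypothetical `model.layers` list and an `evaluate` helper returning accuracy in [0, 100] on the calibration set):

```python
import copy

def calibration_scores(model, calib_set, evaluate, delta=1e-3):
    """Importance score of Eq. (3) for every remaining layer."""
    scores = []
    for i in range(len(model.layers)):
        trial = copy.deepcopy(model)
        del trial.layers[i]                       # temporarily drop layer i
        a = evaluate(trial, calib_set)            # accuracy in [0, 100]
        scores.append((100 - a) / ((1 + delta**2) + (1 + delta) * a))
    return scores
```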
The second method compares the activation norms of different layers. Recent studies (Dettmers et al., 2022; Xiao et al., 2023; Sajjad et al., 2023) have shown that preserving the information carried by activations is critical to a model's performance under compression. In prior model-compression works, entry-wise absolute values of each layer's activation tensor are tracked; all outliers with large magnitudes are identified and guarded, as failing to preserve their accuracy would result in general performance degradation across many tasks. In our work, the goal is to preserve only the activations that are meaningful to the knowledge domain of interest; we can drop the rest to trade the model's generality for efficiency and specialization. A new metric is therefore needed to quantify the importance of an activation.

Our assumption in Section 3.1 is that there exists a feature space \( \mathcal{X} \) with a correspondingly low intrinsic dimension (Aghajanyan et al., 2020). Since activation tensors with higher entry-wise matrix norms generally have higher ranks, layers that map inputs to high-rank representations with sparse domain-specific knowledge are less preferred, as they contradict our basic assumption. Hence, we use an activation-norm metric to identify and drop the layers with high entry-wise matrix norms. Among common matrix norms, including the \( \ell_{2,1} \) norm, the Frobenius norm, and the nuclear norm, at the same numerical value the Frobenius norm usually corresponds to dense, high-rank matrices, while the nuclear norm is more likely to correspond to low-rank ones (Yu & Yiqian, 2018). We therefore choose the Frobenius norm to identify activations with high-rank representations and sparse domain-specific knowledge; dropping the layer with the highest norm is analogous to Frobenius-norm minimization. Let \( \{ \|X_j\|_F \} \) be the set of Frobenius norms over all remaining layers of the model \( f(\cdot) \). The activation-norm importance score is given by Equation (4), such that \( s_{i,\text{norm}} \in (0, 100] \).

\[ s_{i,\text{norm}} = \frac{100 \min_j \{ \|X_j\|_F \}}{\|X_i\|_F} \] (4)
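This score is straightforward to compute from cached activations; a sketch (ours, assuming `acts[i]` holds a representative activation matrix for layer i):

```python
import numpy as np

def activation_norm_scores(acts):
    """Importance score of Eq. (4): layers whose activations have large
    Frobenius norm receive low scores and are dropped first."""
    norms = np.array([np.linalg.norm(A, ord="fro") for A in acts])
    return 100.0 * norms.min() / norms
```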
### 3.4 Sparse Update as a Regularization

In Sapling, an important observation is that some less important layers will eventually be dropped regardless of whether they have been tuned. Moreover, evidence shows that fine-tuning all layers can, in effect, perform worse than updating only a selection of the more important layers. There are two reasons for this possible performance degradation. First, catastrophic forgetting is a well-recognized problem when a language model is trained on downstream data with all parameters updated (Lee et al., 2022). Second, layer dropping in Sapling is conducted on the premise that some layers carry less information for a task and can be discarded, whereas fine-tuning all layers rests on the contradictory premise that all layers need updating for downstream adaptation. It is therefore natural to adopt a sparse update scheme in which we update only the layers with the greatest chance of being kept after layer dropping.

To identify which layers to update and which to freeze, we run layer-wise importance-score scanning with a calibration dataset before any fine-tuning is done. This gives an initial distribution of all layers' importance scores and of their probability of being dropped in the first epoch. As shown in Section 4.3, the initial distribution is highly correlated with later ones, so we can assume that fine-tuning with layer dropping does not significantly disturb each layer's importance score, and use the initial distribution to infer each layer's overall probability of being dropped. For a sparse update ratio \( r \), at most \( N' = r \times N \) layers are updated in Sapling. Any of the \( N' \) layers may still be dropped during fine-tuning; each time this occurs, no additional layers are made trainable.
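A sketch of the initial freezing step (our illustration, assuming the same hypothetical `model.layers` list; `scores` come from the initial calibration scan of §3.3):

```python
def freeze_for_sparse_update(model, scores, r):
    """Keep only the top r*N layers (by importance score) trainable (Sec. 3.4)."""
    n_train = max(1, int(r * len(scores)))
    keep = set(sorted(range(len(scores)), key=lambda i: scores[i],
                      reverse=True)[:n_train])
    for i, layer in enumerate(model.layers):
        for p in layer.parameters():
            p.requires_grad = i in keep
```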
### 4 Experiments

In this section, we present experiments that provide empirical evidence for our hypothesis as well as for the effectiveness of Sapling. The test suite spans a wide range of knowledge domains, including common-sense, medical, legal, and financial QA benchmarks, to demonstrate Sapling's generalizability across tasks. All experiments reported in this section are conducted on LLaMA-7B, with training and testing performed on NVIDIA V100 32GB servers.

Figure 2. The Pareto frontier of LLaMA-7B-Sapling on SciQ and MedMCQA. Sapling has a much wider spectrum of operating points to fit the model onto different hardware with competitive performance.

Table 2. Performance comparison of LLaMA-7B variants on QA benchmarks (accuracy, %). Sapling* here refers to Sapling with sparse update at $r = \frac{1}{4}$, calibration scanning, and the activation-norm tie-breaker. For Sparse-FT, the frozen layers are determined by calibration scanning with $r = \frac{1}{4}$.

| models | PIQA | SciQ | MedMCQA | LexGLUE-casehold | FinanceQA | Final Mem |
|-----------------|------|------|---------|------------------|-----------|-----------|
| human (expert) | N/A | N/A | 80.0 | N/A | N/A | N/A |
| LLaMA-7B | 77.4 | 89.7 | 22.4 | 32.1 | 33.6 | 100% |
| + Full-FT | 82.4 | 95.6 | 54.6 | 42.9 | 45.1 | 100% |
| + Sparse-FT | 83.1 | 95.4 | 53.7 | 43.4 | 46.9 | 100% |
| + LLM.int8() | 81.7 | 93.6 | 54.0 | 42.0 | 44.9 | > 50% |
| + AWQ-int4 | 78.7 | 91.8 | N/A | N/A | N/A | > 25% |
| + Sapling* | 78.1 | 93.4 | 48.6 | 41.9 | 43.2 | > 50% |
| + Sapling* | 74.6 | 91.6 | 47.5 | 39.5 | 41.3 | > 40% |
| + Sapling* | 68.5 | 87.3 | 45.8 | 36.8 | 38.0 | > 30% |

4.1 PERFORMANCE ON QA BENCHMARKS

To test which of the methods can compress the model to the greatest extent while maintaining more than 90% of the full fine-tuning baseline's performance, we compare the performance of different sparse update schemes and target selection algorithms; the results are summarized in Table 3. On each QA benchmark, we also compare the best specialized model obtained from Sapling against other model compression techniques; these results are presented in Table 2.

**Methods.** In addition to the two target selection methods introduced in Section 3.3, we devise a two-step algorithm that leverages both methods, corresponding to the entry "both" in Table 3. This method adopts the more effective calibration scanning as the primary target-selection criterion and uses activation-norm comparison as the tie-breaker when more than one layer has the same importance score from calibration scanning. As Table 3 shows, the two-step algorithm gives the best specialized model at every sparse update ratio. For each of the three methods, we evaluate the specialized models' performance when trained with sparse update ratios $r \in \{1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}\}$. As Table 3 shows, Sapling performs worst when all layers are updated (sparse update ratio $r = 1$). With a ratio of $r = \frac{1}{4}$, the model can be compressed to the greatest extent, with more than 20 decoder layers dropped, while maintaining satisfactory accuracy ($\geq 90\%$ of the fully fine-tuned model).

**Baselines.** We use full fine-tuning (Full-FT) as our most basic baseline. We also include a sparse fine-tuning (Sparse-FT) baseline that updates only the salient layers identified by calibration scanning with the optimal sparse update ratio ($r = \frac{1}{4}$). While LLM pruning approaches can give inference speedup, as shown in Table 1, they are generally incapable of reducing memory consumption without hardware support. We therefore benchmark Sapling against state-of-the-art LLM quantization techniques, LLM.int8(), GPTQ, and AWQ, as stronger baselines that permit both memory saving and potential inference speedup.

**QA benchmarks.** We use common-sense QA benchmarks, including SciQ (Welbl et al., 2017) and PIQA (Bisk et al., 2020), to test LLMs' ability to understand and make basic inferences about the physical world the way ordinary humans do. To further assess Sapling's capacity for domain-specific adaptation, we also evaluate its performance on medical, legal, and financial QA datasets: MedMCQA (Pal et al., 2022), LexGLUE-casehold (Chalkidis et al., 2021), and FinanceQA (Bharti, 2023), respectively. For LexGLUE, evaluations are done on the "law" subset of MMLU (Hendrycks et al., 2020). For FinanceQA, the dataset combines FiQA (Maia et al., 2023), Stanford-Alpaca (Taori et al., 2023), and ChatGPT QA dialogues.

Table 3. Performance comparison of LLaMA-7B Sapling variants on QA benchmarks with combinations of sparse update techniques (Section 3.4) and target selection algorithms (Section 3.3). Final model sizes are obtained by running Sapling variants, where layer dropping stops at the moment performance degrades below 90% of the Full-FT baseline on average. For Sparse-FT, the frozen layers are determined by calibration scanning with $r = \frac{1}{4}$.

| methods | PIQA | SciQ | MedMCQA | LexGLUE-casehold | FinanceQA | Final Mem |
|------------------|------|------|---------|------------------|-----------|-----------|
| **LLaMA-7B** | | | | | | |
| w/o fine-tuning | 77.4 | 89.7 | 22.4 | 32.1 | 33.6 | 100% |
| + Full-FT | 82.4 | 95.6 | 54.6 | 42.9 | 45.1 | 100% |
| + Sparse-FT | 83.1 | 95.4 | 53.7 | 43.3 | 46.9 | 100% |
| **LLaMA-7B-Sapling ($r = 1$)** | | | | | | |
| + calibration | 72.2 | 85.3 | 45.2 | 36.0 | 41.3 | ≥ 70% |
| + activation-norm| 74.6 | 44.1 | 41.5 | 34.2 | 39.9 | ≥ 80% |
| + both | 73.5 | 89.1 | 46.8 | 36.5 | 40.4 | ≥ 55% |
| **LLaMA-7B-Sapling ($r = \frac{1}{2}$)** | | | | | | |
| + calibration | 73.1 | 86.2 | 44.9 | 37.1 | 40.3 | ≥ 50% |
| + activation-norm| 74.6 | 41.3 | 39.0 | 35.2 | 38.6 | ≥ 75% |
| + both | 74.5 | 89.4 | 47.6 | 37.5 | 39.8 | ≥ 40% |
| **LLaMA-7B-Sapling ($r = \frac{1}{4}$)** | | | | | | |
| + calibration | 74.5 | 86.7 | 45.3 | 36.7 | 41.2 | ≥ 40% |
| + activation-norm| 72.7 | 84.6 | 43.5 | 34.9 | 39.5 | ≥ 70% |
| + both | 73.1 | 88.9 | 47.0 | 38.0 | 39.8 | ≥ 35% |
| **LLaMA-7B-Sapling ($r = \frac{1}{8}$)** | | | | | | |
| + calibration | 73.2 | 86.3 | 43.5 | 37.4 | 40.6 | ≥ 60% |
| + activation-norm| 74.6 | 83.1 | 41.0 | 33.5 | 39.2 | ≥ 70% |
| + both | 74.4 | 90.2 | 44.7 | 38.4 | 39.5 | ≥ 45% |
Evaluations are conducted on the "economics" subset of MMLU for its pertinence to financial knowledge.

4.2 Memory Consumption and Latency

We argue that Sapling has a two-fold advantage: efficiency and flexibility. On the efficiency side, Sapling provides both deployment-time memory saving and inference speedup. We compare the specialized models obtained from Sapling with the quantization baselines in Table 1 and Figure 2. State-of-the-art quantization techniques reduce inference-time memory consumption to nearly a quarter of the original size; Sapling instead exploits the model-depth degree of freedom and achieves competitive memory saving with faster inference (Table 1). On the flexibility side, as Figure 2 shows, quantization and pruning offer only a limited set of operating points, one per bit-precision scheme for each model. Since the sparsity ratio in pruning cannot easily be translated into memory saving, pruning often gives even fewer operating points in the trade-off space. In contrast, the Pareto frontiers of Sapling span a wide range of operating points, so Sapling is more flexible and capable of fitting a model to a wide spectrum of hardware.

4.3 Ablation Studies

In this section, we conduct ablation studies to cross-validate the performance of specialized models on other tasks, various layer-dropping patterns, and different levels of layer-dropping granularity.

Table 4. Performance of specialized LLaMA-7B on other QA benchmarks. The percentage in parentheses indicates the fraction of total parameters remaining in the specialized model.

| model | PIQA | SciQ | MedMCQA | LexGLUE-casehold | FinanceQA |
|----------------------------|------|------|---------|------------------|-----------|
| w/o fine-tuning (100%) | 77.4 | 89.7 | 22.4 | 32.1 | 33.6 |
| PIQA specialized (40%) | 74.6 | 81.1 | 14.4 | 17.8 | 18.2 |
| SciQ specialized (40%) | 61.5 | 90.6 | 18.9 | 13.0 | 16.5 |
| MedMCQA specialized (40%) | 54.9 | 78.2 | 47.5 | 12.4 | 14.8 |
| LexGLUE specialized (40%) | 62.4 | 73.1 | 9.1 | 39.5 | 18.3 |
| FinanceQA specialized (40%) | 55.3 | 72.5 | 13.8 | 21.7 | 38.0 |

Figure 3. Layer-dropping patterns when Sapling (calibration + activation-norm tie-breaker) is applied to LLaMA-7B on QA benchmarks. Results for the first 32 iterations are shown; at this point the model has been reduced to half its original size with nearly no performance loss, as evidenced in Table 2. The value -1 is assigned to discarded layers, as accuracy no longer applies.

**Performance cross-validation** tests each specialized model's performance degradation on other domain-specific tasks, providing further empirical evidence for the existence of layer-wise specialization. Results for each specialized model on the other tasks are given in Table 4.

**Layer-dropping patterns** for each downstream task are shown in Figure 3, from which a few key observations can be made: (1) LLaMA-7B exhibits different layer-dropping patterns on different tasks, and (2) significantly more MLP layers are dropped than self-attention layers. The first observation provides further empirical evidence for layer-wise specialization, while the second supports knowledge localization, which argues that domain knowledge is stored in MLPs.

**Multi-layer dropping** results are provided in Figure 1b, where we try dropping two layers at a time to see how well the specialized model retains its performance.
However, we find that dropping more than one layer at a time breaks the layer-dropping pattern: cases where two or more consecutive MLP and attention layers are removed together result in a sudden accuracy drop.

5 CONCLUSION

We propose Sapling, a task-specific adaptation and model compression pipeline for contemporary LLMs. Sapling reduces deployment-time memory cost and inference latency by identifying and discarding less significant layers, reducing the specialized model's depth. Unlike the baselines, Sapling obtains both wall-clock inference speedup and memory saving without the need for specialized hardware or efficient computational kernels. We hope that Sapling paves the path toward making LLMs accessible to the wider public in personal and professional use cases.

6 ETHICS STATEMENT

While increasing accessibility and lightweighting language models can extend their usability to a wider audience, there are notable downsides to consider. Specializing LLMs may result in reduced accuracy and sophistication in other aspects, making them less capable of handling complex tasks that require knowledge from multiple domains. Furthermore, higher accessibility means users with malicious intent could exploit these models more easily. Striking a balance between accessibility and maintaining the integrity and reliability of language models is essential to ensure their responsible use in various applications.

REFERENCES

Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255, 2020.

Amos Azaria and Tom Mitchell. The internal state of an LLM knows when it's lying. arXiv preprint arXiv:2304.13734, 2023.

Gaurang Bharti. gbharti/finance-alpaca, 2023. URL https://huggingface.co/datasets/gbharti/finance-alpaca. Accessed: 2023-09-20.

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.

Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras. LexGLUE: A benchmark dataset for legal language understanding in English. arXiv preprint arXiv:2110.00976, 2021.

Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, and Li Yuan. ChatLaw: Open-source legal large language model with integrated external knowledge bases. arXiv preprint arXiv:2306.16092, 2023.

Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339, 2022.

Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. SpQR: A sparse-quantized representation for near-lossless LLM weight compression. arXiv preprint arXiv:2306.03078, 2023.

Elias Frantar and Dan Alistarh. Massive language models can be accurately pruned in one-shot. arXiv preprint arXiv:2301.00774, 2023a.

Elias Frantar and Dan Alistarh. SparseGPT: Massive language models can be accurately pruned in one-shot. 2023b.

Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.

Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913, 2020.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.

Quzhe Huang, Mingxu Tao, Zhenwei An, Chen Zhang, Cong Jiang, Zhibin Chen, Zirui Wu, and Yansong Feng. Lawyer LLaMA. https://github.com/AndrewZhe/lawyer-llama, 2023a.

Quzhe Huang, Mingxu Tao, Zhenwei An, Chen Zhang, Cong Jiang, Zhibin Chen, Zirui Wu, and Yansong Feng. Lawyer LLaMA technical report. ArXiv, abs/2305.15062, 2023b.

Johannes Welbl, Nelson F. Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. 2017.
MCUvAc1GTg
In Section 4.3, in your data augmentation method, you propose to permute the node labels of the perturbed graphs. GNNs should be either permutation equivariant or permutation invariant; in either case, the permutation of node labels in your perturbed graphs should be inconsequential. Could you therefore please motivate the node permutation in your data augmentation method?
Network Alignment with Transferable Graph Autoencoders

Anonymous authors
Paper under double-blind review

Abstract

Network alignment is the task of establishing one-to-one correspondences between the nodes of different graphs and finds a plethora of applications in high-impact domains. However, this task is known to be NP-hard in its general form, and existing algorithms do not scale up as the size of the graphs increases. To tackle both challenges, we propose a novel generalized graph autoencoder architecture, designed to extract powerful and robust node embeddings that are tailored to the alignment task. We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs and can achieve more accurate alignment than classical spectral methods. Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a large scale without retraining. Extensive experiments on both network and sub-network alignment with real-world graphs provide corroborating evidence supporting the effectiveness and scalability of the proposed approach.

1 Introduction

Network alignment, also known as graph matching, is a classical problem in graph theory that aims to find node correspondences across different graphs; it is vital in a number of high-impact domains (Emmert-Streib et al., 2016). In social networks, for instance, network alignment has been used for user deanonymization (Nilzadeh et al., 2014) and analysis (Ogaard et al., 2013), while in bioinformatics it is a key tool to identify functionalities in protein complexes (Singh et al., 2008) or gene-drug modules (Chen et al., 2018). Graph matching also finds application in computer vision (Conte et al., 2003), sociology (Racz & Sridhar, 2021), and politics (Li et al., 2022), to name a few.

Graph matching can be cast as a quadratic assignment problem (QAP), which is in general NP-hard (Koopmans & Beckmann, 1957). Various approaches have been developed to tackle network alignment; they can be divided into two main categories: i) optimization algorithms that approximate the QAP by relaxing the combinatorial constraints, and ii) embedding methods that approach the problem by implicitly or explicitly generating powerful node embeddings that facilitate the alignment task. Optimization approaches such as (Anstreicher & Brixius, 2001; Vogelstein et al., 2015) employ quadratic programming relaxations, while (Klau, 2009) and (Peng et al., 2010) utilize semidefinite and Lagrangian-based relaxations, respectively. Successive convex approximations were also proposed by (Konar & Sidiropoulos, 2020) to handle the QAP. Challenges associated with these methods include high computational cost, infeasible solutions, and near-optimal initialization requirements. Embedding methods, on the other hand, overcome these challenges, but they usually produce inferior solutions due to an inherent trade-off between embedding permutation equivariance and the ability to capture the structural information of the graph. Typical embedding techniques include spectral and factorization methods (Umeyama, 1988; Feizi et al., 2019; Zhang & Tong, 2016; Kanatsoulis & Sidiropoulos, 2022), structural feature engineering methods (Berlingerio et al., 2013; Heimann et al., 2018), and random walk approaches (Perozzi et al., 2014; Grover & Leskovec, 2016a).
Recently, (Chen et al., 2020; Karakasis et al., 2021) proposed joint node embedding and network alignment to overcome these challenges, but these methods do not scale up as the size of the graph increases.

Graph Neural Networks (GNNs) are powerful architectures that learn graph representations (embeddings). They have shown state-of-the-art performance in several tasks, including biology (Ganzà et al., 2020; Strokach et al., 2020; Jiang et al., 2021), quantum chemistry (Gilmer et al., 2017), and social networks and recommender systems (Ying et al., 2018; Wu et al., 2020). Recently, Gao et al. (2021a) proposed a GNN approach to match attributed graphs. The method used a joint embedding framework for pairs of graphs and achieved high matching accuracy. However, it does not scale to large graphs, since training on graphs of large size is computationally prohibitive.

To address these challenges, we propose a novel self-supervised GNN framework to perform network alignment at a large scale. Specifically, we design a generalized transferable graph autoencoder (T-GAE, shown in Fig. 1) that produces permutation-equivariant and highly expressive embeddings, overcoming the challenges of other embedding techniques. T-GAE is trained on multiple graphs and learns node representations tailored to aligning nodes of different graphs. The T-GAE representations combine the eigenvectors of the graph in a nonlinear fashion and are provably at least as good at network alignment as certain spectral methods. Additionally, the proposed framework leverages transfer learning and data augmentation to operate efficiently on large graphs: training is performed with small graphs in a self-supervised manner, and the trained encoder can then be executed on large graphs to tackle network alignment at scale. Extensive experiments with real-world benchmarks test the effectiveness and limits of the proposed T-GAE approach on the tasks of graph and sub-graph matching. The experimental results provide corroborating evidence that T-GAE offers an elegant framework for large-scale network alignment. Our contributions are summarized as follows:

(C1) We propose T-GAE, a generalized graph autoencoder architecture that can be trained with multiple graphs and produces expressive, permutation-equivariant representations tailored to network alignment.

(C2) We draw the connection between T-GAE and spectral methods and prove that T-GAE is at least as good at graph matching as the absolute values of the graph eigenvectors.

(C3) We leverage data augmentation and transfer learning to develop a robust framework that efficiently performs network alignment at a large scale.

(C4) We demonstrate the effectiveness and scalability of the proposed T-GAE on real-world benchmark graphs in challenging graph and sub-graph matching settings.

2 PRELIMINARIES

Graphs are represented by $\mathcal{G} := (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \ldots, N\}$ is the set of vertices (nodes) and $\mathcal{E} = \{(v, u)\}$ is the set of edges between pairs of vertices. A graph is represented in matrix form by a graph operator $S \in \mathbb{R}^{N \times N}$, where $S(i, j)$ quantifies the relation between node $i$ and node $j$, and $N = |\mathcal{V}|$ is the total number of vertices. In this work, we use the graph adjacency and the normalized graph adjacency.
Oftentimes, the nodes of the graph are associated with graph signals or node attributes $X \in \mathbb{R}^{N \times D}$ that encode additional information about the nodes. In this paper, we study network alignment of graphs both with and without attributes.

### 2.1 Network Alignment

**Definition 1 (Network Alignment).** Given a pair of graphs $\mathcal{G} := (\mathcal{V}, \mathcal{E})$, $\hat{\mathcal{G}} := (\hat{\mathcal{V}}, \hat{\mathcal{E}})$ with graph adjacencies $S$, $\hat{S}$, network alignment aims to find a bijection $g : \mathcal{V} \rightarrow \hat{\mathcal{V}}$ that minimizes the number of edge disagreements between the two graphs. Formally, the problem can be written as:

$$\min_{P \in \mathcal{P}} \| S - P \hat{S} P^T \|_F^2, \quad (1)$$

where $\mathcal{P}$ is the set of permutation matrices. As mentioned in the introduction, network alignment is equivalent to the QAP, which has been proven to be NP-hard (Koopmans & Beckmann, 1957).

### 2.2 Spectral Decomposition of the Graph

A popular approach to network alignment is to learn powerful node embeddings that encode connectivity information in the graph. Network alignment can then be achieved by matching the node embeddings of different graphs rather than the graph adjacencies:

$$\min_{P \in \mathcal{P}} \| E - P \hat{E} \|_F^2, \quad (2)$$

where $E \in \mathbb{R}^{N \times F}$ is the embedding matrix and $E[i, :]$ is the vector representation of node $i$. The optimization problem in (2) is a linear assignment problem and can be solved optimally in $O(N^3)$ by the Hungarian method (Kuhn, 1955b). Simpler sub-optimal alternatives also exist that operate with $O(N^2)$ or $O(N \log N)$ flops.

A question that naturally arises is how to generate powerful node embeddings that capture the network connectivity and are also effective in aligning different graphs. A natural and effective approach is to leverage the spectral decomposition of the graph, $S = V \Lambda V^T$, where $V$ is the orthonormal matrix of eigenvectors and $\Lambda$ is the diagonal matrix of corresponding eigenvalues. Note that we assume undirected graphs, so $S$ is symmetric. Spectral decomposition has been proven to be an efficient approach to generating meaningful node embeddings for graph matching (Umeyama, 1988; Feizi et al., 2019). In particular, $E = V$ or $E = V \Lambda$ are node embeddings that capture the network connectivity, since they can perfectly reconstruct the graph. However, $V$ is not unique; thus, computing the spectral decomposition of the same graph with node relabelling, $\hat{S} = P S P^T$, is not guaranteed to produce a permuted version of $V$, i.e., $P V$. Even when $S$ does not have repeated eigenvalues, $V$ is only unique up to column signs, which prevents effective matching.

To overcome this uniqueness limitation, one can focus on the top $m$ eigenvectors that correspond to non-repeated eigenvalues in both $S$ and $\hat{S}$ and take their absolute values. Network alignment can then be cast as:

$$\min_{P \in \mathcal{P}} \| |V_m| - P |\hat{V}_m| \|_F^2, \quad (3)$$

where $V_m \in \mathbb{R}^{N \times m}$ corresponds to the subspace of non-repeated eigenvalues. The formulation in (3) is similar to the problem solved in (Umeyama, 1988).
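As a concrete illustration of (3), the following sketch (ours, assuming two symmetric adjacencies of equal size whose top-$m$ eigenvalues are simple) matches absolute eigenvector features with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(S1, S2, m):
    """Umeyama-style matching of Eq. (3) on two symmetric adjacencies."""
    _, V1 = np.linalg.eigh(S1)   # eigenvalues returned in ascending order
    _, V2 = np.linalg.eigh(S2)
    A = np.abs(V1[:, -m:])       # top-m absolute eigenvectors of graph 1
    B = np.abs(V2[:, -m:])       # top-m absolute eigenvectors of graph 2
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # squared distances
    _, col = linear_sum_assignment(cost)                   # Hungarian algorithm
    return col                   # col[i]: node of graph 2 matched to node i of graph 1
```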
### 3 Graph Neural Networks (GNNs) Upper-Bound Spectral Methods for Network Alignment

A GNN is a cascade of layers that performs local, message-passing operations, usually defined by the following recursive equation:

$$x_v^{(l+1)} = g\left(x_v^{(l)}, f\left(\{x_u^{(l)} : u \in \mathcal{N}(v)\}\right)\right),$$ (4)

where \( \mathcal{N}(v) \) is the neighborhood of vertex \( v \), i.e., \( u \in \mathcal{N}(v) \) iff \((u, v) \in \mathcal{E}\). The function \( f \) operates on multisets (\(\{\cdot\}\) represents a multiset), and \( f, g \) are ideally injective. Common choices for \( f \) are the summation or mean function, and for \( g \) the linear function or the multi-layer perceptron (MLP). Overall, the output of the \( L \)-th layer of a GNN is a function \( \phi(X; S, \mathcal{H}) : \mathbb{R}^{N \times D} \rightarrow \mathbb{R}^{N \times D_L} \), where \( S \) is the graph operator and \( \mathcal{H} \) is the tensor of trainable parameters of all \( L \) layers; it produces \( D_L \)-dimensional embeddings for the nodes of the graph defined by \( S \). GNNs admit some very valuable properties. First, they are permutation equivariant:

**Theorem 3.1** ([Xu et al., 2019b], [Maron et al., 2018]). Let \( \phi(X; S, \mathcal{H}) : \mathbb{R}^{N \times D} \rightarrow \mathbb{R}^{N \times D_L} \) be a GNN with parameters \( \mathcal{H} \). For \( \tilde{X} = PX \) and \( \tilde{S} = PSP^T \), which correspond to node relabelling according to the permutation matrix \( P \), the output of the GNN takes the form:

\[ \tilde{X}^{(L)} = \phi(\tilde{X}; \tilde{S}, \mathcal{H}) = P \phi(X; S, \mathcal{H}). \]

This property is not satisfied by the spectral methods above. GNNs are also stable (Gama et al., 2020), transferable (Ruiz et al., 2020), and have high expressive power (Xu et al., 2019b; Abboud et al., 2021; Kanatsoulis & Ribeiro, 2022).

### 3.1 GNNs and Network Alignment

To characterize the ability of a GNN to perform network alignment, we first point out that GNNs perform nonlinear spectral operations; details can be found in Appendix B. We can prove the following:

**Theorem 3.2.** Let \( G, \hat{G} \) be graphs with adjacencies \( S, \hat{S} \) that have non-repeated eigenvalues. Also let \( P^\circ, P^\dagger \) be solutions to the optimization problems in (1) and (3), respectively. Then there exists a GNN \( \phi(X; S, \mathcal{H}) : \mathbb{R}^{N \times D} \rightarrow \mathbb{R}^{N \times D_L} \) such that:

\[ \| S - P^\circ \hat{S} P^{\circ T} \|_F^2 \leq \| S - P^* \hat{S} P^{* T} \|_F^2 \leq \| S - P^\dagger \hat{S} P^{\dagger T} \|_F^2, \]

with

\[ P^* = \arg \min_{P \in \mathcal{P}} \| \phi(X; S, \mathcal{H}) - P \phi(\hat{X}; \hat{S}, \mathcal{H}) \|_F^2. \]

The proof can be found in Appendix C. The assumption that the graph adjacencies have distinct eigenvalues is not restrictive: real non-isomorphic graphs have distinct eigenvalues with very high probability (Haemers & Spence, 2004). Theorem 3.2 compares the network-alignment power of a GNN with that of a spectral algorithm (Umeyama, 1988) that uses the absolute values of the graph-adjacency eigenvectors to match two different graphs. According to Theorem 3.2, there always exists a GNN that performs at least as well as this spectral approach. The proof studies a GNN with white random input and measures the variance of the filter output; it then shows that GNN layers can compute the absolute values of the graph-adjacency eigenvectors when the adjacency has non-repeated eigenvalues. As a result, there always exists a single-layer GNN that outputs the same node features as the ones used in (Umeyama, 1988), which concludes the proof.
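For concreteness, here is a generic message-passing layer implementing the update in Eq. (4), with sum aggregation via the graph operator \( S \) (playing the role of \( f \)) and an MLP for \( g \). This is our illustrative sketch, not necessarily the exact encoder used in T-GAE.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One GNN layer: x_v' = g(x_v, sum of neighbour features), cf. Eq. (4)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.ReLU())

    def forward(self, X, S):
        # S: (N, N) graph operator (e.g., normalized adjacency); X: (N, d_in)
        agg = S @ X                                   # f = sum over neighbours
        return self.g(torch.cat([X, agg], dim=-1))    # combine self and aggregate
```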
### 4 Proposed Method

We now leverage the favorable properties of GNNs (permutation equivariance, expressivity, and transferability) and design a GNN approach to tackle network alignment at a large scale. Our approach learns low-dimensional node embeddings (Eq. 4) that enable graph matching by solving the linear assignment problem in (2) rather than the quadratic assignment problem in (1). In this section, we design a robust GNN framework such that the node embeddings are expressive enough to accurately match similar nodes and also stable to graph perturbations, so that they yield high-quality network alignment.

#### 4.1 Learning Geometry Preserving Embeddings

A fundamental property of node embeddings is to preserve the geometry and topological characteristics of the network. This allows expressive node representations that can effectively approximate the original problem in (1) with the problem in (2). To achieve this goal we leverage an auto-encoder architecture that reconstructs the original graph from the node embeddings. Results on GNN expressivity indicate that this reconstruction is doable under specific conditions (Abboud et al., 2021). To build topology-preserving embeddings we solve the following optimization problem:

$$\min_{\mathcal{H}} \ l \left( \rho \left( \phi (X; S, H) \phi (X; S, H)^T \right), S \right),$$ (8)

where $l(\cdot)$ is the binary cross entropy (BCE) and $\rho(\cdot)$ is the logistic function.

#### 4.2 Large-Scale Node Representation Learning with Generalized Graph Auto-Encoders

The goal of the proposed framework is to learn a function that maps graphs to node representations and effectively matches nodes from different graphs. This function is modeled by a GNN encoder $\phi (X; S, H)$, where each layer is described by Eq. 4. The learned encoder should work for a family of training graphs $\{G_0, \ldots, G_i, \ldots, G_I\}$ with a set of adjacency matrices $\mathcal{S} = \{S_0, \ldots, S_i, \ldots, S_I\}$, rather than a single graph. So the idea is not to train an auto-encoder on a single graph, but to train a generalized graph auto-encoder by solving the following optimization problem:

$$\min_{\mathcal{H}} \ E \left[ l \left( \rho \left( \phi (X; S_i, H) \phi (X; S_i, H)^T \right), S_i \right) \right],$$ (9)

where $S_i \in \mathcal{S}$ is a realization from a family of graphs and the expectation (empirical expectation in practice) is computed over this graph family. The generalized framework in (9) learns a mapping from graphs to node representations, and can be applied to out-of-distribution graphs that have not been observed during training. This twist in the architecture enables node embedding and graph matching for large-scale graphs, where training is computationally prohibitive.

#### 4.3 Robust and Generalizable Node Representations with Self-Supervised Learning (Data Augmentation)

So far we proposed a convolutional framework to produce expressive node representations that are tailored to perform network alignment. In this subsection, we further upgrade our framework by ensuring the robustness and generalization ability of the proposed GNN mapping. In particular, for each graph $S_i \in \mathcal{S}$, we augment the training set with perturbed versions of $S_i$, described by the set of graph adjacencies $M_i = \{S_i^{(0)}, \ldots, S_i^{(j)}, \ldots, S_i^{(J)}\}$. To do so we add or remove edges with a certain probability, yielding $\tilde{S}_i \in M_i$ such that $\tilde{S}_i = S_i + M$, where $M \in \{-1, 0, 1\}^{N \times N}$.
Note that $M$ changes for each $\tilde{S}_i$, and $M[m,n]$ can be equal to 1 or −1 only if $S_i[m,n]$ is equal to 0 or 1, respectively. To train the proposed generalized graph auto-encoder we consider the following optimization problem:

$$\min_{\mathcal{H}} \ E_{\mathcal{S}} \left[ E_{M_i} \left[ l \left( \rho \left( \phi (X; \tilde{S}_i, H) \phi (X; \tilde{S}_i, H)^T \right), S_i \right) \right] \right],$$ (10)

where $E_{\mathcal{S}}$ is the expectation with respect to the family of graphs $\mathcal{S}$ and $E_{M_i}$ is the expectation with respect to the perturbed graphs $M_i$. In practice, $E_{\mathcal{S}}, E_{M_i}$ correspond to empirical expectations. Note that training according to (10) also benefits the robustness of the model, which is crucial in deep learning tasks (Wang et al., 2022). A schematic illustration of the training process can be found in Fig. 1.

Remark 4.1. (Large-scale network alignment by transference) The proposed framework learns a mapping $\phi : \mathcal{G} \rightarrow \mathbb{R}^{N \times F}$ that produces expressive and robust node representations for a family of graphs. This mapping is designed in such a way that the problem in (2) approximates the problem in (1) and allows solving network alignment in polynomial time. One of the main benefits of the proposed framework is that it enables large-scale network alignment. The transferability analysis of GNN encoders (Ruiz et al., 2020) suggests that we can train with small graphs and efficiently execute with much larger graphs when the substructures (motifs) that appear in the tested graphs were also partially observed during training. Since the proposed generalized graph auto-encoder is trained with multiple graphs, a variety of motifs are observed during training, which is not the case with a classical graph autoencoder trained on a single graph, and so the proposed GNN encoder can be transferred to large-scale graphs.

| Task | Dataset | \(|\mathcal{V}|\) | \(|\mathcal{E}|\) | # Aligned Edges | Network Type |
|--------------------|--------------------------|-------------------|------------------|-----------------|-----------------------|
| Graph Matching | C. elegans [Kunegis et al., 2013] | 453 | 2,025 | 2,025 | Interactome |
| | Arenas [Leskovec & Kleinberg, 2014] | 1,135 | 3,982 | 3,982 | Email Communication |
| | Douban [Zhang & Tong, 2016] | 3,906 | 7,215 | 7,215 | Social Network |
| | Cora [Sen et al., 2008] | 2,708 | 5,278 | 5,278 | Citation Network |
| | Dblp [Fan et al., 2016] | 17,718 | 52,867 | 52,867 | Citation Network |
| | Coauthor CS [Chen et al., 2018] | 18,533 | 81,894 | 81,894 | Coauthor Network |
| Subgraph Matching | ACM-DBLP [Zhang & Tong, 2019] | 9,872 | 39,561 | 6,352 | Citation Network |
| | Douban Online-Offline [Zhang & Tong, 2016] | 3,906 | 1,632 | 1,118 | Social Network |

Table 2: Summary of dataset statistics.

#### 4.4 Alignment and Complexity Analysis

After learning the powerful T-GAE node embeddings, network alignment is performed by solving the linear assignment problem in $O(N^3)$. An illustration of the assignment is presented in Fig. 2. The node features produced by T-GAE are used to calculate a pairwise distance matrix, followed by the greedy Hungarian algorithm to predict node correspondences. To analyze the complexity of our approach we study the 3 main parts of T-GAE: a) the design of the input structural features, b) the message-passing GNN that produces node embeddings, and c) the linear assignment algorithm.
The computation of our neighborhood-based structural features is expected to take \(O(|\mathcal{V}|)\) in real graphs, as proved in Henderson et al. (2011). The computational and memory complexities of the message-passing GNN are \(O(|\mathcal{V}|c^2 + |\mathcal{E}|c)\) and \(O(|\mathcal{V}|c)\), respectively, where \(c\) is the width of the GNN. The computational complexity to align the nodes of the graphs is \(O(|\mathcal{V}|^2)\), since we are using the suboptimal greedy Hungarian algorithm. If we want to optimally solve the linear assignment problem we need to use the Hungarian algorithm, which has \(O(|\mathcal{V}|^3)\) complexity. If we want to process large graphs, we can embed the nodes in a 1-dimensional space and use a sorting algorithm with complexity \(O(|\mathcal{V}| \log(|\mathcal{V}|))\) to perform the linear assignment. Overall, the complexity of T-GAE is \(O(|\mathcal{V}|^2)\), or \(O(|\mathcal{V}|c^2 + |\mathcal{E}|c + |\mathcal{V}| \log(|\mathcal{V}|))\) for large graphs.

### 5 Experiments

In this section, we evaluate the performance of the proposed framework on both graph and sub-graph alignment with various benchmark networks. We compare against several baselines and assess the performance of the competing methods in terms of matching accuracy, hit rate, and runtime.

#### 5.1 Datasets and Baselines

Table 2 provides a brief overview of the considered networks. Our comparisons are conducted with 3 categories of baseline methods: (a) **GNN-based methods**: WAlign (Gao et al., 2021b), GAE and VGAE (Kipf & Welling, 2016a); (b) **Graph/Node embedding techniques**: NetSimile (Berlingerio et al., 2013), Spectral (Umeyama, 1988), DeepWalk (Perozzi et al., 2014), Node2Vec (Grover & Leskovec, 2016b), GraphWave (Donnat et al., 2018) and LINE (Tang et al., 2015); (c) **Optimization-based graph matching algorithms**: S-GWL (Xu et al., 2019a), ConeAlign (Chen et al., 2020) and FINAL (Zhang & Tong, 2016). Note that LINE, VGAE, DeepWalk, and Node2Vec are omitted from some experiments since they show very poor performance. The reason behind that is that they are not permutation equivariant. GraphWave is also excluded from the sub-graph matching experiment, as it could not identify correlated nodes in two different graphs. In the case of graphs without attributes, FINAL is equivalent to the popular IsoRank (Singh et al., 2008) algorithm. FINAL is omitted in sub-graph matching experiments due to weak performance.

#### 5.2 Model Details

For graph matching experiments, we consider graphs without node attributes, and design the input to the GNN models using the 7 structural features proposed in (Berlingerio et al., 2013). The features include the degree of each node, the local and average clustering coefficients, and the number of edges, outgoing edges, and neighbors in each node's egonet. These input features are used for all GNN-based methods. As a result, the performance of NetSimile, vanilla GAE, and WAlign provides a measure to assess the benefit of using T-GAE for node embedding. As illustrated in Figure 1, the structure of our proposed encoder consists of two MLPs and a series of GNN layers. The node features are processed by a 2-layer MLP and passed to all the GNN layers. We add skip connections between this MLP layer and all the subsequent GNN layers. The outputs of all GNN layers are concatenated and passed to another 2-layer MLP, followed by a linear decoder to generate the reconstructed graph. The model is optimized end-to-end according to (10).
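For concreteness, the following is a condensed PyTorch sketch of training according to (10). The layer widths, the perturbation rate, and the plain sum-aggregation message passing are illustrative stand-ins rather than the exact T-GAE configuration, and graphs are assumed to be given as dense float tensors.

```python
import torch
import torch.nn.functional as F

def perturb(S, p):
    # Flip each upper-triangular entry independently with probability p,
    # i.e., add a non-edge or remove an edge (the matrix M in Eq. (10)).
    N = S.shape[0]
    mask = (torch.rand(N, N) < p).float().triu(1)
    mask = mask + mask.T                    # symmetric, disjoint triangles
    return S * (1 - mask) + (1 - S) * mask  # flipped entries

class Encoder(torch.nn.Module):
    # Simplified stand-in for the T-GAE encoder: input MLP, message-passing
    # layers with skip connections, and a concatenating output layer.
    def __init__(self, d_in, d_hid, n_layers=3):
        super().__init__()
        self.inp = torch.nn.Linear(d_in, d_hid)
        self.gnn = torch.nn.ModuleList(
            torch.nn.Linear(d_hid, d_hid) for _ in range(n_layers))
        self.out = torch.nn.Linear(n_layers * d_hid, d_hid)

    def forward(self, X, S):
        h, feats = torch.relu(self.inp(X)), []
        for layer in self.gnn:
            h = torch.relu(layer(S @ h)) + h   # sum aggregation + skip connection
            feats.append(h)
        return self.out(torch.cat(feats, dim=-1))

def train_step(model, opt, graphs, p=0.05):
    # One empirical pass over Eq. (10): embed a PERTURBED graph, then
    # reconstruct the CLEAN adjacency from the embedding Gram matrix via BCE.
    opt.zero_grad()
    loss = 0.0
    for X, S in graphs:                 # family of training graphs
        Z = model(X, perturb(S, p))
        loss = loss + F.binary_cross_entropy_with_logits(Z @ Z.T, S)
    loss.backward()
    opt.step()
    return loss.item()
```

Reconstructing the clean $S_i$ from the embeddings of its perturbed version is what pushes the encoder toward perturbation-invariant node representations.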
We test the performance of the proposed T-GAE framework by experimenting with three kinds of message-passing mechanisms on graphs, i.e., GCN (Kipf & Welling, 2016b), GIN (Xu et al., 2019b) and GNNc (described in Equation 11). These mechanisms correspond to different functions \( f \) and \( g \) in Equation 4. We report the performance of GIN in the main body and the others in Appendix G.

#### 5.3 Graph Matching Experiments

To test the performance of the competing methods, we first attempt to match the graphs of Table 2 with permuted and perturbed versions of them. In particular, let \( G \) be a graph of Table 2 with adjacency matrix \( S \). For each graph we produce 10 permuted-perturbed versions according to \( \hat{S} = P(S + M)P^T \), where \( M \in \{-1, 0, 1\}^{N \times N} \) and \( P \) is a permutation matrix. For each perturbation level \( p \in \{0, 0.1\%, 1\%, 5\%\} \), the total number of perturbations is defined as \( p|E| \), where \( |E| \) is the number of edges of the original graph. Every edge and non-edge then shares the same probability of being removed or added, respectively. We also conducted experiments where edges are removed according to the degrees of their vertices. Results for that model are discussed in Appendix H.

##### 5.3.1 Transferability Analysis

We first test the ability of T-GAE to perform large-scale network alignment and to transfer across different datasets. To this end, we train T-GAE according to (9), where \( \mathcal{S} \) consists of the small-size networks, i.e., C. elegans, Arenas, Douban, and Cora. Then we resort to transfer learning and use the T-GAE encoder to produce node embeddings on (a) perturbed versions of C. elegans, Arenas, Douban, and Cora, and (b) larger graphs, i.e., Dblp and Coauthor CS. Note that neither the larger graphs nor the perturbed versions of the small graphs were considered during training. This is in contrast with all competing baselines, which are retrained on every testing graph pair. The average and standard deviation of the matching accuracy over 10 randomly generated perturbation samples are presented in Table 3. Our first observation is that for zero perturbation most algorithms are able to achieve a high level of matching accuracy. This is expected, since for zero perturbation network alignment is equivalent to graph isomorphism. Furthermore, there is a clear benefit to processing the NetSimile embeddings with GNNs, since they offer up to a 22% performance increase. When some perturbation is added, the conclusions are straightforward. Our proposed T-GAE markedly outperforms all the competing alternatives, shows the desired robustness to efficiently perform network alignment at the 1% perturbation level, and its performance is consistent across all datasets and perturbation levels. Regarding the ability of T-GAE to perform large-scale network alignment, the results are definitive. T-GAE enables low-complexity training with small graphs and execution in larger settings by leveraging transfer learning. In particular, it is able to achieve very high levels of matching accuracy for both Dblp and Coauthor CS, for \( p = 0\%, 0.1\% \). To the best of our knowledge, this is the first attempt that performs exact alignment on networks of the order of 20k nodes and 80k edges. Comparing T-GAE with vanilla GAE, we observe that GAE is neither robust to noise nor transferable. This highlights the benefit of T-GAE in handling the distribution shift brought by the structural dissimilarity between different graphs. We also notice that S-GWL completely fails on the Arenas graph.
This happens because Arenas has isolated nodes, and S-GWL struggles in handling such graphs. To see this, we also test S-GWL on the Arenas graph after removing all the isolated nodes: it achieves 94.6 ± 0.5% matching accuracy at 0 perturbation, 28.7 ± 43.7% matching accuracy at 1% perturbation, and 37.4 ± 45.8% accuracy at 5% perturbation. The performance of S-GWL is also unstable across different noise levels, as removing edges may result in graphs with isolated nodes. Detailed runtime comparisons between T-GAE and all competing methods are presented in Appendix E.

Table 3: Graph matching accuracy on 10 randomly perturbed samples under different levels of edge editing. The proposed T-GAE is trained on the clean C. elegans, Arenas, Douban, and Cora networks, and tested on noisy versions of them and the larger Dblp and Coauthor CS. We test 3 different message-passing mechanisms for the layers of T-GAE as annotated in the table. Accuracy above 80% is highlighted in green, 40% to 80% accuracy is in yellow, and performance below 40% is in red.

##### 5.3.2 Perturbed Training

In the previous experiment, T-GAE was trained with a family of original graphs and tested on matching perturbed versions of a larger family of graphs. T-GAE exhibited more robust performance compared to the baseline methods; however, its matching accuracy dropped significantly as the perturbation level of the testing data increased. To tackle this problem we follow a self-supervised learning approach and train T-GAE with a family of real graphs and perturbations of them. We train according to (10), which aims to produce similar node embeddings for both the original graphs and perturbed versions of them. The data augmentation process follows the previously explained perturbation models. Similar to the previous experiment, we train over the four small datasets and execute over all datasets. Note that training and testing are performed with different perturbations of the original graphs. Table 4 reports the testing results of the best T-GAE when training is performed with graph perturbations.

| Algorithm | C. elegans | Arenas | Douban | Cora | Dblp | Coauthor CS |
|-----------|-----------|--------|--------|------|-------|-------------|
| 0% perturbation | | | | | | |
| T-GAE | 89.5 ± 1.3 | 88.4 ± 0.5 | 90.3 ± 0.4 | 87.4 ± 0.4 | 85.6 ± 0.1 | 97.6 ± 0.1 |
| T-GAE with pert. | 89.7 ± 1.5 | 88.6 ± 0.6 | 90.1 ± 0.4 | 87.4 ± 0.5 | 85.7 ± 0.2 | 97.7 ± 0.1 |
| 1% perturbation | | | | | | |
| T-GAE | 84.1 ± 1.1 | 84.8 ± 0.6 | 84.9 ± 0.6 | 82.9 ± 0.5 | 79.1 ± 0.4 | 86.5 ± 0.8 |
| T-GAE with pert. | 83.4 ± 1.6 | 85.6 ± 0.5 | 85.2 ± 0.6 | 83.2 ± 0.9 | 79.7 ± 0.4 | 87.1 ± 1.0 |
| 5% perturbation | | | | | | |
| T-GAE | 50.8 ± 3.3 | 47.1 ± 5.6 | 57.9 ± 6.1 | 58.2 ± 2.0 | 40.8 ± 2.1 | 26.9 ± 5.4 |
| T-GAE with pert. | 52.3 ± 5.4 | 62.6 ± 2.2 | 58.5 ± 5.0 | 57.4 ± 2.7 | 43.6 ± 3.7 | 30.4 ± 7.4 |

Table 4: Performance comparison of T-GAE when trained with/without perturbation.

We observe that incorporating graph perturbations in the training process significantly helps at high perturbation levels and benefits the robustness of the proposed method. On the other hand, when testing at low levels of perturbation, using the original graphs or perturbations of them to train T-GAE does not lead to significant changes. In particular, at 5% testing perturbation, T-GAE achieves a 15.5% accuracy increase on the Arenas dataset, whereas at 0% and 1% testing perturbation the increase is 0.2% and 0.8%, respectively.
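The evaluation protocol of Section 5.3 can be sketched as follows, assuming NumPy and a `numpy.random.Generator`; `make_test_pair` and `matching_accuracy` are hypothetical helpers that apply exactly $p|E|$ symmetric edge edits followed by a random relabelling, and score a predicted correspondence against the ground truth.

```python
import numpy as np

def make_test_pair(S, p, rng):
    # Build S_hat = P (S + M) P^T with round(p * |E|) total edge edits;
    # every edge and non-edge is equally likely to be flipped.
    N = S.shape[0]
    iu, ju = np.triu_indices(N, k=1)
    n_edits = round(p * int(S[iu, ju].sum()))
    flip = rng.choice(len(iu), size=n_edits, replace=False)
    S_hat = S.copy()
    S_hat[iu[flip], ju[flip]] = 1 - S_hat[iu[flip], ju[flip]]
    S_hat[ju[flip], iu[flip]] = S_hat[iu[flip], ju[flip]]   # keep symmetry
    perm = rng.permutation(N)
    P = np.eye(N, dtype=S.dtype)[perm]   # row a of the new graph is old node perm[a]
    gt = np.argsort(perm)                # gt[i]: position of old node i in the new graph
    return P @ S_hat @ P.T, gt

def matching_accuracy(pred, gt):
    # pred[i] = node of the permuted graph matched to node i of the original.
    return float(np.mean(pred == gt))
```

For example, `S_hat, gt = make_test_pair(S, 0.05, np.random.default_rng(0))` produces one 5%-perturbation test pair, and training-time perturbations are drawn independently of the test-time ones.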
#### 5.4 Sub-graph Matching Experiments

In this subsection, we test the performance of T-GAE in matching subgraphs of different networks that have aligned nodes (nodes that represent the same entities in different networks). For example, in the ACM-DBLP dataset, the task is to find and match the papers that appear in both citation networks, whereas in social networks like Douban Online-Offline, we aim to identify the users that take part in both online and offline activities. To this end, we test the performance of the proposed T-GAE framework on these datasets. We compare two different approaches. In the first, T-GAE is trained according to (9) to produce embeddings for the graph pair we aim to match, i.e., the ACM-DBLP pair, or the Douban Online-Offline pair. In the second, T-GAE is trained according to (9) with C. elegans, Arenas, Douban, and Cora, and transfer learning is used to match the targeted graph pair. To assess the performance of the competing algorithms we measure the hit rate (Järvelin & Kekäläinen, 2000). The results are presented in Fig. 3. The execution time for the reported results is presented in Appendix F. We observe a significant improvement in matching accuracy with GNN-based methods compared to traditional graph or node embedding techniques. These results demonstrate the ability of GNNs to generate expressive and robust node embeddings compared to classical algorithms. In particular, our proposed framework, T-GAE, consistently achieves the best performance among all competing methods. This suggests that the training framework in (10), illustrated in Fig. 1, provides an efficient approach to network alignment. It is also notable that T-GAE works well with both types of graph convolutions (GIN, GCN). This result indicates that the proposed framework has the potential to be extended to different types of neural networks.

Limitations: Although our approach achieves state-of-the-art performance in aligning real graphs, approaching network alignment with a learning method remains a heuristic and does not offer optimality guarantees. Furthermore, in order to process large graphs we cast network alignment as a self-supervised task. As a result, in small-scale settings where the task can be tackled with computationally intensive but effective methods, our algorithm is not expected to perform the best. Finally, for large graphs the $O(|V|^2)$ complexity of T-GAE is limiting, and therefore our alternative method with complexity $O(|V|c^2 + |E|c + |V|\log(|V|))$ has to be employed.

### 6 Conclusion

We proposed T-GAE, a generalized transferable graph autoencoder that performs network alignment at a large scale. T-GAE can be trained with multiple graphs and produces robust and permutation-equivariant embeddings tailored to network alignment. The produced embeddings are related to the spectral decomposition of the graph and are at least as good at graph matching as certain spectral methods. The proposed approach leverages transfer learning and data augmentation and achieves high levels of matching accuracy for graphs with more than 15,000 nodes. Experiments with real-world benchmarks on both graph matching and subgraph matching tasks demonstrated the effectiveness and the limits of the proposed approach.

REFERENCES

Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In IJCAI, 2021.

Kurt M Anstreicher and Nathan W Brixius. Solving quadratic assignment problems using convex quadratic programming relaxations.
Optimization Methods and Software, 16(1-4):49–68, 2001.

Michele Berlingerio, Danai Koutra, Tina Eliassi-Rad, and Christos Faloutsos. Network similarity via multiple social theories. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 1439–1440, 2013.

Jiazhou Chen, Hong Peng, Guoqiang Han, Hongmin Cai, and Jiulun Cai. HOGMMNC: a higher order graph matching with multiple network constraints model for gene–drug regulatory modules identification. Bioinformatics, 35(4):602–610, 07 2018. ISSN 1367-4803. doi: 10.1093/bioinformatics/bty662. URL https://doi.org/10.1093/bioinformatics/bty662.

Xiyuan Chen, Mark Heimann, Fatemeh Vahedian, and Danai Koutra. Cone-align: Consistent network alignment with proximity-preserving node embedding. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 1985–1988, 2020.

D. Conte, P. Foggia, C. Sansone, and M. Vento. Graph matching applications in pattern recognition and image processing. In Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), volume 2, pp. II–21, 2003. doi: 10.1109/ICIP.2003.1246606.

Jian Ding, Zongming Ma, Yihong Wu, and Jiaming Xu. Efficient random graph matching via degree profiles, 2020.

Claire Donnat, Marinka Zitnik, David Hallac, and Jure Leskovec. Learning structural node embeddings via diffusion wavelets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, July 2018. doi: 10.1145/3219819.3220025. URL https://doi.org/10.1145/3219819.3220025.

Frank Emmert-Streib, Matthias Dehmer, and Yongtang Shi. Fifty years of graph matching, network alignment and network comparison. Information Sciences, 346:180–197, 2016.

Soheil Feizi, Gerald Quon, Mariana Recamonde-Mendoza, Muriel Medard, Manolis Kellis, and Ali Jadbabaie. Spectral alignment of graphs. IEEE Transactions on Network Science and Engineering, 7(3):1182–1197, 2019.

P. Gainza, F. Sverrisson, F. Monti, E. Rodolà, D. Boscaini, M. M. Bronstein, and B. E. Correia. Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning. Nature Methods, 17(2):184–192, February 2020.

Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability properties of graph neural networks. IEEE Transactions on Signal Processing, 68:5680–5695, 2020.

Ji Gao, Xiao Huang, and Jundong Li. Unsupervised graph alignment with wasserstein distance discriminator. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 426–435, 2021a.

Ji Gao, Xiao Huang, and Jundong Li. Unsupervised graph alignment with wasserstein distance discriminator. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, pp. 426–435, New York, NY, USA, 2021b. Association for Computing Machinery. ISBN 9781450383325. doi: 10.1145/3447548.3467332. URL https://doi.org/10.1145/3447548.3467332.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. PMLR, 2017.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855–864, 2016a.
UPvufoBAIs
As mentioned in the article, an observation is that global information is noisy, but some local details are robust. I hope there is a rigorous explanation and quantitative analysis here to support this hypothesis.
Source-Free and Image-Only Unsupervised Domain Adaptation for Category-Level Object Pose Estimation

Prakhar Kaushik, Aayush Mishra, Adam Kortylewski†, Alan Yuille
Johns Hopkins University, †University of Freiburg and Max-Planck-Institute for Informatics
{pkaushil,amishr24,ayuille1}@jh.edu, †akortyle@mpi-inf.mpg.de

Abstract

We consider the problem of source-free unsupervised category-level pose estimation, adapting to a target domain using only images, without any access to source-domain data or 3D annotations during adaptation. Collecting and annotating real-world 3D data and corresponding images is a laborious and expensive, yet unavoidable, process, since even 3D pose domain adaptation methods require 3D data in the target domain. We introduce 3DUDA, a method capable of adapting to a nuisance-ridden target domain without 3D or depth data. Our key insight stems from the observation that specific object subparts remain stable across out-of-domain (OOD) scenarios, enabling strategic utilization of these invariant subcomponents for effective model updates. We represent object categories as simple cuboid meshes, and harness a generative model of neural feature activations modeled at each mesh vertex, learned using differentiable rendering. We focus on individual locally robust mesh vertex features and iteratively update them based on their proximity to corresponding features in the target domain, even when the global pose is not correct. We then train our model in an EM fashion, alternating between updating the vertex features and the feature extractor. We show that, under mild assumptions, our method simulates fine-tuning on a global pseudo-labeled dataset, which converges to the target domain asymptotically. Through extensive empirical validation, including a complex extreme UDA setup which combines real nuisances, synthetic noise, and occlusion, we demonstrate the potency of our simple approach in addressing the domain shift challenge and significantly improving pose estimation accuracy.

1 Introduction

In recent years, object pose estimation has witnessed remarkable progress, revolutionizing applications ranging from robotics (Du et al., 2019; Wang et al., 2019a; Wong et al., 2017; Zeng et al., 2017) and augmented reality (Marchand et al., 2016; Marder-Eppstein, 2016; Runz et al., 2018) to human-computer interaction. Prior works on 3D and 6D pose estimation have focused primarily on instance-level (He et al., 2021; 2020; Park et al., 2019; Peng et al., 2019; Tremblay et al., 2018; Wang et al., 2019a; Xiang et al., 2018) methods. However, these methods require object-specific 3D CAD models or instance-specific depth information and are often unable to estimate object pose without instance-specific 3D priors. Category-level methods (Chen et al., 2020a; Chen & Dou, 2021; Lin et al., 2021; Tian et al., 2020; Wang et al., 2019b; 2021b) are more efficient, but often still require some 3D information, such as the ground-truth depth map (Wang et al., 2019b; Lin et al., 2021; Lee et al., 2022) or point clouds (Lee et al., 2023). Acquiring such labeled 3D data across different domains is often a formidable challenge, impeding the performance of these models when deployed in real-world scenarios. The few recent attempts (Lee et al., 2022; 2023) to perform category-level pose estimation in a semi-supervised manner also require a ground-truth depth map (Lee et al., 2023) or point cloud (Lee et al., 2022) for every instance.
In this paper, we focus on ameliorating the aforementioned drawbacks in UDA for 3D pose estimation. We design a model that is capable of adapting to a target domain in an unsupervised manner, without requiring any kind of 3D data and using only RGB images in the target domain.

Figure 1: Our method utilizes two key observations—(a) **Local Pose Ambiguity**, i.e., the inherent pose ambiguity that occurs when we can only see a part of the object. We utilize this ambiguity to update the local vertex features, which roughly correspond to object parts, even when the global pose of the object may be incorrectly estimated. (b) **Local Part Robustness** refers to the fact that certain parts (e.g., headlights in a car) are less affected in OOD data, which is verified by the (azimuth) polar histogram representing the percentage of robustly detected vertex features per image in the target domain (OOD-CV \cite{zhao2023oodcv}) using the source model (*Before Adaptation*). Even before adaptation, there are a few vertices which can be detected robustly and are therefore leveraged by our method to adapt to the target domain, as seen by the increased robust vertex ratio *After Adaptation*.

Our source model is based on the idea of generative modeling of neural network features \cite{kortylewski2020generative,wang2021unsupervised,wang2023unsupervised,ma2022unsupervised} that have been used to perform category-level 3D and 6D pose estimation. However, all of these methods are fully supervised and cannot be trivially adapted to an OOD target. We extend these neural feature-level render-and-compare methods' capabilities to source-free unsupervised learning, which can be utilized in real-world OOD scenarios. Our method, 3DUDA, is based on the observation that certain object subparts remain stable and invariant across out-of-domain (OOD) scenarios, as seen in Figure 1, thereby offering a robust foundation for model updates. We utilize this ensemble of less-modified local object subparts and their inherent pose ambiguity in the nuisance-ridden target-domain images to adapt the source model. This allows us to ignore the noise-ridden global object pose and still obtain relevant information from the more robust local sub-components of the object. We focus on individual mesh vertex features, iteratively updating them based on their proximity to the corresponding features in the target domain. Our experiments show that this simple idea allows us to perform robust pose estimation in OOD scenarios with only images from the target domain. In summary, we make several important contributions in this paper.

1. We introduce 3DUDA, which is (to our knowledge) the first method to perform **image-only, source-free unsupervised domain adaptation for category-level 3D pose estimation**.
2. 3DUDA utilizes the local pose ambiguity of object subparts and their relative robustness to adapt to nuisance-ridden domains without access to any 3D or synthetic data. We present a theoretical analysis of this insight, which motivates our method.
3. We evaluate our model on real-world nuisances like shape, texture, occlusion, etc., as well as image corruptions, and show that our model is able to adapt robustly in such scenarios. Our method performs exceedingly well in **extreme UDA** setups where multiple nuisance factors such as real-world nuisances, synthetic noise, and partial occlusion are combined.

## Related Works

**Category-level 3D pose estimation** is the task of estimating the 3D pose of unseen instances from a known category.
Current pose estimation approaches can be divided into keypoint approaches, which utilize semantic keypoints on 3D objects to predict 3D pose \cite{pavlakos2017spine,zhou2018tetrasphere}, and render-and-compare methods, which predict pose by fitting a 3D rigid transformation at the image level \cite{chen2020posecnn,wang2019posecnn} or feature level \cite{wang2021unsupervised}.

**Feature-level render-and-compare** methods predict 3D pose by minimizing the reconstruction error between predicted and rendered (e.g., from a 3D mesh and a corresponding pose) object representations. Such optimization often helps in avoiding the complex loss landscapes that arise from doing render-and-compare at the image pixel level.

Figure 2: Overview of Our Method (3DUDA). (a) We extract neural features from the source model CNN backbone $f_i = \phi_w(X_T)$ and render feature maps from the source mesh model ($\mathcal{M}_S$) using vertex features $C_r$. The pose estimate is optimized using render-and-compare. (b) For this incorrectly estimated global pose, we measure the similarity of every individual visible vertex feature with the corresponding image feature vector in $f_i$, independently (Equation 3), and update individual vertex features using average feature vector values for a batch of images (Equation 4). (c) The mesh model is then updated using these changed vertices and the backbone is optimized using the optimized neural mesh.

(Wang et al., 2019b) predict the pose of the object by solving for a rigid transformation between the 3D model $M$ and the NOCS maps with the Umeyama algorithm (Pavlakos et al., 2017b). (Iwase et al., 2021) learned features using differentiable Levenberg-Marquardt optimization, whereas (Wang et al., 2021a; Ma et al., 2022) learned contrastive features for the 3D model $M$ and utilized a similar render-and-compare setup.

**Unsupervised Domain Adaptation for 3D pose estimation.** Unfortunately, all the methods mentioned above are fully supervised. There are some semi-supervised methods like (Fu & Wang, 2022; Peng et al., 2022), but they often require labeled target-domain images and 3D data to work. Even methods like (Lee et al., 2022; 2023) require instance depth data, point clouds, and segmentation labels during test-time inference. Other methods like (Yang et al., 2023) create synthetic data and mix it with some amount of annotated real data in order to perform synthetic-to-real semi-supervised domain adaptation. To the best of our knowledge, there is no previous work on unsupervised 3D pose estimation which is source-free and requires only images for adaptation. Additionally, there is also a dearth of work in 3D pose estimation which performs UDA for real-world nuisances like changes in texture, weather, etc., and in the presence of problems like occlusion.

3 METHODOLOGY

An overview of our unsupervised domain adaptation method, 3DUDA, can be found in Figure 2. After defining the notation and setup, we review our feature-level neural render-and-compare source model in Section 3.1 before describing our method in detail in Section 3.2.

Notation: For each object category $y$, we define three sets of parameters: a CNN backbone $\Phi_w$, a neural mesh $\mathcal{M}$, and a clutter model $\mathcal{B}$. We denote the neural feature representation of an input image $X$ as $\Phi_w(X) = F^a \in \mathbb{R}^{H \times W \times d}$, where $F^a$ is the output of layer $a$ of a deep convolutional neural network backbone $\Phi_w$, and $d$ is the number of channels in layer $a$.
$f^a_i \in \mathbb{R}^d$ is a feature vector in $F^a$ at position $i$ on the 2D lattice $P$ of the feature map. We drop the superscript $a$ in subsequent sections for notational simplicity. We represent our supervised source-domain model with the subscript $S$ and our unsupervised target-domain model with the subscript $T$. For more details, see A.1.

3.1 Source Model: Pose-Dependent Feature-Level Render and Compare

Our source model is similar to previous work like Wang et al. (2021a; 2023); Ma et al. (2022) on category-level pose estimation using neural feature-level render and compare. These methods themselves are 3D extensions of feature generative models such as Kortylewski et al. (2020). Our source model defines a probabilistic generative model of normalized real-valued feature activations $F$ conditioned on a 3D neural mesh representation $\mathcal{M}$. The neural mesh model aims to capture the 3D information of the foreground objects. For each object category $y$, the source model defines a neural mesh $\mathcal{M}_S$ as $\{\mathcal{V}, \mathcal{C}\}$, where $\mathcal{V} = \{V_r \in \mathbb{R}^3\}_{r=1}^{R}$ is the set of vertices of the mesh and $\mathcal{C} = \{C_r \in \mathbb{R}^d\}_{r=1}^{R}$ is the set of learnable features, i.e., neural features. $r$ denotes the index of the vertices and $R$ is the total number of vertices. We also define a clutter model $\mathcal{B} = \{\beta_n\}_{n=1}^{N}$ to describe the backgrounds, where $N$ is a prefixed hyperparameter. For a given object pose or camera viewpoint $g$, we can render the neural mesh model $\mathcal{M}_S$ into a feature map using (differentiable) rasterization (Kato et al., 2020). We can compute the object likelihood of a target feature map $F \in \mathbb{R}^{H \times W \times d}$ as

$$p(F|\mathcal{M}, g, \mathcal{B}) = \prod_{i \in \mathcal{FG}} p(f_i|\mathcal{M}, g) \prod_{i' \in \mathcal{BG}} p(f_{i'}|\mathcal{B}),$$ (1)

where $\mathcal{FG}$ and $\mathcal{BG}$ denote the foreground and background pixels, respectively. $\mathcal{FG}$ is the set of all positions in the 2D lattice $P$ covered by the mesh $\mathcal{M}$, and $\mathcal{BG}$ are the positions that are not. We define $p(f_i|\mathcal{M}(V_r, C_r), g) = Z[\kappa_r] \exp\{\kappa_r\, f_i^{\top} C_r\}$ as a von Mises–Fisher (vMF) distribution with mean $C_r$ and concentration parameter $\kappa_r$. For more details, please refer to the Appendix A.7.1.

### 3.2 3DUDA: UNSUPERVISED LEARNING OF 3D POSE USING NEURAL FEATURE SYNTHESIS AND SELECTIVE VERTEX FEATURE UPDATE

Given only the source model $S$ (and no source data $X_S$) and some unannotated target-domain RGB images, we adapt $S$ to perform well on the target domain. Figure 3 (NeMo column) shows examples of the performance of the source model in an OOD scenario. As expected, the estimated pose has diverged significantly from the ground-truth pose in the target domain, indicating that the feature generative model parameterized by the neural mesh model $\mathcal{M}_S$ is no longer an adequate representation of the object as a whole. However, a crucial observation, as seen in Figure 1, is that although the neural mesh model may not be a good global model for an object $y$, there is still a subset of robust vertices that correspond to parts of the object that have undergone fewer changes in the new domain. Intuitively, this can be understood as some parts of an object changing less across domains.
The number of such vertices and the threshold within which their shift is contained are functions of the domain nuisance variables and the object itself. This property is regularly leveraged by humans, who adapt their previous knowledge to understand new, unseen objects. For example, a car that undergoes changes involving shape, context, and texture will still have parts such as wheels, headlights, and windshield that change little or not at all across these domain shifts. We leverage this observation to adapt the source model in an unsupervised manner.

### NEURAL FEATURE SYNTHESIS WITH MULTI-POSE INITIALIZATION

A fundamental benefit of neural mesh models like our source model is that they are generative at the level of neural feature activations. This makes the overall reconstruction loss very smooth compared to related works that are generative at the pixel level (Wang et al., 2021a; 2023). Therefore, the source model can be optimized w.r.t. the pose parameters with standard stochastic gradient descent, and the loss has one clear global optimum. This is no longer true in an OOD scenario.

#### Neural Feature Rendering

For inference with the source model, we can infer the 3D pose $g$ of the object $y$ by minimizing the negative log-likelihood of the model. Specifically, we first extract the neural features of the image, $F = \Phi_w(X)$, from the source CNN backbone. We define an initial pose $g_{init}$ using random initialization or by pre-rendering and comparing some pose samples. Using the initial pose, we render the neural mesh $\mathcal{M}$ into a feature map $F' \in \mathbb{R}^{H \times W \times d}$. The projected feature map is divided into $\mathcal{FG}$ and $\mathcal{BG}$, depending on which pixels in the feature map are covered by the projected mesh features. We compare the rendered feature map and the image feature map position-wise. Given that the feature vectors are normalized, and considering a constant $\kappa$, the loss can be refactored as a simple reconstruction loss (the dot products are normalized accordingly):

$$L_{rec} = 1 - \ln p(F|\mathcal{M}, g, \mathcal{B}) = 1 - \Big(\sum_{i \in \mathcal{FG}} f_i \cdot f'_i + \sum_{j \in \mathcal{BG}} f_j \cdot \beta\Big)$$ (2)

The pose $g_{init}$ is optimized by minimizing Equation 2 using stochastic gradient descent.

Multi-Pose Initialization: Figure 6 shows pose estimates produced by the source model's render-and-compare optimization from different initial 3D poses in the target domain. This happens because the optimization gets stuck in different local optima of the OOD loss landscape. Therefore, instead of random pose initialization, we pre-render a uniform sampling of poses from the neural mesh model and compare them with the image features. 1–5 initial poses are chosen depending on how similar they are to the feature map and how far apart they are from each other. We then optimize these initial poses using Equation 2, and may end up with multiple final rendered feature maps and estimated poses, as shown in Figure 6. Even though the estimated object poses may be incorrect, we can still utilize these rendered maps for our Selective Vertex Feature Adaptation.
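A minimal PyTorch sketch of this multi-pose render-and-compare inference is given below. The `render` callable stands in for the differentiable rasterizer, `init_poses` for the pre-selected initializations, and the single clutter vector `beta` is a simplification of the full clutter model; the step count and learning rate are illustrative.

```python
import torch

def rec_loss(F_img, F_rend, fg_mask, beta):
    # Eq. (2): unit-normalized feature correlations over foreground pixels
    # plus clutter correlations over background pixels.
    fg = (F_img * F_rend).sum(-1)[fg_mask].sum()
    bg = (F_img @ beta)[~fg_mask].sum()
    return 1.0 - (fg + bg)

def estimate_pose(F_img, mesh, beta, render, init_poses, steps=50, lr=5e-2):
    # Optimize each pre-selected initial pose independently with SGD and
    # keep the optimum with the lowest reconstruction loss.
    best_g, best_loss = None, float("inf")
    for g0 in init_poses:
        g = g0.clone().requires_grad_(True)
        opt = torch.optim.SGD([g], lr=lr)
        for _ in range(steps):
            F_rend, fg_mask = render(mesh, g)  # differentiable rasterizer (stand-in)
            loss = rec_loss(F_img, F_rend, fg_mask, beta)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_g, best_loss = g.detach(), loss.item()
    return best_g
```

Keeping several optimized candidates, rather than only the best one, is what feeds the selective vertex adaptation described next.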
3.2.1 PROGRESSIVE SELECTIVE LOCAL VERTEX FEATURE ADAPTATION

Local Vertex-Feature Similarity: We define the similarity between an individual rendered neural mesh vertex feature $C_r$ and its corresponding CNN feature $f_i$ (denoted $f_{i \rightarrow r}$) for a pose $g$, given a renderer $\mathcal{R}$, as a function of the parametric vMF score/likelihood (Du et al., 2022):

$$L_{sim}(f_{i \rightarrow r}, C_r) = Z[\kappa_r] \exp(\kappa_r f_{i \rightarrow r}^T C_r), \quad \forall i \in \mathcal{FG}, \quad C_r = \mathcal{R}(\mathcal{M}, g)$$ (3)

Similar similarity measures have been shown to be robust OOD detectors in earlier works like (Du et al., 2022). We define a rejection criterion based on a threshold $\delta_r$: individual feature vectors from multiple data samples that correspond to a specific vertex feature $C_r$ are considered OOD if $L_{sim}(f_{i \rightarrow r}, C_r) < \delta_r$, and are not used to update that feature. We choose the threshold $\delta_r$ such that the majority (90–95%) of source-domain features exceed it.

Selective Vertex Adaptation (SVA) from Rendered Neural Features: For a batch (of size $n$) of target-domain images $\mathcal{X}_{T,i}$, we obtain their neural features from the fine-tuned CNN backbone $\phi_w$, and we render neural feature maps $F_i$ from our mesh model $\mathcal{M}_S$ using differentiable render-and-compare from the initial pose estimates $g_{init}$. We then spatially match the similarity of every rendered vertex feature with its corresponding image feature independently. For every vertex feature $C_r$, we average the corresponding image features $f_{i,a}$ at position $a$ in the 2D lattice whose similarity (Equation 3) exceeds the threshold $\delta_r$, and update the vertex feature as follows:

$$C_{r,t+1} \leftarrow \alpha C_{r,t} + (1 - \alpha) \frac{1}{n} \sum_{i=1}^{n} f_{i,a}, \quad \forall f_{i,a} \ \text{s.t.} \ L_{sim}(C_r, f_{i,a}) > \delta_r$$ (4)

where $C_{r,t}$ is the current vertex feature at timestep $t$ and $C_{r,t+1}$ is the updated vertex feature. $\alpha$ is a moving-average hyperparameter. This can be done for the entire target-domain adaptation data or in a batched manner. Subsequently, the estimated pose $g'$ is recalculated with the updated neural mesh model, and the CNN backbone is updated iteratively by gradient descent with the following loss:

$$L = \sum_{r \in R_v} \log \frac{Z[\kappa_r] e^{\kappa_r f_{i \rightarrow r}^T C_r}}{\sum_{l \in R_v \cup N_r} Z[\kappa_l] e^{\kappa_l f_{i \rightarrow l}^T C_l} + \sum_{n=1}^{N} Z[\kappa_n] e^{\kappa_n f_{i \rightarrow n}^T \beta_n}},$$ (5)

where $R_v$ denotes all visible vertices for the input image $\mathcal{X}$ and $N_r$ denotes the vertices near $r$. We iteratively update subsets of vertex features and fine-tune the CNN backbone until convergence, in an EM-like manner. In practice, to avoid false positives and encourage better convergence, we establish a few conditions on our selective vertex feature adaptation process. To save computational overhead, we can fix $\kappa$ for the loss calculation. We fix a hyperparameter $\psi_n$ that controls the minimum number of local vertices detected as similar (5–10% of visible vertices). We also drop samples with low global similarity values during the backbone update. $\kappa_r$ can also be recalculated at each timestep $t$ using the updated $C_{r,t}$ for a more robust measurement of similarity.

Unsupervised 3D pose estimation using SVA: Figure 2 gives an intuition for our method, 3DUDA; a minimal sketch of the per-vertex update is given after this paragraph.
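The following sketch covers Equations (3) and (4) for a single vertex, assuming unit-normalized features and treating the vMF log-normalizer as a precomputed constant; re-normalizing the updated vertex feature is our added assumption to keep it on the unit sphere.

```python
import torch
import torch.nn.functional as Fn

def vmf_similarity(feats, C_r, kappa, log_Z):
    # Eq. (3): vMF likelihood of unit-normalized image features under the
    # vertex feature C_r (log_Z stands in for the normalizer Z[kappa]).
    return torch.exp(log_Z + kappa * (feats * C_r).sum(-1))

def selective_vertex_update(C_r, feats, kappa, log_Z, delta_r, alpha=0.9):
    # Eq. (4): moving-average update of one vertex feature from the batch of
    # corresponding image features `feats` (n, d); OOD features are rejected.
    keep = vmf_similarity(feats, C_r, kappa, log_Z) > delta_r
    if not keep.any():
        return C_r                        # no in-distribution evidence in this batch
    C_new = alpha * C_r + (1 - alpha) * feats[keep].mean(0)
    return Fn.normalize(C_new, dim=-1)    # keep the vertex feature unit-norm (assumption)
```

Because each vertex is updated independently, occluded or heavily corrupted vertices simply contribute no evidence and are left unchanged in that batch.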
3DUDA proceeds as follows: (1) extract neural features from the source model CNN backbone, $f_i = \phi_w(\mathcal{X}_T)$; (2) pre-render feature maps from the source mesh ($\mathcal{M}_S$), using the vertex features $C_r$, and compare them with the image features; (3) choose the top-3 most similar rendered feature maps and calculate the optimized pose using gradient descent on the reconstruction loss w.r.t. $f_i$ (Equation 2); (4) for an optimized pose, measure the similarity of every individual visible vertex feature with the corresponding vector in $f_i$ independently (Equation 3); (5) update individual vertex features using average feature vector values for a batch of images (Equation 4); (6) fine-tune the CNN backbone using the rendered feature maps obtained from steps 1–3 with the updated vertex features; (7) continue steps 5 and 6 iteratively until convergence; (8) post-adaptation, evaluate the 3D object pose of the target test data using steps 1–3.

3.3 Theoretical Results

Prior UDA works attempt to adapt in the target domain using a global pseudo-labeling setup. In global pseudo-labeling, a subset of images from the target domain is identified for which a majority (typically $> R\Omega$ for some $\Omega \in [0, 1]$; we use $\Omega = 1$ for the analysis) of the visible vertex features satisfies \( L_{\text{sim}}(f_{i \rightarrow r}, C_r) > \delta_r \). These images are used to fine-tune the model. However, depending on how different the target domain is from the source domain, a very small fraction of the target-domain images (sometimes even none) usually satisfies this property in real datasets. This leads to negligible domain adaptation. However, upon careful inspection, we observed a robust subset of vertices in the neural meshes produced by the source-trained model. Since the features corresponding to these robust vertices are assumed independent, they can be used to fine-tune the model. In this section, we present conditions under which SVA simulates fine-tuning on a global pseudo-labeled dataset. This also reveals that the standard global pseudo-labeling setup is a special case of this formulation.

Let the distribution of a rendered vertex feature \( C_r \) be denoted by \( P^r \). Let the joint distributions of these rendered vertex features in the source and target domain be denoted by \( P_S = \prod_r P^r_S \) and \( P_T = \prod_r P^r_T \), respectively. They are written as products of the individual marginal distributions because of the standard independence assumption between the vertices. Note that the distribution \( P_S \) is an approximation of the true underlying distribution \( P^*_S \). This approximate distribution is achieved by training the source model \( S \) on a finite i.i.d. sample \( X_S \) from \( P^*_S \) and the corresponding labels (ground-truth poses) \( Y_S \). Similarly, the i.i.d. sample \( X_T \) elicits an approximate distribution \( P_T \) of the true \( P^*_T \). Adapting for \( P_T \) is challenging because we do not have the corresponding \( Y_T \).

**Definition 3.1 (Vertex K-partition)** A vertex K-partition is defined as a partition of the set of vertices (indexed by \( r \in \{1, 2, ..., R\} \)) into K non-empty mutually disjoint subsets (indexed by \( k \in \{1, 2, ..., K\} \)). Let the set of vertices in each partitioned subset be denoted by \( I_k \). A given vertex K-partition splits the joint distribution \( P_S \) into K independent joint distributions (denoted by \( P^{I_k}_S = \prod_{r \in I_k} P^r_S \)) such that \( P_S = \prod_k P^{I_k}_S \).
The same extends to the corresponding target-domain distributions.

**Definition 3.2 (kδ-subset)** For a given sample \( X \) and vertex K-partition, a kδ-subset is defined as \( X^{k\delta} \subseteq X \) such that \( L_{\text{sim}}(f_{i \rightarrow r}, C_r) > \delta_r \ \forall r \in I_k \). The corresponding approximation of \( P^{I_k} \) under \( X^{k\delta} \) is denoted by \( P^{I_k\delta} \).

**Assumption 3.3 (Piece-wise Support Overlap)** There exists a vertex K-partition such that the kδ-subset of the target sample \( X_T \) satisfies

\[ |X^{k\delta}_T| \neq 0 \ \forall k \in \{1, 2, ..., K\}, \]

and as \( |X^{k\delta}_T| \to \infty \), \( \prod_k P^{I_k\delta}_T \to P^*_T \).

This assumption requires the joint distributions of the partitioned vertex subsets under \( X^{k\delta} \) to asymptotically approximate the corresponding true distributions. Intuitively, this translates to having enough support in the target domain such that the samples satisfying the similarity constraint (Equation 3) in each kδ-subset approximate the true target distribution of that vertex partition set.

**Theorem 3.4** A target domain \( X_T \) satisfying Assumption 3.3 elicits another target domain \( X_T^e \) such that each sample in \( X_T^e \) satisfies the global pseudo-labeling constraint \( L_{\text{sim}}(f_{i \rightarrow r}, C_r) > \delta_r \ \forall r \in \{1, 2, ..., R\} \). Asymptotically with the size of the domain, \( X_T^e \to X_T \).

The proof of this theorem is by construction of the set \( X_T^e \) for any \( X_T \), and has been deferred to Appendix A.3. It is easy to see that the global pseudo-labeling setup is a special case of this formulation, namely when the vertex K-partition in Assumption 3.3 is the trivial partition with \( K = 1 \). It is also noteworthy that in the asymptotic case, even the global pseudo-labeling setup would yield the same adaptability as SVA; but in practice, SVA yields much more data for adaptation than the former (see Figure 5). Although the elicited target domain \( X_T^e \) does not represent the true target distribution precisely, it does yield more adaptability than global pseudo-labeling for a finite sample and, under Assumption 3.3, asymptotically adapts to the true distribution (Figure 4).

Figure 3: Qualitative results of 3DUDA compared to the ground truth and NeMo (Wang et al., 2021a). 3DUDA adapts to real-world OOD target domains consisting of nuisances like weather and occlusion in an unsupervised manner and produces robust 3D object pose estimates. The CAD objects are for representation only and are taken from ShapeNet (Chang et al., 2015).

4 EXPERIMENTS

Data: We evaluate our model on OOD-CV (Zhao et al., 2023) and on the PASCAL3D+ dataset (Xiang et al., 2014) corrupted with Imagenet-C (Hendrycks & Dietterich, 2019). OOD-CV is a benchmark introduced to evaluate the robustness of models in OOD scenarios. It includes OOD examples of 10 categories that cover unseen variations of nuisances including pose, shape, texture, context, and weather. The source model is trained on IID samples, while the model is adapted and evaluated on OOD data for individual and combined nuisances. For Corrupted-PASCAL3D+, we corrupt the adaptation and evaluation data with synthetic corruptions like shot noise, elastic deformation, fog, etc., from Imagenet-C. The PASCAL3D+ dataset contains objects from 12 man-made categories, and each object is annotated with 3D pose, 2D centroid, and object distance. During adaptation and inference, only RGB images are provided.
For a harder setup of real-world nuisances combined with partial occlusion, we test our algorithm on the Occluded-OOD-CV dataset, which has been created in a manner similar to Wang et al. (2020) with two levels of object occlusion (L1: 20–40%, L2: 40–60%). We also evaluate our model on two extreme UDA setups: (1) Real + Synthetic corruptions, where we add Imagenet-C corruptions to the OOD-CV dataset and expect models to adapt from clean IID source data to these data in an unsupervised manner; and (2) Real + Synthetic Corruption + Partial Occlusion. These are very difficult scenarios for UDA which, to our knowledge, have not been attempted before in semi-supervised or unsupervised 3D pose estimation.

Metrics: 3D pose estimation aims to recover the 3D rotation, parameterized by the azimuth, elevation, and in-plane rotation of the viewing camera. We follow previous works like Zhou et al. (2018); Wang et al. (2021a); Ma et al. (2022) and evaluate the error between the predicted rotation matrix and the ground-truth rotation matrix:

$$\Delta(R_{pred}, R_{gt}) = \frac{\|\operatorname{logm}(R_{pred} R_{gt}^{-1})\|_F}{\sqrt{2}}.$$

We report the accuracy of the pose estimation under the common thresholds $\frac{\pi}{6}$ and $\frac{\pi}{18}$, along with the median error.

Implementation Details: An ImageNet-pretrained ResNet50 is used as the feature extractor for our source model. The cuboid mesh is defined for each category and ensures that the majority of the object area is covered by it. The source model is trained for 800 epochs with a batch size of 32 using an Adam optimizer in a fully supervised manner. For every adaptation step, we require a minimum batch size of 32 images for the selective vertex and feature extractor updates. We can also set the batch size for a step adaptively by requiring enough samples such that $\approx 80\%$ of the vertices can be updated. For a fixed $\kappa$, our local vertex feature similarity threshold is 0.8. We train our model by switching between (in an EM fashion) selective vertex updates and feature extractor training for about 100 epochs. Our adaptation model is implemented in PyTorch (with PyTorch3D for differentiable rasterization) and takes around 3 hours to train on 2 A5000 GPUs.

**Baseline Models** We evaluate NeMo (Wang et al., 2021a), DMNT (Wang et al., 2023), SyntheticP3D (Yang et al., 2023) and a standard ResNet50. Wang et al. (2021a; 2023); Yang et al. (2023) are pose estimation methods which utilize a feature-level render-and-compare methodology similar to our source model and have been shown to be robust and efficient. Res50-General is a ResNet50 classifier that formulates the pose estimation task for all categories as a single classification task. Note that all models are evaluated in the image-only unsupervised learning setup. Although annotated target-domain images are not provided to the models, data augmentation or synthetic data is allowed for our baseline models.

### 4.1 Unsupervised 3D Pose Estimation: Results and Analysis

**OOD-CV** Table 1 shows the unsupervised 3D pose estimation results on the OOD-CV (Zhao et al., 2023) dataset. This is our primary result, which shows our model's efficacy in real-world scenarios. A qualitative comparison with Wang et al. (2021a) can be seen in Figure 3. All compared methods suffer equally in the real OOD scenarios. Surprisingly, the general ResNet50 model performs quite well relative to more complex models like NeMo and DMNT, suggesting that the additional category data is helpful in OOD scenarios.
However, our method clearly outperforms all the models and is able to significantly bridge the domain gap.

Table 1: Unsupervised 3D pose estimation accuracy on OOD-CV under individual and combined nuisances (top: $\pi/6$ accuracy; bottom: $\pi/18$ accuracy).

| Nuisance | Combined | shape | pose | texture | context | weather |
|-------------------|----------|-------|------|---------|---------|---------|
| Res50-General | 51.8 | 50.5 | 34.5 | 61.6 | 57.8 | 60.0 |
| NeMo (Wang et al., 2021a) | 48.1 | 49.6 | 35.5 | 57.5 | 50.3 | 52.3 |
| MaskRCNN (He et al., 2018) | 39.4 | 40.3 | 18.6 | 53.3 | 43.6 | 47.7 |
| DMNT (Wang et al., 2023) | 50.0 | 51.5 | 38.0 | 56.8 | 52.4 | 54.5 |
| P3D (Yang et al., 2023) | 48.2 | 52.3 | 45.8 | 51.0 | 54.6 | 44.5 |
| **Ours** | **94.0** | **93.7** | **95.1** | **97.0** | **95.5** | **83.1** |

| Nuisance | Combined | shape | pose | texture | context | weather |
|-------------------|----------|-------|------|---------|---------|---------|
| Res50-General | 18.1 | 15.7 | 12.6 | 22.3 | 15.5 | 23.4 |
| NeMo (Wang et al., 2021a) | 21.7 | 19.3 | 7.1 | 33.6 | 21.5 | 30.3 |
| MaskRCNN (He et al., 2018) | 15.3 | 15.6 | 1.6 | 24.3 | 13.8 | 22.9 |
| DMNT (Wang et al., 2023) | 23.6 | 20.7 | 12.6 | 32.6 | 16.6 | 33.5 |
| P3D (Yang et al., 2023) | 14.8 | 16.1 | 12.3 | 16.6 | 12.1 | 16.3 |
| **Ours** | **87.8** | **82.1** | **69.5** | **92.6** | **89.3** | **90.7** |

**Pascal3D+→Corrupted-PASCAL3D+** Table 2 shows the unsupervised 3D pose estimation results in this setup for multiple corruptions. As expected, the drop in model performance is largely dependent on the type of corruption and its severity. Our method still performs significantly better when dealing with synthetic corruptions.

**Occluded-OOD-CV** Table 3 (OccL1/L2) shows the unsupervised 3D pose estimation results on the Occluded-OOD-CV dataset at two levels of partial occlusion. This is a harder setup in which real nuisances are combined with occlusion. Our method is able to perform exceedingly well even in such a complex target domain, with up to a 67% improvement in accuracy. This is because our selective vertex adaptation focuses independently on adapting individual neural vertices (and the model) and is able to ignore occluded vertices during adaptation. Notably, Wang et al. (2021a) has been shown to be robust to occlusion but suffers when occlusion is combined with real-world nuisances.

Table 2: Unsupervised 3D pose estimation results for Pascal3d+ → Corrupted-PASCAL3D+ (Metrics: π/6 Accuracy (π/6), π/18 Accuracy (π/18), Median Error (Er)), compared to NeMo (Wang et al., 2021a).

| | Gaussian Noise | Shot Noise | Impulse Noise | Defocus Blur |
|------------------|---------------|------------|---------------|--------------|
| NeMo | 43.7 | 21.3 | 42.1 | 50.6 |
| Ours | 84.3 | 59.1 | 9.8 | 85.9 |

| | Glass Blur | Motion Blur | Zoom Blur | Snow |
|------------------|---------------|------------|---------------|--------------|
| NeMo | 56.7 | 27.0 | 33.8 | 69.7 |
| Ours | 86.7 | 62.4 | 8.6 | 88.0 |

| | Frost | Fog | Contrast | Elastic Transform |
|------------------|---------------|------------|---------------|-------------------|
| NeMo | 73.3 | 44.1 | 16.4 | 85.5 |
| Ours | 86.3 | 62.5 | 8.6 | 88.7 |

| | Pixelate | Speckle Noise | Gaussian Blur | Spatter |
|------------------|---------------|---------------|---------------|-----------------|
| NeMo | 77.5 | 53.0 | 13.0 | 67.9 |
| Ours | 88.7 | 65.4 | 7.8 | 87.7 |

**Extreme UDA: Real+Synthetic Corruption** Table 3 (OOD+SN/GB) shows the results in the extreme UDA setup combining real and synthetic corruptions from the OOD-CV (Combined) and Imagenet-C datasets. We again see significant improvements compared to Wang et al.
Table 3: Unsupervised 3D pose estimation results for the occlusion and extreme UDA setups. (a) OccL1/L2: Real Nuisance (OOD-CV (Combined)) + Occlusion (Level 1/Level 2); (b) OOD+SN/GB: Real Nuisance (OOD-CV) + Synthetic Noise (Speckle Noise/Glass Blur); (c) L1/L2+Spec: Real Nuisance (OOD-CV) + Occlusion (L1/L2) + Synthetic Noise (Speckle Noise)

| | OccL1 | OccL2 | OOD+SN | OOD+GB | L1+Spec | L2+Spec |
|------------------|-------|-------|--------|--------|---------|---------|
| NeMo | 30.6 | 10.2 | 6.6 | 32.7 | 10.2 | 29.6 |
| Ours | 84.6 | 77.1 | 78.7 | 70.4 | 80.5 | 63.0 |

**Extreme UDA: Real+Synthetic Corruption+Occlusion** Table 3 (L1/L2+Spec) shows the results on the extreme UDA setup combining real and synthetic corruptions along with partial occlusion from the Occluded-OOD-CV and Imagenet-C datasets. This is an extremely challenging setup in which three different kinds of nuisances/domain differences are combined, as is reflected in NeMo's results. Our model is still able to adapt to such a target domain, showing our method's efficacy. Further experimental and ablation analysis is deferred to the Appendix due to limited space.

5 Conclusion, Limitations and Future Work

In this work, we attempt to solve the previously unaddressed problem of unsupervised, image-only, source-free domain adaptation for 3D pose estimation. We focus our efforts on real-world data with real-world nuisances like weather, shape, texture, etc., and show that our method achieves significant success. Our method has limitations, as it relies on the source model and cannot be trivially extended to articulated objects. It requires multiple pre-rendered samples for pose estimation. Like many other pose estimation methods, inference requires optimization using the render-and-compare methodology for optimal pose estimation. In the future, we want to extend our method to unsupervised 6D pose estimation and to unseen-object setups. Furthermore, the importance of the concentration parameter \( \kappa \) needs more research, as we believe that it is a crucial uncertainty marker that may be relevant in more difficult domain transfer settings.

ACKNOWLEDGEMENTS

This research has been supported by Army Research Laboratory award W911NF2320008 and ONR with N00014-21-1-2812. Adam Kortylewski acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under Grant No. 468670075.

REFERENCES

Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository, 2015.

Dengsheng Chen, Jun Li, Zheng Wang, and Kai Xu. Learning canonical shape space for category-level 6d object pose and size estimation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2020a. doi: 10.1109/cvpr42600.2020.01199. URL http://dx.doi.org/10.1109/cvpr42600.2020.01199.

Kai Chen and Qi Dou. Sgpa: Structure-guided prior adaptation for category-level 6d object pose estimation. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2753–2762, 2021. URL https://api.semanticscholar.org/CorpusID:244129110.

Xu Chen, Zijian Dong, Jie Song, Andreas Geiger, and Otmar Hilliges. Category level object pose estimation via neural analysis-by-synthesis. Lecture Notes in Computer Science, pp. 139–156, 2020b. ISSN 1611-3349. doi: 10.1007/978-3-030-58574-7_9.
URL http://dx.doi.org/10.1007/978-3-030-58574-7_9. Guoguang Du, Kai Wang, Shiguo Lian, and Kaiyong Zhao. Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: A review, 2019. Xuefeng Du, Gabriel Gozum, Yifei Ming, and Yixuan Li. Siren: Shaping representations for detecting out-of-distribution objects. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 20434–20449. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/804dbf8d3b8eee1ef875c6857efc64eb-Paper-Conference.pdf. Yang Fu and Xiaolong Wang. Category-level 6d object pose estimation in the wild: A semi-supervised learning approach and a new dataset, 2022. Walter Goodwin, Sagar Vaze, Ioannis Havoutis, and Ingmar Posner. Zero-shot category-level object pose estimation, 2022. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn, 2018. Yisheng He, Wei Sun, Haibin Huang, Jianran Liu, Haoqiang Fan, and Jian Sun. Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2020. doi: 10.1109/cvpr42600.2020.01165. URL http://dx.doi.org/10.1109/CVPR42600.2020.01165. Yisheng He, Haibin Huang, Haoqiang Fan, Qifeng Chen, and Jian Sun. Ffb6d: A full flow bidirectional fusion network for 6d pose estimation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2021. doi: 10.1109/cvpr46437.2021.00302. URL http://dx.doi.org/10.1109/CVPR46437.2021.00302. Yisheng He, Haoqiang Fan, Haibin Huang, Qifeng Chen, and Jian Sun. Towards self-supervised category-level object pose and size estimation, 2022. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations, 2019.
uZfjFyPAvn
If the authors think shearlets may have any potential here, providing a discussion might be useful. Most of the approximation error for the pictures in Figure 6 appears to be at locations where the colors change in a small neighborhood. Could a shear matrix be potentially helpful in reducing the error because of its ability to extract anisotropic features?
Implicit Neural Representations and the Algebra of Complex Wavelets T. Mitchell Roddenberry, Vishwanath Saragadam∗, Maarten V. de Hoop, Richard G. Baraniuk Rice University Houston, TX, USA {mitch,mvd2,richb}@rice.edu, vishwanath.saragadam@ucr.edu Abstract Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains. By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively couple spatial and spectral features of the represented signal in a way that is not obvious in the usual discrete representation. Although INRs using sinusoidal activation functions have been studied in terms of Fourier theory, recent works have shown the advantage of using wavelets instead of sinusoids as activation functions, due to their ability to simultaneously localize in both frequency and space. In this work, we approach such INRs and demonstrate how they resolve high-frequency features of signals from coarse approximations performed in the first layer of the MLP. This leads to multiple prescriptions for the design of INR architectures, including the use of progressive wavelets, decoupling of low and high-pass approximations, and initialization schemes based on the singularities of the target signal. 1 Introduction Implicit neural representations (INRs) are a powerful set of neural architectures for representing and processing signals on low-dimensional spaces. By learning a continuous interpolant of a set of sampled points, INRs have enabled and advanced state-of-the-art methods in signal processing (Xu et al., 2022) and computer vision (Mildenhall et al., 2020). Typical INRs are specially designed multilayer perceptrons (MLPs), where the activation functions are chosen in such a way to yield a desirable signal representation. Although INRs often can be easily understood at the first layer due to the simplicity of plotting the function associated to each neuron based on its weights and biases, the behavior of the network in the second layer and beyond is more opaque, apart from some theoretical developments in the particular case of a sinusoidal first layer (Yüce et al., 2022). This work develops a broader theoretical understanding of INR architectures with a wider class of activation functions, followed by practical prescriptions rooted in time-frequency analysis. In particular, we 1. Characterize the function class of INRs in terms of Fourier convolutions of the neurons in the first layer (Lemma 1) 2. Demonstrate how INRs that use complex wavelet functions preserve useful properties of the wavelet, even after the application of the nonlinearities (Corollary 4) 3. Suggest a split architecture for approximating signals that decouples the smooth and nonsmooth parts into linear and nonlinear INRs, respectively (Section 4.3) 4. Leverage connections with wavelet theory to propose efficient initialization schemes for wavelet INRs based on the wavelet modulus maxima for capturing singularities in the target function (Section 5). ∗Now affiliated with UC Riverside. Following a brief survey of INR methods, the class of architectures we study is defined in Section 2. The main result bounding the function class represented by these architectures is stated in Section 3, which is then related to the algebra of complex wavelets in Section 4. The use of the wavelet modulus maxima for initialization of wavelet INRs is described and demonstrated in Section 5, before concluding in Section 6. 
## 2 Implicit Neural Representations

Wavelets as activation functions in MLPs have been shown to yield good function approximators (Zhang & Benveniste, 1992; Marar et al., 1996). These works have leveraged the sparse representation of functions by wavelet dictionaries in order to construct simple neural architectures and training algorithms for effective signal representation. Indeed, an approximation of a signal by a finite linear combination of ridgelets (Candès, 1998) can be viewed as one such MLP using wavelet activation functions. Additionally, wavelets have been used to study the expressivity of deep neural networks and their approximation capacity for functions on manifolds (Shaham et al., 2018), for instance.

Recently, sinusoidal activation functions in the first layer (Tancik et al., 2020) and beyond (Sitzmann et al., 2020; Fathony et al., 2020) have been shown to yield good function approximators, coupled with a harmonic analysis-type bound on the function class represented by these networks (Yüce et al., 2022). Similar to the Fourier embedding of the coordinate space that is done by methods such as SIREN (Sitzmann et al., 2020), eigenvectors of graph operators have been used to define INRs for signals on more general spaces (Grattarola & Vandergheynst, 2022). Other methods have used activation functions that, unlike sinusoids, are localized in space, such as gaussians (Ramasinghe & Lucey, 2021) or Gabor wavelets (Saragadam et al., 2023).

Following the formulation of Yüce et al. (2022), we define an INR to be a map \( f_\theta : \mathbb{R}^d \to \mathbb{C} \) defined in terms of a function \( \psi : \mathbb{R}^d \to \mathbb{C} \), followed by an MLP with analytic\(^1\) activation functions \( \rho^{(\ell)} : \mathbb{C} \to \mathbb{C} \) for layers \( \ell = 1, \ldots, L \):

\[ z^{(0)}(r) = \psi(W^{(0)} r + b^{(0)}) \]
\[ z^{(\ell)}(r) = \rho^{(\ell)}(W^{(\ell)} z^{(\ell-1)}(r) + b^{(\ell)}) \]
\[ f_\theta(r) = W^{(L)} z^{(L-1)}(r) + b^{(L)}, \tag{1} \]

where \( \theta \) denotes the set of parameters comprising the tensor \( W^{(0)} \in \mathbb{R}^{F_1 \times d \times d} \), the matrices \( W^{(\ell)} \in \mathbb{C}^{F_{\ell+1} \times F_\ell} \), the matrix \( b^{(0)} \in \mathbb{R}^{F_1 \times d} \), and the vectors \( b^{(\ell)} \in \mathbb{C}^{F_{\ell+1}} \) for \( \ell = 1, \ldots, L \), with fixed integers \( F_\ell \) satisfying \( F_{L+1} = 1 \). The function \( \psi : \mathbb{R}^d \to \mathbb{C} \) is understood to act on \( W^{(0)} r + b^{(0)} \) row-wise, i.e., as a map \( \psi : \mathbb{R}^{F_1 \times d} \to \mathbb{C}^{F_1} \). We will henceforth refer to \( \psi \) as the template function of the INR. Owing to the use of Gabor wavelets by Saragadam et al. (2023), we will refer to functions of the form (1) as WIRE INRs, although (1) also captures architectures that do not use wavelets, such as SIREN (Sitzmann et al., 2020).
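To make (1) concrete, the following is a minimal PyTorch sketch of a WIRE-style INR. The Gabor-like template, the initialisation, and the polynomial hidden activation are illustrative choices of ours, not the exact WIRE implementation.

```python
import torch
import torch.nn as nn

class WireINR(nn.Module):
    """Minimal sketch of an INR of the form (1): a template function psi applied to
    per-neuron affine maps of the coordinates, followed by a complex-valued MLP."""

    def __init__(self, d=2, f1=32, hidden=32, depth=2, omega=10.0, sigma=4.0):
        super().__init__()
        self.W0 = nn.Parameter(torch.randn(f1, d, d))       # W^(0): one d x d matrix per atom
        self.b0 = nn.Parameter(2 * torch.rand(f1, d) - 1)   # b^(0): abscissa of each atom
        self.omega, self.sigma = omega, sigma
        dims = [f1] + [hidden] * (depth - 1) + [1]
        self.Ws = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(n, m, dtype=torch.cfloat))
            for m, n in zip(dims[:-1], dims[1:]))
        self.bs = nn.ParameterList(
            nn.Parameter(torch.zeros(n, dtype=torch.cfloat)) for n in dims[1:])

    def psi(self, x):
        # One simple Gabor-like template (cf. Example 3 below), applied to the
        # first coordinate of each transformed input; other templates slot in here.
        u = x[..., 0]
        return torch.exp(torch.complex(-(u / self.sigma) ** 2 / 2, -self.omega * u))

    def forward(self, r):                                    # r: (batch, d)
        x = torch.einsum("tij,bj->bti", self.W0, r) + self.b0
        z = self.psi(x)                                      # z^(0), shape (batch, f1)
        for k, (W, b) in enumerate(zip(self.Ws, self.bs)):
            z = z @ W.T + b
            if k < len(self.Ws) - 1:
                z = z ** 2 + z        # a simple analytic (polynomial) activation
        return z.squeeze(-1)          # complex output; use .real to fit real signals
```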
## 3 Expressivity of INRs

For the application of INRs to practical problems, it is important to understand the function class that an INR architecture can represent. We will demonstrate how the function parameterized by an INR can be understood via time-frequency analysis, ultimately motivating the use of wavelets as template functions.

Noting that polynomials of sinusoids generate linear combinations of integer harmonics of said sinusoids, Yüce et al. (2022) bounded the expressivity of SIREN (Sitzmann et al., 2020) and related architectures (Fathony et al., 2020). These results essentially followed from identities relating products of trigonometric functions. For template functions that are not sinusoids, such as wavelets (Saragadam et al., 2023), these identities do not hold. The following result offers a bound on the class of functions represented by an INR.

---

\(^1\)That is, entire on \( \mathbb{C} \).

Figure 1: Fourier transforms of template functions $\psi$ and their powers $\psi^n$.

**Lemma 1.** Let $f_\theta : \mathbb{R}^d \to \mathbb{C}$ be a WIRE INR. Assume that each of the activation functions $\rho^{(\ell)}$ is a polynomial of degree at most $K$, and that the Fourier transform of the template function $\psi$ exists.\(^2\) Let $W^{(0)}r = [W_1 r, \ldots, W_{F_1} r]^\top$ for $W_1, \ldots, W_{F_1} \in \mathbb{R}^{d \times d}$ each having full rank, and also let $b^{(0)} = [b_1, \ldots, b_{F_1}]^\top$ for $b_1, \ldots, b_{F_1} \in \mathbb{R}^d$. For $k \geq 0$, denote by $\Delta(F_1, k)$ the set of ordered $F_1$-tuples of nonnegative integers $m = [m_1, \ldots, m_{F_1}]$ such that $\sum_{t=1}^{F_1} m_t = k$. Let a point $r_0 \in \mathbb{R}^d$ be given. Then, there exists an open neighborhood $U \ni r_0$ such that for all $\phi \in C^\infty_0(U)$

$$\widehat{\phi \cdot f_\theta}(\xi) = \left( \hat{\phi} * \sum_{k=0}^{K^{L-1}} \sum_{m \in \Delta(F_1, k)} \hat{\beta}_m \mathop{\ast}_{t=1}^{F_1} \left(e^{i2\pi\langle W_t^{-\top}\xi,\, b_t\rangle} \hat{\psi}(W_t^{-\top}\xi)\right)^{\ast m_t} \right)(\xi),$$

for coefficients $\hat{\beta}_m \in \mathbb{C}$ independent of the choice of $r_0 \in U$, where $(\cdot)^{\ast m}$ denotes the $m$-fold convolution\(^3\) of the argument with itself with respect to $\xi$, and $C^\infty_0(U)$ denotes the set of all infinitely differentiable functions with compact support contained in $U$. Furthermore, the coefficients $\hat{\beta}_m$ are only nonzero when each $t \in [1, \ldots, F_1]$ such that $m_t \neq 0$ also satisfies $W_t r_0 + b_t \in \text{supp}(\psi)$.

The proof, a simple application of the convolution theorem, is left to Appendix A. Lemma 1 illustrates two things. First, the output of an INR has a Fourier transform determined by convolutions of the Fourier transforms of the atoms in the first layer with themselves, serving to generate “integer harmonics” of the initial atoms determined by scaled, shifted copies of the template function $\psi$. Notably, this recovers (Yüce et al., 2022, Theorem 1). Second, the support of these scaled and shifted atoms is preserved, so that the output at a given coordinate $r$ is dependent only upon the atoms in the first layer whose support contains $r$.

**Remark 2.** The assumptions behind Lemma 1 can be relaxed to capture a broader class of architectures. By imposing continuity conditions on the template function $\psi$, the activation functions can be reasonably extended to analytic functions. These extensions are discussed in Appendix B.

---

\(^2\)Even if only in the sense of tempered distributions.

\(^3\)0-fold convolution is defined by convention to yield the Dirac delta. We also use the symbol $\ast$ to denote the convolution of several functions, in this case indexed by $t = 1, \ldots, F_1$.
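Before moving on, a quick numerical illustration of Lemma 1 in the sinusoidal special case recovered from Yüce et al. (2022): applying a degree-3 polynomial to a sum of two complex sinusoid atoms produces a spectrum supported exactly on the predicted integer combinations of the atom frequencies. This is our own sanity-check script, not from the paper's code.

```python
import numpy as np

# A degree-3 polynomial of z = e^{2*pi*i*f1*x} + e^{2*pi*i*f2*x} only creates
# energy at frequencies m1*f1 + m2*f2 with 1 <= m1 + m2 <= 3, i.e. at
# convolutions of the atoms' (Dirac) spectra with themselves.
x = np.linspace(0, 1, 2048, endpoint=False)
f1, f2 = 5, 7
z = np.exp(2j * np.pi * f1 * x) + np.exp(2j * np.pi * f2 * x)
out = z - 0.5 * z**2 + 0.25 * z**3   # an "analytic activation" applied pointwise

spec = np.abs(np.fft.fft(out)) / x.size
active = sorted(k for k in range(60) if spec[k] > 1e-8)
print(active)  # {m1*5 + m2*7 : 1 <= m1 + m2 <= 3} = {5, 7, 10, 12, 14, 15, 17, 19, 21}
```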
### 4 THE ALGEBRA OF COMPLEX WAVELETS

Of the INR architectures surveyed in Section 2, the only one to use a complex wavelet template function is WIRE (Saragadam et al., 2023), where a Gabor wavelet is used. Gabor wavelets are essentially band-pass filters and are necessarily complex-valued due to their lack of conjugate symmetry in the Fourier domain. We now consider the advantages of using complex wavelets, or more precisely progressive wavelets, as template functions for INRs by examining their structure as an algebra of functions.

4.1 Progressive Template Functions

For the sake of discussion, suppose that \( d = 1 \), so that the INR represents a 1D function. The template function \( \psi : \mathbb{R} \to \mathbb{C} \) is said to be progressive\(^4\) if it has no negative frequency components, i.e., for \( \xi < 0 \), we have \( \hat{\psi}(\xi) = 0 \) (Mallat, 1999). No nonzero real-valued function is progressive.

It is obvious that progressive functions remain progressive under scalar multiplication, shifts, and positive scaling. That is, for arbitrary \( s > 0, u \in \mathbb{R}, z \in \mathbb{C} \), if \( \psi(x) \) is progressive, then the function \( z \cdot D_s T_u \psi(x) := z \cdot \psi((x - u)/s) \) is also progressive. Moreover, progressive functions are closed under multiplication, so that if \( \psi_1 \) and \( \psi_2 \) are progressive, then \( \psi_3(x) := \psi_1(x)\psi_2(x) \) is also progressive,\(^5\) i.e., progressive functions constitute an algebra over \( \mathbb{C} \).

**Example 1** (Complex Sinusoid). For any \( \omega > 0 \), the complex sinusoid \( \psi(x; \omega) = \exp(-i2\pi\omega x) \) is a progressive function, as its Fourier transform is a Dirac delta centered at \( \omega \). As pictured in Fig. 1 (a), the exponents \( \psi^n(\cdot; \omega) \) are themselves complex sinusoids, where \( \psi^n(x; \omega) = \psi(x; n\omega) \).

**Example 2** (Gaussian). The gaussian function, defined for some \( \sigma > 0 \) as \( \psi(x; \sigma) = \exp(-x^2/(2\sigma^2)) \), is not a progressive function, as its Fourier transform is symmetric and centered about zero. Moreover, as pictured in Fig. 1 (b), the exponents are also gaussian functions \( \psi^n(x; \sigma) = \psi(x; \sigma/\sqrt{n}) \), which also have Fourier transform centered at zero. Unlike the complex sinusoid, the powers of the gaussian are all low-pass, but with increasingly wide passband.

**Example 3** (Gabor Wavelet). For any \( \omega, \sigma > 0 \), the Gabor wavelet defined as \( \psi(x; \omega, \sigma) = \exp(-x^2/(2\sigma^2) - i2\pi\omega x) \) is not a progressive function, as its Fourier transform is a gaussian centered at \( \omega \) with standard deviation \( 1/\sigma \). However, the Fourier transform of the exponents \( \psi^n \) for integers \( n > 0 \) are gaussians centered at \( n\omega \) with standard deviation \( \sqrt{n}/\sigma \), as pictured in Fig. 1 (c). So, as \( n \) grows sufficiently large, the effective support of \( \psi^n \) will be contained in the positive reals, so that the Gabor wavelet can be considered as a progressive function for the purposes of studying INRs.
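The behaviour described in Examples 1-3 is easy to verify numerically. The following short check (our own; the sign of the exponent is chosen so that NumPy's FFT convention places the peak at $+\omega$) confirms that powers of a Gabor atom concentrate near the harmonics $n\omega$ with spread growing like $\sqrt{n}$.

```python
import numpy as np

x = np.linspace(-8, 8, 4096)
omega, sigma = 2.0, 1.0
psi = np.exp(-x**2 / (2 * sigma**2) + 2j * np.pi * omega * x)

xi = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
for n in (1, 2, 3):
    spec = np.abs(np.fft.fftshift(np.fft.fft(psi**n)))
    print(f"n={n}: spectral peak near xi = {xi[spec.argmax()]:.2f} (expected {n * omega:.2f})")
```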
A progressive function on \( \mathbb{R} \) has Fourier support contained in the nonnegative real numbers. Of course, there is not an obvious notion of nonnegativity that generalizes to \( \mathbb{R}^d \) for \( d > 1 \). Noting that the nonnegative reals form a convex conic subset of \( \mathbb{R} \), we define the notion of a progressive function with respect to some conic subset of \( \mathbb{R}^d \):

**Definition 3.** Let \( \Gamma \subseteq \mathbb{R}^d \) be a convex conic set, i.e., for all \( \gamma_1, \gamma_2 \in \Gamma \) and \( a_1, a_2 \geq 0 \), we have that \( a_1\gamma_1 + a_2\gamma_2 \in \Gamma \).\(^6\) A function \( \psi : \mathbb{R}^d \to \mathbb{C} \) is said to be \( \Gamma \)-progressive if \( \text{supp}(\hat{\psi}) \subseteq \Gamma \). The function \( \psi \) is said to be locally \( \Gamma \)-progressive at \( r_0 \in \mathbb{R}^d \) if there exists some \( \Gamma \)-progressive function \( \psi_{r_0} : \mathbb{R}^d \to \mathbb{C} \) so that for all smooth functions \( \phi \in C^\infty_0(\mathbb{R}^d) \) with support in a sufficiently small neighborhood of \( r_0 \), we have
\[
\widehat{\phi \cdot \psi} = \hat{\phi} \ast \hat{\psi}_{r_0}.
\]

Curvelets (Candès & Donoho, 2004), for instance, are typically defined in a way to make them \( \Gamma \)-progressive for some conic set \( \Gamma \) that indicates the oscillatory direction of a curvelet atom. Observe that if \( \Gamma \) is a conic set, then for any matrix \( W \), the set \( W^\top \Gamma \) is also conic. Thus, for a function \( \psi \) that is \( \Gamma \)-progressive, the function \( \psi(Wx) \) is \( W^\top \Gamma \)-progressive. Observe further that for two \( \Gamma \)-progressive functions \( \psi_1, \psi_2 \), their product is also \( \Gamma \)-progressive. The closure of progressive functions under multiplication implies that an analytic function applied point-wise to a progressive function is progressive. For INRs as defined in (1), this yields the following corollary to Lemma 1.

**Corollary 4.** Let \( \Gamma \subseteq \mathbb{R}^d \) be conic, and let \( \psi : \mathbb{R}^d \to \mathbb{C} \) be given, with Fourier support denoted \( \Gamma_0 := \text{supp}(\hat{\psi}) \). Let \( W^{(0)}r = [W_1r, \ldots, W_{F_1}r]^\top \) for \( W_1, \ldots, W_{F_1} \in \mathbb{R}^{d \times d} \) each having full rank. Assume that for each \( t = 1, \ldots, F_1 \), we have \( W_t^\top \Gamma_0 \subseteq \Gamma \). Then, the WIRE INR \( f_\theta : \mathbb{R}^d \to \mathbb{C} \) defined by (1) is a \( \Gamma \)-progressive function. Moreover, if we fix some \( r_0 \in \mathbb{R}^d \), and if the assumption \( W_t^\top \Gamma_0 \subseteq \Gamma \) holds for the indices \( t \) such that \( W_t r_0 + b_t \in \text{supp}(\psi) \), then \( f_\theta \) is locally \( \Gamma \)-progressive at \( r_0 \).

---

\(^4\)More commonly known as an analytic signal, we use this terminology (following Grossmann et al. (1990)) to avoid confusion with the analytic activation functions used in the INR.

\(^5\)This is a simple consequence of the convolution theorem.

\(^6\)We henceforth refer to such sets as simply “conic.”

Figure 2: (a) A conic set $\Gamma_1 \subset \mathbb{R}^2$, and a weakly conic set $\Gamma_2 \subset \mathbb{R}^2$ (both truncated for illustration purposes). (b) Modulus of a function $g$ given by the sum of four template atoms. (c) Fourier transform of $f_\theta(r) = \rho(g(r))$, where $\rho(z) = -z + z^2 - z^3$. The blue and orange cones correspond to the respectively highlighted parts of the function $g$. Effective Fourier supports of the template atoms constituting $g$ are enclosed by rectangles, and approximate centers of frequency support for each atom and product of atoms are marked by colored circles.

The proof is left to Appendix C; essentially, the property of $\Gamma$-progressive functions constituting an algebra over $\mathbb{C}$, combined with the polynomial structure of the INR, is shown to preserve the $\Gamma$-progressive property. Thus, any advantages/limitations of approximating functions using $\Gamma$-progressive template functions are maintained.
**Remark 5.** One may notice that a $\Gamma$-progressive function will always incur a large error when approximating a real-valued function, as real-valued functions have conjugate-symmetric Fourier transforms (apart from the case $\Gamma = \hat{\mathbb{R}}^d$). For fitting real-valued functions, it is effective to simply fit the real part of the INR output to the function, as taking the real part of a function symmetrizes it in the Fourier domain. In the particular case of $d = 1$, fitting the real part of a progressive INR to a function is equivalent to fitting the INR to that function’s Hilbert transform.

4.2 Band-pass Progressive Wavelets

Corollary 4 holds for conic sets $\Gamma$, but is also true for a larger class of sets. If some set $\Gamma \subseteq \mathbb{R}^d$ is conic, it is by definition closed under all sums with nonnegative coefficients. Alternatively, consider the following weaker property:

**Definition 6.** Let $\Gamma \subseteq \hat{\mathbb{R}}^d$. $\Gamma$ is said to be weakly conic if for all $\gamma_1, \gamma_2 \in \Gamma$ and $a_1, a_2 \geq 1$, we have that $a_1\gamma_1 + a_2\gamma_2 \in \Gamma$, and that $0 \in \Gamma$. A function $\psi : \mathbb{R}^d \to \mathbb{C}$ is said to be $\Gamma$-progressive if $\text{supp}(\hat{\psi}) \subseteq \Gamma$. The function $\psi$ is said to be locally $\Gamma$-progressive at $r_0 \in \mathbb{R}^d$ if there exists some $\Gamma$-progressive function $\psi_{r_0} : \mathbb{R}^d \to \mathbb{C}$ so that for all smooth functions $\phi \in C^\infty(\mathbb{R}^d)$\(^7\) with support in a sufficiently small neighborhood of $r_0$, we have
$$\widehat{\phi \cdot \psi} = \hat{\phi} * \hat{\psi}_{r_0}. \quad (4)$$

The notion of a weakly conic set is illustrated in Fig. 2 (a). Just as in the case of progressive functions for a conic set, the set of $\Gamma$-progressive functions for a weakly conic set $\Gamma \subseteq \hat{\mathbb{R}}^d$ constitutes an algebra over $\mathbb{C}$. One can check, then, that Corollary 4 holds for weakly conic sets as well. Putting this into context, consider a template function $\psi$ such that $\hat{\psi}$ vanishes in some neighborhood of the origin. Assume furthermore that $\text{supp}(\hat{\psi})$ is contained in some weakly conic set $\Gamma$.

**Example 4** (Complex Meyer Wavelet). The complex Meyer wavelet is most easily defined in terms of its Fourier transform. Define
$$\hat{\psi}(\xi) := \begin{cases} \sin\left(\frac{3\xi}{4} - \frac{\pi}{2}\right) & \xi \in [2\pi/3, 4\pi/3] \\ \cos\left(\frac{3\xi}{8} - \frac{\pi}{2}\right) & \xi \in [4\pi/3, 8\pi/3] \\ 0 & \text{otherwise}. \end{cases}$$

---

\(^7\)Again, not necessarily compact.

Figure 3: (a) Target signal, sampled at \( n = 512 \) uniformly spaced points in the interval \([-2, 2]\). Wavelet modulus maxima are marked in red. (b) Split complex wavelet INR with no hidden layers (MSE = 0.0016). (c) “Real” wavelet INR with no scaling network (MSE = 0.0096). (d) Complex wavelet INR with no scaling network (MSE = 0.0606). (e) Split “real” wavelet INR (MSE = 0.0061). (f) Split complex wavelet INR (MSE = 0.0011).

The complex Meyer wavelet and its exponents are pictured in Fig. 1 (d). Observe that these functions are not only progressive, but are also \(\Gamma\)-progressive for the weakly conic set \(\Gamma = [2\pi/3, \infty)\). The Meyer scaling function, pictured by the dashed line in Fig. 1 (d), has Fourier support that only overlaps that of the complex Meyer wavelet, but none of its powers.
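A short numerical sanity check of Example 4 (our own; we read the argument of the first piece as $3\xi/4$, which makes $\hat{\psi}$ vanish at $2\pi/3$ and renders the two pieces continuous at $4\pi/3$) confirms the claimed band-pass support:

```python
import numpy as np

def meyer_hat(xi):
    """Fourier transform of the complex Meyer wavelet, per Example 4."""
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    lo = (xi >= 2 * np.pi / 3) & (xi <= 4 * np.pi / 3)
    hi = (xi > 4 * np.pi / 3) & (xi <= 8 * np.pi / 3)
    out[lo] = np.sin(3 * xi[lo] / 4 - np.pi / 2)
    out[hi] = np.cos(3 * xi[hi] / 8 - np.pi / 2)
    return out

xi = np.linspace(-1.0, 10.0, 2001)
vals = meyer_hat(xi)
assert np.all(vals[xi < 2 * np.pi / 3] == 0)  # no support below 2*pi/3 (in particular, near 0)
print(meyer_hat([4 * np.pi / 3 - 1e-9, 4 * np.pi / 3 + 1e-9]))  # both ~ 1.0: continuous at the break
```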
Applying this extension of Corollary 4, we see that if the atoms in the first layer of an INR using such a function \(\psi\) have vanishing Fourier transform in some neighborhood of the origin, then the output of the INR has Fourier support that also vanishes in that neighborhood.

We illustrate this in \(\mathbb{R}^2\) using a template function \(\psi : \mathbb{R}^2 \rightarrow \mathbb{C}\), where \(\psi\) is the tensor product of a gaussian and a complex Meyer wavelet. Using this template function, we construct an INR with \(F_1 = 4\) in the first layer, and a single polynomial activation function. The modulus of the sum of the template functions before applying the activation function is shown in Fig. 2 (b). We then plot the modulus of the Fourier transform of \(f_\theta\) in Fig. 2 (c). First, observe that since the effective supports of the transformed template functions are contained in two disjoint sets, the Fourier transform of \(f_\theta\) can be separated into two cones, each corresponding to a region in \(\mathbb{R}^2\). Second, since the complex Meyer wavelet vanishes in a neighborhood of the origin, these cones are weakly conic, so that the Fourier transform of \(f_\theta\) vanishes in a neighborhood of the origin as well, by Corollary 4 applied to weakly conic sets.

**Remark 7.** The weakly conic sets pictured in Fig. 2 (c) are only approximate bounds on the true Fourier support of the constituent atoms. We see that Corollary 4 still holds in an approximate sense, as the bulk of the Fourier support of the atoms is contained in each of the pictured cones.

4.3 A Split Architecture for INRs

Based on this property of INRs preserving the band-pass properties of progressive template functions, it is well-motivated to approximate functions using a sum of two INRs: one to handle the low-pass components using a scaling function, and the other to handle the high-pass components using a wavelet. We illustrate this in Fig. 3, where we fit to a classic test signal on \(\mathbb{R}\) (Donoho & Johnstone, 1994), pictured in Fig. 3 (a). The first INR uses a gaussian template function \(\psi(x) = \exp(-(\pi x)^2/6)\) with \(L = 1\), and the constraint that the weights \(W^{(0)}\) are all equal to one, i.e., the template atoms only vary in their abscissa. Such a network is essentially a single-layer perceptron (Zhang & Benveniste, 1992) for representing smooth signals. We refer to this network as the “scaling network.” The second INR uses a Gabor template function \( \psi(x) = \exp(-\pi x^2/6) \exp(-i2\pi x) \) with \( L = 3 \), where we initialize the weights in the first layer to be positive, thus satisfying the condition \( W_t^\top \Gamma \subseteq \Gamma \) in Corollary 4 for \( \Gamma = \mathbb{R}^+ \). Although \( \psi \) is not progressive, its Fourier transform has fast decay, so we consider it to be essentially progressive, and thus approximately fulfilling the conditions of Corollary 4. We refer to this network as the “wavelet network,” as it is the WIRE architecture (Saragadam et al., 2023) for signals on \( \mathbb{R} \). Denoting the scaling and wavelet networks by \( f_{\theta,s}, f_{\theta,w} : \mathbb{R} \to \mathbb{C} \), respectively, we consider their sum \( f_\theta = f_{\theta,s} + f_{\theta,w} \) as a model, and approximate the target signal by taking \( \text{Re}\{f_\theta\} : \mathbb{R} \to \mathbb{R} \).
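A minimal sketch of the corresponding split model, reusing the illustrative WireINR class from the code sketch in Section 2 (all hyperparameters here are placeholders, not the settings of Appendix E.1):

```python
import torch
import torch.nn as nn

class SplitINR(nn.Module):
    """Sketch of the split architecture: a linear 'scaling network' of gaussian
    atoms for the smooth part, plus a wavelet INR for the high-pass part."""

    def __init__(self, wavelet_net, n_atoms=64, sigma=0.25, x_range=(-2.0, 2.0)):
        super().__init__()
        self.wavelet_net = wavelet_net             # e.g. the WireINR sketch with d=1, depth=3
        # Gaussian atoms with unit scale weights, varying only in their abscissa.
        self.abscissa = nn.Parameter(torch.linspace(x_range[0], x_range[1], n_atoms))
        self.coeffs = nn.Parameter(torch.zeros(n_atoms))
        self.sigma = sigma

    def forward(self, x):                          # x: (batch,)
        atoms = torch.exp(-((x[:, None] - self.abscissa) / self.sigma) ** 2 / 2)
        smooth = atoms @ self.coeffs               # linear "scaling network"
        detail = self.wavelet_net(x[:, None]).real # wavelet network; fit Re{f_theta}
        return smooth + detail
```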
The reason for modeling a signal as the sum of a linear scaling INR and a nonlinear INR with a progressive wavelet is apparent in Fig. 1 (d), where the scaling function and powers of a complex Meyer wavelet are pictured. Observe that the portions of the Fourier spectrum covered by the gaussian scaling function and the high powers of the Gabor wavelet (as in an INR, by Lemma 1) are essentially disjoint. To approximate the low-frequency components of a signal using an INR with a progressive wavelet would require large dilations of the atoms in the first layer, in order to force the center frequency of the wavelet towards zero. Moreover, if the progressive wavelet has a Fourier transform that vanishes in a neighborhood of the origin, such dilations will never properly cover a small neighborhood of the origin. This does not arise when using real-valued template functions, as powers of such functions generate low-frequency components. However, this phenomenon is not always desirable, as the low-frequency and high-frequency parts of a signal become highly correlated in this regime, a property that is not necessarily true for natural signals. The idea behind the “split” architecture, then, is to use a simple network to approximate the smooth parts of the target signal, and then a more complicated nonlinear network to approximate the nonsmooth parts of the signal.

We fit an array of INR architectures to the target signal: a split INR architecture with complex Gabor wavelets but no hidden layers (\( L = 1 \)) in the MLP, one with no scaling network and real wavelets, one with no scaling network and complex wavelets, one with a scaling network and real wavelets, and finally our proposed split architecture with complex Gabor wavelets and hidden layers in the MLP. All wavelet networks apart from the first one have two hidden layers (\( L = 3 \)). In the architectures that use real wavelets, we use a gaussian multiplied by a sinusoid, rather than a complex exponential as in the complex Gabor wavelet. The results of training these architectures are respectively shown in Fig. 3 (b-f). Hyperparameters for each architecture are described in Appendix E.1.

We observe slightly better performance, measured in mean squared error (MSE), from the proposed architecture than from the split network with no hidden layers, which in turn outperforms the split architecture using real wavelets. This illustrates both the advantage of using complex wavelets over real wavelets, due to their ability to decouple the low- and high-frequency parts of a signal, and the advantage of the hidden layers that couple wavelet coefficients across scales, as stated by Lemma 1. Indeed, the nonlinear activation functions generate “wavelets” from the template function atoms, rather than requiring a long list of wavelet atoms and their abscissa in a way that does not capture how wavelet coefficients near singularities are correlated across scales. The two INR architectures without scaling networks fared the worst in this experiment; however, we note that the INR using real wavelets outperformed the one using complex wavelets. This is because powers of real-valued wavelets can generate low-frequency signal content.

To see the role of the nonlinearities in the wavelet INR, we freeze the weights and biases in the first layer of the split complex wavelet INR, and take an optimal linear combination of the resulting template atoms to fit the signal, thus yielding an INR with no hidden layers (Zhang & Benveniste, 1992). We compare the Fourier transforms of the original wavelet network to this “linearized” one in Fig. 4 (b-c), where we see that the nonlinear wavelet network is able to resolve much more high-frequency signal content than the linear one.
This reflects how the activation functions resolve high-frequency features from low-frequency approximations, as illustrated initially in Fig. 2. Moreover, the fact that both wavelet networks have the bulk of their Fourier support only on positive frequencies illustrates how the algebraic closure of progressive wavelets under multiplication applies to INR architectures, as in Corollary 4.

Figure 4: (a) Split complex wavelet INR, separated into scaling and wavelet networks. (b) Fourier transform of scaling and wavelet networks. (c) Linearized complex wavelet INR. (d) Fourier transform of linearized wavelet network.

5 RESOLUTION OF SINGULARITIES

A useful model for studying sparse representations of images is the cartoon-like image, which is a smooth function on $\mathbb{R}^2$ apart from singularities along a twice-differentiable curve (Candès & Donoho, 2004; Wakin et al., 2006). The smooth part of an image can be handled by the scaling function associated to a wavelet transform, while the singular parts are best captured by the wavelet function. In the context of the proposed split INR architecture, the scaling INR yields a smooth approximation to the signal, and the wavelet INR resolves the remaining singularities. Inspired by this, we now consider how the wavelet INR can be initialized with the resolution of isolated singularities in mind.

5.1 INITIALIZATION WITH THE WAVELET MODULUS MAXIMA

As demonstrated by Lemma 1, the function $\psi$ in the first layer of an INR determines the expressivity of the network. Many such networks satisfy a universal approximation property (Zhang & Benveniste, 1992), but their value in practice comes from their implicit bias (Yüce et al., 2022; Saragadam et al., 2023) in representing a particular class of functions. For instance, using a wavelet in the first layer results in sharp resolution of edges with spatially compact error (Saragadam et al., 2023). In the remainder of this section, we demonstrate how an understanding of singular points in terms of the wavelet transform can be used to bolster INR architectures and initialization schemes.

Roughly speaking, isolated singularities in a signal are points where the signal is nonsmooth, but is smooth in a punctured neighborhood around that point. Such singularities generate “wavelet modulus maxima” (WMM) curves in the continuous wavelet transform (Mallat, 1999), which have slow decay in the Fourier domain. With Lemma 1 in mind, we see that INRs can use a collection of low-frequency template atoms and generate a collection of coupled high-frequency atoms, while also preserving the spatial locality of the template atoms. The combination of these insights suggests a method for the initialization of INRs. In particular, for a given number of template atoms $F_1$ in an INR, the network weights $W^{(0)}$ and abscissa $b^{(0)}$ should be initialized in a way that facilitates effective training of the INR via optimization methods. We empirically demonstrate the difference in performance for INRs initialized at random and INRs initialized in accordance with the singularities in the target signal; a sketch of such an initialization is given below.
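The following is our own illustrative 1D sketch, not the exact scheme used in the experiments: local maxima of the gradient magnitude stand in for the modulus maxima of an estimated continuous wavelet transform, and the threshold is a placeholder.

```python
import numpy as np

def wmm_init_1d(signal, x, atoms_per_point=3, k_max=5, rng=None):
    """Place template atoms at candidate singular points, with K atoms per point
    (deterministic abscissa) and scale weights drawn uniformly from [1, K]."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.abs(np.gradient(signal, x))
    # Local maxima of the gradient magnitude stand in for WMM points.
    peaks = np.where((grad[1:-1] > grad[:-2]) & (grad[1:-1] > grad[2:])
                     & (grad[1:-1] > grad.mean() + 2 * grad.std()))[0] + 1
    abscissa = np.repeat(x[peaks], atoms_per_point)          # b^(0): K atoms per WMM point
    scales = rng.uniform(1.0, k_max, size=abscissa.shape)    # W^(0): scale weights in [1, K]
    return scales, abscissa
```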
Once again, we fit the sum of a scaling network and a wavelet network to the target signal in Fig. 3. In Fig. 5 (a), we plot the mean squared error (MSE) for this setting after 1000 training steps for both randomly initialized and strategically initialized INRs, for $F_1 = Km$, where $K \in \{1, 3, 5, 7\}$ and $m$ is the number of WMM points as determined by an estimate of the continuous wavelet transform of the target signal. The randomly initialized INRs have abscissa distributed uniformly at random over the domain of the signal. The strategically initialized INRs place $K$ template atoms at each WMM point (so, a deterministic set of abscissa points). Both initialization schemes randomly distribute the scale weights uniformly in the interval $[1, K]$. We observe that for all $K$, the MSE of the strategically initialized INR is approximately an order of magnitude less than that of the randomly initialized INR.

Figure 5: (a) Mean and standard deviation of the MSE over 10 trials on the 1D test signal. (b) PSNR of different architectures and initialization schemes on the Kodak dataset (kod, 1999), for image fitting and denoising tasks. In the denoising task, the noisy images had an average PSNR of 17.35 dB.

When $d = 2$, e.g., for images, the WMM can be approximated by the gradients of the target signal to obtain an initial set of weights and biases for the wavelet INR. We evaluate this empirically on the Kodak Lossless True Color Image Suite (kod, 1999). We approximate the target images using the proposed split INR architecture. For the WMM-based initialization, we apply a Canny edge detector (Canny, 1986) to encode the positions and directions of the edges. Further details can be found in Appendix D. The architectures used are described in Appendix E.2.

We record the results of this experiment for a variety of template functions and initialization schemes in Fig. 5 (b). Experiments are done for two tasks: image representation, where the network is fit to the ground-truth image, and denoising, where the network is fit to a noisy image and the error relative to the ground truth is measured. The latter experiment demonstrates how the implicit bias of INRs is such that they fit natural images better than they fit noise. We observe that over the whole dataset, initializing the network using the WMM yields a higher PSNR for both tasks, demonstrating the utility of smart initialization of INRs. Results are aggregated over the whole dataset; see Appendix F for results on the individual images.

6 CONCLUSIONS

We have offered a time-frequency analysis of INRs that leverages polynomial approximations of the nonlinear behavior of MLPs beyond the first layer. By noting that progressive functions form an algebra over the complex numbers, we demonstrated that this analysis yields insights into the behavior of INRs using complex wavelets, such as WIRE (Saragadam et al., 2023). This leads to a split architecture for approximating signals, which decouples the low-pass and high-pass parts of a signal using two INRs, roughly corresponding to the scaling and wavelet functions of a wavelet transform. Furthermore, the connection with the theory of wavelets yields a natural initialization scheme for the weights of an INR based on the singularities of a signal. INR architectures built using wavelet activation functions offer useful advantages for function approximation that balance locality in space and frequency.
The structure of complex wavelets as an algebra of functions with conic Fourier support, combined with the application of INRs for interpolating sampled functions, suggests a connection with microlocal and semiclassical analysis (Monard & Stefanov, 2023). This could potentially be understood and improved by incorporating ideas from shearlet and curvelet systems (Labate et al., 2005; Candès & Donoho, 2004). We also foresee the decoupling of the smooth and singular parts of a signal by the split INR architecture having useful properties for solving inverse problems. ACKNOWLEDGEMENTS This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-1-2571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047. Maarten de Hoop gratefully acknowledges support from the Department of Energy under grant DE-SC0020345, the Simons Foundation under the MATH+X program, and the corporate members of the Geo-Mathematical Imaging Group at Rice University. REFERENCES Kodak lossless true color image suite. http://r0k.us/graphics/kodak/, 1999. Accessed: 2022-11-09. Emmanuel Jean Candès. Ridgelets: theory and applications. PhD thesis, Stanford University, 1998. Emmanuel Jean Candès and David Leigh Donoho. New tight frames of curvelets and optimal representations of objects with piecewise $C^2$ singularities. Communications on Pure and Applied Mathematics, 57(2):219–266, 2004. John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679–698, 1986. David Leigh Donoho and Iain M Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, 1994. Rizal Fathony, Anit Kumar Sahu, Devin Willmott, and J Zico Kolter. Multiplicative filter networks. In International Conference on Learning Representations, 2020. Daniele Grattarola and Pierre Vandergheynst. Generalised implicit neural representations. Advances in Neural Information Processing Systems, 35:30446–30458, 2022. Alexandre Grossmann, Richard Kronland-Martinet, and J Morlet. Reading and understanding continuous wavelet transforms. In Wavelets: Time-Frequency Methods and Phase Space Proceedings of the International Conference, Marseille, France, December 14–18, 1987, pp. 2–20. Springer, 1990. Demetrio Labate, Wang-Q Lim, Gitta Kutyniok, and Guido Weiss. Sparse multidimensional representation using shearlets. In Wavelets XI, pp. 254–262. SPIE, 2005. Stéphane Mallat. A wavelet tour of signal processing. Elsevier, 1999. Joao Fernando Marar, Edson CB Carvalho Filho, and Germano C Vasconcelos. Function approximation by polynomial wavelets generated from powers of sigmoids. In Wavelet Applications III, volume 2762, pp. 365–374. SPIE, 1996. Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In IEEE European Conference on Computer Vision, 2020. François Monard and Plamen Stefanov. Sampling the X-ray transform on simple surfaces. SIAM Journal on Mathematical Analysis, 55(3):1707–1736, 2023. Sameera Ramasinghe and Simon Lucey. Beyond periodicity: Towards a unifying framework for activations in coordinate-MLPs. In IEEE European Conference on Computer Vision, 2021. Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In International Conference on Learning Representations, 2018. 
URL https://openreview.net/forum?id=ryQu7f-RZ. Vishwanath Saragadam, Daniel LeJeune, Jasper Tan, Guha Balakrishnan, Ashok Veeraraghavan, and Richard G Baraniuk. WIRE: Wavelet implicit neural representations. In IEEE/CVF Computer Vision and Pattern Recognition Conference, pp. 18507–18516, 2023.
wPhbtwlCDa
It would be great to provide (even one sentence of) intuition for EPIC (explaining the various normalization terms that the EPIC paper explains), because it is crucial for understanding your definition of the canonicalization function.
STARC: A GENERAL FRAMEWORK FOR QUANTIFYING DIFFERENCES BETWEEN REWARD FUNCTIONS

Joar Skalse
Department of Computer Science
Future of Humanity Institute
Oxford University
joar.skalse@cs.ox.ac.uk

Lucy Farnik
University of Bristol
Bristol AI Safety Centre
lucy.farnik@bristol.ac.uk

Sumeet Ramesh Motwani
Berkeley Artificial Intelligence Research
University of California, Berkeley
motwani@berkeley.edu

Erik Jenner
Berkeley Artificial Intelligence Research
University of California, Berkeley
jenner@berkeley.edu

Adam Gleave
FAR AI, Inc.
adam@far.ai

Alessandro Abate
Department of Computer Science
Oxford University
aabate@cs.ox.ac.uk

ABSTRACT

In order to solve a task using reinforcement learning, it is necessary to first formalise the goal of that task as a reward function. However, for many real-world tasks, it is very difficult to manually specify a reward function that never incentivises undesirable behaviour. As a result, it is increasingly popular to use reward learning algorithms, which attempt to learn a reward function from data. However, the theoretical foundations of reward learning are not yet well-developed. In particular, it is typically not known when a given reward learning algorithm will, with high probability, learn a reward function that is safe to optimise. This means that reward learning algorithms generally must be evaluated empirically, which is expensive, and that their failure modes are difficult to anticipate in advance. One of the roadblocks to deriving better theoretical guarantees is the lack of good methods for quantifying the difference between reward functions. In this paper, we provide a solution to this problem, in the form of a class of pseudometrics on the space of all reward functions that we call STARC (STAndardised Reward Comparison) metrics. We show that STARC metrics induce both an upper and a lower bound on worst-case regret, which implies that our metrics are tight, and that any metric with the same properties must be bilipschitz equivalent to ours. Moreover, we also identify a number of issues with reward metrics proposed by earlier works. Finally, we evaluate our metrics empirically, to demonstrate their practical efficacy. STARC metrics can be used to make both theoretical and empirical analysis of reward learning algorithms both easier and more principled.

1 INTRODUCTION

To solve a sequential decision-making task with reinforcement learning or automated planning, we must first formalise that task using a reward function (Sutton & Barto, 2018; Russell & Norvig, 2020). However, for many tasks, it is extremely difficult to manually specify a reward function that captures the task in the intended way. To resolve this issue, it is increasingly popular to use reward learning, which attempts to learn a reward function from data. There are many techniques for doing this. For example, it is possible to use preferences between trajectories (e.g., Christiano et al., 2017), expert demonstrations (e.g., Ng & Russell, 2000), or a combination of the two (e.g., Ibarz et al., 2018).

To evaluate a reward learning method, we must quantify the difference between the learnt reward function and the underlying true reward function. However, doing this is far from straightforward. A simple method might be to measure their $L_2$-distance.
However, this is unsatisfactory, because two reward functions can have a large $L_2$-distance, even if they induce the same ordering of policies, or a small $L_2$-distance, even if they induce the opposite ordering of policies.\footnote{For example, given an arbitrary reward function $R$ and an arbitrary constant $c$, we have that $R$ and $c \cdot R$ have the same ordering of policies, even though their $L_2$-distance may be arbitrarily large. Similarly, for any $\epsilon$, we have that $\epsilon \cdot R$ and $-\epsilon \cdot R$ have the opposite ordering of policies, unless $R$ is constant, even though their $L_2$-distance may be arbitrarily small.} Another option is to evaluate the learnt reward function on a test set. However, this is also unsatisfactory, because it can only guarantee that the learnt reward function is accurate on a given data distribution, and when the reward function is optimised we necessarily incur a distributional shift (after which the learnt reward function may no longer match the true reward function). Yet another option is to optimise the learnt reward function, and evaluate the obtained policy according to the true reward function. However, this is also unsatisfactory, both because it is very expensive, and because it makes it difficult to separate issues with the policy optimisation process from issues with the reward learning algorithm. Moreover, because this method is purely empirical, it cannot be used for theoretical work. These issues make it challenging to evaluate reward learning algorithms in a way that is principled and robust. This in turn makes it difficult to anticipate in what situations a reward learning algorithm might fail, or what their failure modes might look like. It also makes it difficult to compare different reward learning algorithms against each other, without getting results that may be heavily dependent on the experimental setup. These issues limit the applicability of reward learning in practice. In this paper, we introduce STAndardised Reward Comparison (STARC) metrics, which is a family of pseudometrics that quantify the difference between reward functions in a principled way. Moreover, we demonstrate that STARC metrics enjoy strong theoretical guarantees. In particular, we show that STARC metrics induce an upper bound on the worst-case regret that can be induced under arbitrary policy optimisation, which means that a small STARC distance guarantees that two reward functions behave in a similar way. Moreover, we also demonstrate that STARC metrics induce a lower bound on worst-case regret. This has the important consequence that any reward function distance metric which induces both an upper and a lower bound on worst-case regret must be bilipschitz equivalent to STARC metrics, which in turn means that they (in a certain sense) are unique. In particular, we should not expect to be able to improve on them in any substantial way. In addition to this, we also evaluate STARC metrics experimentally, and demonstrate that their theoretical guarantees translate into compelling empirical performance. STARC metrics are cheap to compute, which means that they can be used for empirical evaluation of reward learning algorithms. Moreover, they can be calculated from a closed-form expression, which means that they are also suitable for use in theoretical analysis. As such, STARC metrics enable us to evaluate reward learning methods in a way that is both easier and more theoretically principled than relevant alternatives. 
Our work thus contributes towards building a more rigorous foundation for the field of reward learning.

1.1 Related Work

There are two existing papers that study the problem of how to quantify the difference between reward functions. The first is Gleave et al. (2020), which proposes a distance metric that they call Equivalent-Policy Invariant Comparison (EPIC). They show that the EPIC-distance between two reward functions induces a regret bound for optimal policies. The second paper is Wulfe et al. (2022), which proposes a distance metric that they call Dynamics-Aware Reward Distance (DARD). Unlike EPIC, DARD incorporates information about the transition dynamics of the environment. This means that DARD might give a tighter measurement, in situations where the transition dynamics are known. Unlike Gleave et al. (2020), they do not derive any regret bound for DARD.

Our work extends the work by Gleave et al. (2020) and Wulfe et al. (2022) in several important ways. First of all, Wulfe et al. (2022) do not provide any regret bounds, which is unsatisfactory for theoretical work, and the upper regret bound that is provided by Gleave et al. (2020) is both weaker and less general than ours. In particular, their bound only considers optimal policies, whereas our bound covers all pairs of policies (with optimal policies being a special case). Moreover, we argue that Gleave et al. (2020) have chosen to quantify regret in a way that fails to capture what we care about in practice. In Appendix A, we provide an extensive theoretical analysis of EPIC, and show that it lacks many of the important theoretical guarantees enjoyed by STARC metrics. In particular, we demonstrate that EPIC fails to induce either an upper or lower bound on worst-case regret (as we define it). We also include an extensive discussion and criticism of DARD in Appendix B. Moreover, in Section 4, we provide experimental data that shows that STARC metrics in practice can have a much tighter correlation with worst-case regret than both EPIC and DARD. This means that STARC metrics can both attain better empirical performance and give stronger theoretical guarantees than the pseudometrics proposed by earlier work. It is important to note that EPIC is designed to be independent of the environment dynamics, whereas both STARC and DARD depend on the transition dynamics. This issue is discussed in Section 2.3.

The question of what happens if one reward function is optimised instead of a different reward function is considered by many previous works. A notable example is Ng et al. (1999), which shows that if two reward functions differ by a type of transformation they call potential shaping, then they have the same optimal policies in all environments. Potential shaping is also studied by e.g. Jenner et al. (2022). Another example is Skalse et al. (2022b), which shows that if two reward functions \( R_1, R_2 \) have the property that there are no policies \( \pi_1, \pi_2 \) such that \( J_1(\pi_1) > J_1(\pi_2) \) and \( J_2(\pi_1) < J_2(\pi_2) \), then either \( R_1 \) and \( R_2 \) induce the same ordering of policies, or at least one of them assigns the same reward to all policies. Zhuang & Hadfield-Menell (2021) consider proxy rewards that depend on a strict subset of the features which are relevant to the true reward, and then show that optimising such a proxy in some cases may be arbitrarily bad, given certain assumptions. Skalse et al.
(2022a) derive necessary and sufficient conditions for when two reward functions are equivalent, for the purposes of computing certain policies or other mathematical objects. Also relevant is Everitt et al. (2017), which studies the related problem of reward corruption, and Pan et al. (2022), which considers natural choices of proxy rewards for several environments. Unlike these works, we are interested in the question of quantifying the difference between reward functions.

### 1.2 Preliminaries

A Markov Decision Process (MDP) is a tuple \((S, A, \tau, \mu_0, R, \gamma)\) where \( S \) is a set of states, \( A \) is a set of actions, \( \tau : S \times A \rightarrow \Delta(S) \) is a transition function, \( \mu_0 \in \Delta(S) \) is an initial state distribution, \( R : S \times A \times S \rightarrow \mathbb{R} \) is a reward function, and \( \gamma \in (0, 1) \) is a discount rate. A policy is a function \( \pi : S \rightarrow \Delta(A) \). A trajectory \( \xi = \langle s_0, a_0, s_1, a_1, \ldots \rangle \) is a possible path in an MDP. The return function \( G \) gives the cumulative discounted reward of a trajectory, \( G(\xi) = \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \), and the evaluation function \( J \) gives the expected trajectory return given a policy, \( J(\pi) = \mathbb{E}_{\xi \sim \pi}[G(\xi)] \). A policy maximising \( J \) is an optimal policy. The value function \( V^\pi : S \rightarrow \mathbb{R} \) of a policy encodes the expected future discounted reward from each state when following that policy. We use \( \mathcal{R} \) to refer to the set of all reward functions. When talking about multiple rewards, we give each reward a subscript \( R_i \), and use \( J_i, G_i, \) and \( V_i^\pi \) to denote \( R_i \)'s evaluation function, return function, and \( \pi \)-value function. In this paper, we assume that all states are reachable under \( \tau \) and \( \mu_0 \). Note that if this is not the case, then all unreachable states can simply be removed from \( S \). Our theoretical results also assume that \( S \) and \( A \) are finite. However, STARC metrics can still be computed in continuous environments.

Given a set \( X \), a function \( d : X \times X \rightarrow \mathbb{R} \) is called a pseudometric if \( d(x_1, x_1) = 0, d(x_1, x_2) \geq 0, d(x_1, x_2) = d(x_2, x_1), \) and \( d(x_1, x_3) \leq d(x_1, x_2) + d(x_2, x_3) \), for all \( x_1, x_2, x_3 \in X \). Given two pseudometrics \( d_1, d_2 \) on \( X \), if there are positive constants \( \ell, u \) such that \( \ell \cdot d_1(x_1, x_2) \leq d_2(x_1, x_2) \leq u \cdot d_1(x_1, x_2) \) for all \( x_1, x_2 \in X \), then \( d_1 \) and \( d_2 \) are bilipschitz equivalent. Given a vector space \( V \), a function \( n : V \rightarrow \mathbb{R} \) is a norm if \( n(v_1) \geq 0, n(v_1) = 0 \iff v_1 = 0, n(c \cdot v_1) = |c| \cdot n(v_1), \) and \( n(v_1 - v_2) \leq n(v_1) + n(v_2) \) for all \( v_1, v_2 \in V, c \in \mathbb{R} \). Given a norm \( n \), we can define a (pseudo)metric \( m \) as \( m(x, y) = n(x - y) \). In a mild abuse of notation, we will often denote this metric using \( n \) directly, so that \( n(x, y) = n(x - y) \). For any \( p \in \mathbb{N} \), \( L_p \) is the norm given by \( L_p(v) = (\sum |v_i|^p)^{1/p} \). A norm \( n \) is a weighted version of \( n' \) if \( n = n' \circ M \) for a diagonal matrix \( M \).
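In the tabular setting assumed above, the evaluation function \( J \) and value function \( V^\pi \) can be computed in closed form by solving the linear Bellman equation. The following is a minimal sketch (the helper names are ours), which the later code sketches mirror:

```python
import numpy as np

def policy_value(R, tau, policy, gamma):
    """V^pi via the linear Bellman equation. R and tau have shape (|S|, |A|, |S|),
    policy has shape (|S|, |A|)."""
    nS = R.shape[0]
    r_pi = np.einsum("sa,saz,saz->s", policy, tau, R)  # expected one-step reward
    P_pi = np.einsum("sa,saz->sz", policy, tau)        # state-to-state transitions
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

def J(R, tau, mu0, policy, gamma):
    """Policy evaluation function: expected return from the initial distribution."""
    return mu0 @ policy_value(R, tau, policy, gamma)
```

With potential shaping as defined next, one can check numerically that \( J \) changes by the policy-independent constant \( -\mu_0^\top \Phi \), so the ordering of policies is unchanged.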
Given a discount \( \gamma \), we say that \( R_1 \) and \( R_2 \) differ by potential shaping if for some potential \( \Phi \), we have that \( R_2(s, a, s') = R_1(s, a, s') + \gamma \cdot \Phi(s') - \Phi(s) \). We also use \( S' \)-redistribution (as defined by Skalse et al. (2022a)). Given a transition function \( \tau \), we say that \( R_1 \) and \( R_2 \) differ by \( S' \)-redistribution if \( \mathbb{E}_{S' \sim \tau(s,a)}[R_2(s, a, S')] = \mathbb{E}_{S' \sim \tau(s,a)}[R_1(s, a, S')] \). Finally, we say that \( R_1 \) and \( R_2 \) differ by positive linear scaling if \( R_2(s, a, s') = c \cdot R_1(s, a, s') \) for some positive constant \( c \). We will also combine these transformations. For example, we say that \( R_1 \) and \( R_2 \) differ by potential shaping and \( S' \)-redistribution if it is possible to produce \( R_2 \) from \( R_1 \) by applying potential shaping and \( S' \)-redistribution (in any order). The cases where \( R_1 \) and \( R_2 \) differ by (for example) potential shaping and positive linear scaling, etc. are defined analogously. Finally, we will use the following result, proven by Skalse & Abate (2023) in their Theorem 2.6: **Proposition 1.** \((S, A, \tau, \mu_0, R_1, \gamma)\) and \((S, A, \tau, \mu_0, R_2, \gamma)\) have the same ordering of policies if and only if \( R_1 \) and \( R_2 \) differ by potential shaping, positive linear scaling, and \( S' \)-redistribution. The “ordering of policies” is the ordering induced by the policy evaluation function \( J \). EPIC (Gleave et al., 2020) is defined relative to a distribution \( D_S \) over \( S \) and a distribution \( D_A \) over \( A \), which must give support to all states and actions. It is computed in several steps. First, let \( C^{\text{EPIC}} : \mathcal{R} \to \mathcal{R} \) be the function where \( C^{\text{EPIC}}(R)(s, a, s') \) is equal to \[ R(s, a, s') + \mathbb{E}[\gamma R(s', A, S') - R(s, A, S') - \gamma R(S, A, S')], \] where \( S, S' \sim D_S \) and \( A \sim D_A \). Note that \( S \) and \( S' \) are sampled independently. Next, let the “Pearson distance” between two random variables \( X \) and \( Y \) be defined as \( \sqrt{(1 - \rho(X, Y))/2} \), where \( \rho \) denotes the Pearson correlation. Then the EPIC-distance \( D^{\text{EPIC}}(R_1, R_2) \) is defined to be the Pearson distance between \( C^{\text{EPIC}}(R_1)(S, A, S') \) and \( C^{\text{EPIC}}(R_2)(S, A, S') \), where again \( S, S' \sim D_S \) and \( A \sim D_A \). Note that \( D^{\text{EPIC}} \) is implicitly parameterised by \( D_S \) and \( D_A \). To better understand how EPIC works, it is useful to know that it can be equivalently expressed as \[ D^{\text{EPIC}}(R_1, R_2) = \frac{1}{2} \cdot L_{2,D} \left( \frac{C^{\text{EPIC}}(R_1)}{L_{2,D}(C^{\text{EPIC}}(R_1))}, \frac{C^{\text{EPIC}}(R_2)}{L_{2,D}(C^{\text{EPIC}}(R_2))} \right), \] where \( L_{2,D} \) is a weighted \( L_2 \)-norm. For details, see Appendix E. Here \( C^{\text{EPIC}} \) maps all reward functions that differ by potential shaping to a single representative in their equivalence class. This, combined with the normalisation step, ensures that reward functions which only differ by potential shaping and positive linear scaling have distance 0 under \( D^{\text{EPIC}} \). DARD (Wulfe et al., 2022) is also defined relative to a distribution \( D_S \) over \( S \) and a distribution \( D_A \) over \( A \), which must give support to all actions and all reachable states, but it also requires a transition function \( \tau \). 
Let \( C^{\text{DARD}} : \mathcal{R} \to \mathcal{R} \) be the function where \( C^{\text{DARD}}(R)(s, a, s') \) is
\[
R(s, a, s') + \mathbb{E}[\gamma R(s', A, S'') - R(s, A, S') - \gamma R(S', A, S'')],
\]
where \( A \sim D_A \), \( S' \sim \tau(s, A) \), and \( S'' \sim \tau(s', A) \). Then the DARD-distance \( D^{\text{DARD}}(R_1, R_2) \) is defined to be the Pearson distance between \( C^{\text{DARD}}(R_1)(S, A, S') \) and \( C^{\text{DARD}}(R_2)(S, A, S') \), where again \( S, S' \sim D_S \) and \( A \sim D_A \). Note that \( D^{\text{DARD}} \) is parameterised by \( D_S, D_A, \) and \( \tau \).

\(^2\)Gleave et al. (2020) allow different distributions to be used when computing \( C^{\text{EPIC}}(R) \) and when taking the Pearson distance. However, doing this breaks some of their theoretical results. For details, see Appendix E.

## 2 STARC METRICS

In this section we formally define STARC metrics, and provide several examples of such metrics.

### 2.1 A Formal Definition of STARC Metrics

STARC metrics are defined relative to an environment, consisting of a set of states \( S \), a set of actions \( A \), a transition function \( \tau \), an initial state distribution \( \mu_0 \), and a discount factor \( \gamma \). This means that many of our definitions and theorems are implicitly parameterised by these objects, even when this dependency is not spelled out explicitly. Our results hold for any choice of \( S, A, \tau, \mu_0, \) and \( \gamma \), as long as they satisfy the assumptions given in Section 1.2. See also Section 2.3. STARC metrics are computed in several steps, where the first steps collapse certain equivalence classes in \( \mathcal{R} \) to a single representative, and the last step measures a distance. The reason for this is that two distinct reward functions can share the exact same preferences between all policies. When this is the case, we want them to be treated as equivalent. This is achieved by standardising the reward functions in various ways before the distance is finally measured. First, recall that neither potential shaping nor \( S' \)-redistribution affects the policy ordering in any way. This motivates the first step:

Definition 1. A function \( c : \mathcal{R} \to \mathcal{R} \) is a canonicalisation function if \( c \) is linear, \( c(R) \) and \( R \) only differ by potential shaping and \( S' \)-redistribution for all \( R \in \mathcal{R} \), and for all \( R_1, R_2 \in \mathcal{R} \), \( c(R_1) = c(R_2) \) if and only if \( R_1 \) and \( R_2 \) only differ by potential shaping and \( S' \)-redistribution.

Note that we require \( c \) to be linear. Note also that \( C^{\text{EPIC}} \) and \( C^{\text{DARD}} \) are not canonicalisation functions in our sense, because we here require canonicalisation functions to simultaneously standardise both potential shaping and \( S' \)-redistribution, whereas \( C^{\text{EPIC}} \) and \( C^{\text{DARD}} \) only standardise potential shaping. In Section 2.2, we provide examples of canonicalisation functions. Let us next introduce the functions that we use to compute a distance:

Definition 2. A metric \( m : \mathcal{R} \times \mathcal{R} \to \mathbb{R} \) is admissible if there exists a norm \( p \) and two (positive) constants \( u, \ell \) such that \( \ell \cdot p(x,y) \leq m(x,y) \leq u \cdot p(x,y) \) for all \( x, y \in \mathcal{R} \).

A metric is admissible if it is bilipschitz equivalent to a norm.
Any norm is an admissible metric, though there are admissible metrics which are not norms.\(^3\) Recall also that all norms are bilipschitz equivalent on any finite-dimensional vector space. This means that if \( m \) satisfies Definition 2 for one norm, then it satisfies it for all norms. We can now define our class of reward metrics:

Definition 3. A function \( d : \mathcal{R} \times \mathcal{R} \to \mathbb{R} \) is a STARC metric (STAndardised Reward Comparison) if there is a canonicalisation function \( c \), a function \( n \) that is a norm on \( \text{Im}(c) \), and a metric \( m \) that is admissible on \( \text{Im}(s) \), such that \( d(R_1,R_2) = m(s(R_1), s(R_2)) \), where \( s(R) = c(R)/n(c(R)) \) when \( n(c(R)) \neq 0 \), and \( c(R) \) otherwise.

Intuitively speaking, \( c \) ensures that all reward functions which differ by potential shaping and \( S' \)-redistribution are considered to be equivalent, and division by \( n \) ensures that positive scaling is ignored as well. Note that if \( n(c(R)) = 0 \), then \( c(R) \) assigns 0 reward to every transition. Note also that \( \text{Im}(c) \) is the image of \( c \), if \( c \) is applied to the entirety of \( \mathcal{R} \). If \( n \) is a norm on \( \mathcal{R} \), then \( n \) is also a norm on \( \text{Im}(c) \), but there are functions which are norms on \( \text{Im}(c) \) but not on \( \mathcal{R} \) (cf. Proposition 4). In Appendix C, we provide a geometric intuition for how STARC metrics work.

### 2.2 Examples of STARC Metrics

In this section, we give several examples of STARC metrics. We begin by showing how to construct canonicalisation functions. We first give a simple and straightforward method:

Proposition 2. For any policy \( \pi \), the function \( c : \mathcal{R} \to \mathcal{R} \) given by
\[
c(R)(s,a,s') = \mathbb{E}_{S' \sim \tau(s,a)} \left[ R(s,a,S') - V^\pi(s) + \gamma V^\pi(S') \right]
\]
is a canonicalisation function.

Here \( V^\pi \) is computed under the reward function \( R \) given as input to \( c \). We call this function Value-Adjusted Levelling (VAL). This proof, like all other proofs, is given in the Appendix. Proposition 2 gives us an easy way to make canonicalisation functions, which are also easy to evaluate whenever \( V^\pi \) is easy to approximate. We next give another example of canonicalisation functions:

Definition 4. A canonicalisation function \( c \) is minimal for a norm \( n \) if for all \( R \) we have that \( n(c(R)) \leq n(R') \) for all \( R' \) such that \( R \) and \( R' \) only differ by potential shaping and \( S' \)-redistribution.

Minimal canonicalisation functions give rise to tighter regret bounds (cf. Section 3 and Appendix F). It is not a given that minimal canonicalisation functions exist for a given norm \( n \), or that they are unique. However, for any weighted \( L_2 \)-norm, this is the case:

Proposition 3. For any weighted \( L_2 \)-norm, a minimal canonicalisation function exists and is unique.

A STARC metric can use any canonicalisation function \( c \). Moreover, the normalisation step can use any function \( n \) that is a norm on \( \text{Im}(c) \). This does of course include the \( L_1 \)-norm, \( L_2 \)-norm, \( L_\infty \)-norm, and so on. We next show that \( \max_\pi J(\pi) - \min_\pi J(\pi) \) is also a norm on \( \text{Im}(c) \):

Proposition 4. If \( c \) is a canonicalisation function, then the function \( n : \mathcal{R} \to \mathbb{R} \) given by \( n(R) = \max_\pi J(\pi) - \min_\pi J(\pi) \) is a norm on \( \text{Im}(c) \).
\(^3\)For example, the unit ball of \( m \) does not have to be convex, or symmetric around the origin.

For the final step we of course have that any norm is an admissible metric, though some other metrics are admissible as well.\(^4\)

\(^4\)For example, if \( m(x, y) \) is the angle between \( x \) and \( y \) when \( x, y \neq 0 \), and we define \( m(0, 0) = 0 \) and \( m(x, 0) = \pi/2 \) for \( x \neq 0 \), then \( m \) is also admissible, even though \( m \) is not a norm.

To obtain a STARC metric, we then pick any canonicalisation function \( c \), norm \( n \), and admissible metric \( m \), and combine them as described in Definition 3. Which choice of \( c, n, \) and \( m \) is best in a given situation may depend on multiple considerations, such as how easy they are to compute, how easy they are to work with theoretically, or how well they together track worst-case regret (cf. Sections 3 and 4).

### 2.3 Unknown Transition Dynamics and Continuous Environments

STARC metrics depend on the transition function \( \tau \), through the definition of canonicalisation functions (since \( S' \)-redistribution depends on \( \tau \)). Moreover, \( \tau \) is often unknown in practice. However, it is important to note that while STARC metrics depend on \( \tau \), there are STARC metrics that can be computed without direct access to \( \tau \). For example, the VAL canonicalisation function (Proposition 2) only requires that we can sample from \( \tau \), which is always possible in the reinforcement learning setting. Moreover, if we want to evaluate a learnt reward function in an environment that is different from the training environment, then we can simply use the \( \tau \) from the evaluation environment. As such, we do not consider the dependence on \( \tau \) to be a meaningful limitation. Nonetheless, it is possible to define STARC-like pseudometrics that do not depend on \( \tau \) at all, and such pseudometrics also have some theoretical guarantees (albeit guarantees that are weaker than those enjoyed by STARC metrics). This option is discussed in Appendix F.3. Moreover, we assume that \( S \) and \( A \) are finite, but many interesting environments are continuous. However, it is important to note that while our theoretical results assume that \( S \) and \( A \) are finite, it is still straightforward to compute and use STARC metrics in continuous environments (for example, using the VAL canonicalisation function from Proposition 2). We discuss this issue in more detail in Appendix D. In Section 4, we also provide experimental data from a continuous environment.

## 3 Theoretical Results

In this section, we prove that STARC metrics enjoy several desirable theoretical guarantees. First, we note that all STARC metrics are pseudometrics on the space of all reward functions, \( \mathcal{R} \):

**Proposition 5.** All STARC metrics are pseudometrics on \( \mathcal{R} \).

This means that STARC metrics give us a well-defined notion of a "distance" between rewards. Next, we characterise the cases when STARC metrics assign two rewards a distance of zero:

**Proposition 6.** All STARC metrics have the property that \( d(R_1, R_2) = 0 \) if and only if \( R_1 \) and \( R_2 \) induce the same ordering of policies.

This means that STARC metrics consider two reward functions to be equivalent, exactly when those reward functions induce exactly the same ordering of policies. This is intuitive and desirable.
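To make the construction in Definition 3 concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of a STARC metric on a small tabular MDP. It uses the VAL canonicalisation from Proposition 2, the \( L_2 \)-norm for normalisation, and the \( L_2 \)-metric for the final distance; the array layout (rewards and transitions as `(|S|, |A|, |S|)` arrays) and all function names are our own assumptions.

```python
import numpy as np

def value_function(R, tau, gamma, pi):
    """Exact V^pi for reward R[s, a, s'] and transition probabilities tau[s, a, s']."""
    S = R.shape[0]
    r_pi = np.einsum("sa,sat,sat->s", pi, tau, R)  # expected one-step reward per state
    P_pi = np.einsum("sa,sat->st", pi, tau)        # state transition matrix under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def val_canon(R, tau, gamma, pi):
    """Value-Adjusted Levelling (Proposition 2):
    c(R)(s, a, s') = E_{S'~tau(s,a)}[R(s, a, S') - V^pi(s) + gamma * V^pi(S')]."""
    V = value_function(R, tau, gamma, pi)
    ER = np.einsum("sat,sat->sa", tau, R)          # E[R(s, a, S')]
    EV = np.einsum("sat,t->sa", tau, V)            # E[V^pi(S')]
    c = ER - V[:, None] + gamma * EV               # (S, A); constant in s'
    return np.repeat(c[:, :, None], R.shape[2], axis=2)

def starc_distance(R1, R2, tau, gamma, pi):
    """d(R1, R2) = m(s(R1), s(R2)) with c = VAL, n = L2, m = L2 (Definition 3)."""
    std = []
    for R in (R1, R2):
        cR = val_canon(R, tau, gamma, pi)
        n = np.linalg.norm(cR)
        std.append(cR / n if n > 0 else cR)        # s(R) = c(R)/n(c(R)), or c(R) if 0
    return np.linalg.norm(std[0] - std[1])
```

With `pi` set to the uniformly random policy, this corresponds to the STARC-VAL configuration with \( L_2 \) normalisation and distance that is evaluated experimentally in Section 4.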
For a pseudometric \( d \) on \( \mathcal{R} \) to be useful, it is crucial that it induces an upper bound on worst-case regret. Specifically, we want it to be the case that if \( d(R_1, R_2) \) is small, then the impact of using \( R_2 \) instead of \( R_1 \) should also be small. When a pseudometric has this property, we say that it is sound:

**Definition 5.** A pseudometric \( d \) on \( \mathcal{R} \) is sound if there exists a positive constant \( U \), such that for any reward functions \( R_1 \) and \( R_2 \), if two policies \( \pi_1 \) and \( \pi_2 \) satisfy that \( J_2(\pi_2) \geq J_2(\pi_1) \), then
\[
J_1(\pi_1) - J_1(\pi_2) \leq U \cdot (\max_\pi J_1(\pi) - \min_\pi J_1(\pi)) \cdot d(R_1, R_2).
\]

Let us unpack this definition. \( J_1(\pi_1) - J_1(\pi_2) \) is the regret, as measured by \( R_1 \), of using policy \( \pi_2 \) instead of \( \pi_1 \). Division by \( \max_\pi J_1(\pi) - \min_\pi J_1(\pi) \) normalises this quantity based on the total range of \( R_1 \) (though the term is put on the right-hand side of the inequality, instead of being used as a denominator, in order to avoid division by zero when \( \max_\pi J_1(\pi) - \min_\pi J_1(\pi) = 0 \)). The condition that \( J_2(\pi_2) \geq J_2(\pi_1) \) says that \( R_2 \) prefers \( \pi_2 \) over \( \pi_1 \). Taken together, this means that a pseudometric \( d \) on \( \mathcal{R} \) is sound if \( d(R_1, R_2) \) gives an upper bound on the maximal regret that could be incurred under \( R_1 \) if an arbitrary policy \( \pi_1 \) is exchanged for another policy \( \pi_2 \) that \( R_2 \) weakly prefers. It is also worth noting that this includes the special case when \( \pi_1 \) is optimal under \( R_1 \) and \( \pi_2 \) is optimal under \( R_2 \). Our first main result is that all STARC metrics are sound:

**Theorem 1.** All STARC metrics are sound.

This means that any STARC metric gives us an upper bound on worst-case regret. Next, we will show that STARC metrics also induce a lower bound on worst-case regret. It may not be immediately obvious why this property is desirable. To see why this is the case, note that if a pseudometric \( d \) on \( \mathcal{R} \) does not induce a lower bound on worst-case regret, then there are reward functions that have a low worst-case regret, but a large distance under \( d \). This would in turn mean that \( d \) is not tight, and that it should be possible to improve upon it. In other words, if we want a small distance under \( d \) to be both sufficient and necessary for low worst-case regret, then \( d \) must induce both an upper and a lower bound on worst-case regret. As such, we also introduce the following definition:

**Definition 6.** A pseudometric \( d \) on \( \mathcal{R} \) is complete if there exists a positive constant \( L \), such that for any reward functions \( R_1 \) and \( R_2 \), there exist two policies \( \pi_1 \) and \( \pi_2 \) such that \( J_2(\pi_2) \geq J_2(\pi_1) \) and
\[
J_1(\pi_1) - J_1(\pi_2) \geq L \cdot (\max_\pi J_1(\pi) - \min_\pi J_1(\pi)) \cdot d(R_1, R_2),
\]
and moreover, if both \( \max_\pi J_1(\pi) - \min_\pi J_1(\pi) = 0 \) and \( \max_\pi J_2(\pi) - \min_\pi J_2(\pi) = 0 \), then we have that \( d(R_1, R_2) = 0 \).

The last condition is included to rule out certain pathological edge-cases. Intuitively, if \( d \) is sound, then a small \( d \) is sufficient for low regret, and if \( d \) is complete, then a small \( d \) is necessary for low regret. Soundness implies the absence of false positives, and completeness the absence of false negatives. Our second main result is that all STARC metrics are complete:

**Theorem 2.** All STARC metrics are complete.
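The regret quantity that Definition 5 bounds can be computed exactly in small tabular MDPs. The following self-contained sketch (our own illustration, using the same `(|S|, |A|, |S|)` array conventions as the previous snippet) evaluates the normalised regret of using \( \pi_2 \) in place of \( \pi_1 \) under \( R_1 \).

```python
import numpy as np

def expected_return(R, tau, gamma, mu0, pi):
    """J(pi) = E_{s0~mu0}[V^pi(s0)], computed exactly by solving the Bellman system."""
    S = R.shape[0]
    r_pi = np.einsum("sa,sat,sat->s", pi, tau, R)
    P_pi = np.einsum("sa,sat->st", pi, tau)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return mu0 @ V

def extreme_return(R, tau, gamma, mu0, maximise=True, iters=2000):
    """max_pi J(pi) (or min_pi J(pi)) via value iteration."""
    S, A, _ = R.shape
    ER = np.einsum("sat,sat->sa", tau, R)          # expected reward per (s, a)
    opt = np.max if maximise else np.min
    V = np.zeros(S)
    for _ in range(iters):
        V = opt(ER + gamma * np.einsum("sat,t->sa", tau, V), axis=1)
    return mu0 @ V

def normalised_regret(R1, tau, gamma, mu0, pi1, pi2):
    """(J1(pi1) - J1(pi2)) / (max_pi J1 - min_pi J1), the quantity in Definition 5."""
    span = (extreme_return(R1, tau, gamma, mu0, True)
            - extreme_return(R1, tau, gamma, mu0, False))
    gap = (expected_return(R1, tau, gamma, mu0, pi1)
           - expected_return(R1, tau, gamma, mu0, pi2))
    return gap / span if span > 0 else 0.0
```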
Theorems 1 and 2 together imply that, for any STARC metric \( d \), we have that a small value of \( d \) is both necessary and sufficient for a low regret. This means that STARC metrics, in a certain sense, exactly capture what it means for two reward functions to be similar, and that we should not expect it to be possible to significantly improve upon them. We can make this claim formal as follows:

**Proposition 7.** Any pseudometrics on \( \mathcal{R} \) that are both sound and complete are bilipschitz equivalent.

This implies that all STARC metrics are bilipschitz equivalent. Moreover, any other pseudometric on \( \mathcal{R} \) that induces both an upper and a lower bound on worst-case regret (as we define it) must also be bilipschitz equivalent to STARC metrics. In Appendix A and B, we provide an extensive analysis of both EPIC and DARD, and show that they fail to induce similar theoretical guarantees.

## 4 EXPERIMENTAL RESULTS

In this section we present our experimental results. First, we demonstrate that STARC metrics provide a better estimate of regret than EPIC and DARD in randomly generated MDPs. We then evaluate a STARC metric in a continuous environment.

### 4.1 LARGE NUMBERS OF SMALL RANDOM MDPs

Our first experiment compares several STARC metrics to EPIC, DARD, and a number of other non-STARC baselines. In total, our experiment covered 223 different pseudometrics (including rollout regret), derived by creating different combinations of canonicalisation functions, normalisations, and distance metrics. For details, see Appendix G.3. For each pseudometric, we generated a large number of random MDPs, and then measured how well the pseudometric correlates with regret across this distribution. The regret is defined analogously to Definitions 5 and 6, except that only optimal policies are considered – for details, see Appendix G.2. We used MDPs with 32 states, 4 actions, \( \gamma = 0.95 \), a uniform initial state distribution, and randomly sampled sparse non-deterministic transition functions, and for each MDP, we generated several random reward functions. For details on the random generation process, see Appendix G. We compared 49,152 reward function pairs (Appendix G.4), and used these to estimate how well each pseudometric correlates with regret. We show these correlations in Figure 1, and the full data is given in a table in Appendix H. In Appendix H.1, we also provide tables that indicate the impact of changing the metric \( m \) or the normalisation function \( n \). The canonicalisation functions we used were None (which simply skips the canonicalisation step), \( C^{\text{EPIC}} \), \( C^{\text{DARD}} \), MinimalPotential (which is the minimal "canonicalisation" that removes potential shaping but not \( S' \)-redistribution, and therefore is easier to compute), VALPotential (which is given by \( R(s,a,s') - V^\pi(s) + \gamma V^\pi(s') \)), and VAL (defined in Proposition 2). For both \( C^{\text{EPIC}} \) and \( C^{\text{DARD}} \), both \( \mathcal{D}_S \) and \( \mathcal{D}_A \) were chosen to be uniform over \( S \) and \( A \). For both VALPotential and VAL, \( \pi \) was chosen to be the uniformly random policy. Note that VAL is the only canonicalisation which removes both potential shaping and \( S' \)-redistribution, and thus the only one that meets Definition 1 – for this reason, it is listed as "STARC-VAL" in Figure 1. For the full details about which pseudometrics were chosen, and why, see Appendix G.3.

Figure 1: This figure displays the correlation to regret for several pseudometrics. Each point represents one pseudometric, i.e.
one unique combination of canonicalisation \( c \), normalisation \( n \), and distance metric \( m \). They are grouped together based on their canonicalisation function, with each column corresponding to a different canonicalisation function. Pseudometrics which skip canonicalisation or normalisation are shown in grey. The versions of EPIC and DARD that use the \( L_2 \) norm for both normalisation \( n \) and distance metric \( m \) are highlighted in red, as these are the original versions given in Gleave et al. (2020) and Wulfe et al. (2022). The STARC metrics, which are canonicalised using VAL, are reliably better indicators of regret than the other pseudometrics.

As we can see, the STARC metrics based on VAL perform noticeably better than all pre-existing pseudometrics – for instance, the correlation of EPIC to regret is 0.778 and DARD's correlation is 0.782, while VAL's correlation is 0.856 (when using \( L_2 \) for both \( n \) and \( m \), which is the same as EPIC and DARD). Out of the 10 best pseudometrics, 8 use VAL (and the other 2 both use VALPotential). Moreover, across the choices of \( n \) and \( m \), the VAL canonicalisation performs better than the EPIC canonicalisation in 40 out of 42 cases. Taken together, these results suggest that STARC metrics robustly perform better than the existing alternatives. Our results also suggest that the choice of normalisation function \( n \) and metric \( m \) can have a significant impact on the pseudometric's accuracy. For instance, when canonicalising with VAL, it is better to use the \( L_1 \) norm than the \( L_2 \) norm for both normalisation and taking the distance – this increases the correlation with regret from 0.856 to 0.873. Another example is the EPIC canonicalisation – when paired with the weighted \( L_\infty \) norm for normalisation and the (unweighted) \( L_\infty \) norm for taking the distance, instead of using the \( L_2 \) norm for both, its correlation decreases from 0.778 to 0.052. As we can see in Figure 1, this effect appears to be more prominent for the non-STARC metrics. Another thing to note is that it seems like VALPotential can perform as well as VAL despite not canonicalising for \( S' \)-redistribution, but only when a (\( \tau \)-)weighted norm is used. This may be because \( \tau \)-weighted norms set all impossible transitions to 0, and reduce the impact of very unlikely transitions; plausibly, this could in practice be similar to canonicalising for \( S' \)-redistribution. When using VAL, \( L_1 \) was the best unweighted norm for both \( m \) and \( n \) in our experiment. The only exceptions are when no normalisation is used and \( m = L_\infty \), and when \( n = \text{weighted-}L_2 \) and \( m = \text{weighted-}L_\infty \). However, in the first case, both the EPIC-based and the VAL-based pseudometric perform badly (since no normalisation is used), and in the second case, the difference between them is not large.

### 4.2 The Reacher Environment

Our next experiment estimates the distance between several hand-crafted reward functions in the Reacher environment from MuJoCo (Todorov et al., 2012). This is a deterministic environment with an 11-dimensional continuous state space and a 2-dimensional continuous action space. The reward functions we used are:

1. **GroundTruth**: The Euclidean distance to the target, plus a penalty term for large actions.
2. **PotentialShaped**: GroundTruth with random potential shaping.
3. **SecondPeak**: We create a second target in the environment, and reward the agent based on both its distance to this target and its distance to the original target, but give a greater weight to the original target.
4. **Random**: A randomly generated reward, implemented as an affine transformation from \( s, a, s' \) to real numbers with the weights and bias randomly initialised.
5. **Negative**: Returns \(-\text{GroundTruth}\).

We expect GroundTruth to be equivalent to PotentialShaped, similar to SecondPeak, orthogonal to Random, and opposite to Negative. We used the VAL canonicalisation function with the uniform policy, and normalised and took the distance with the \( L_2 \)-norm. This pseudometric was then estimated through sampling; full details can be found in Appendices D and I. The results of this experiment are given in Table 1. As we can see, the relative ordering of the reward functions matches what we expect. However, the magnitudes of the estimated distances are noticeably larger than their real values; for example, the actual distance between GroundTruth and PotentialShaped is 0, but it is estimated as \( \approx 0.9 \). The reason for this is likely that the estimation involves summing over absolute values, which makes all noise positive. Nonetheless, for the purposes of ranking the rewards, this is not fundamentally problematic.

| PotentialShaped | SecondPeak | Random | Negative |
|-----------------|------------|--------|----------|
| 0.8968 | 1.2570 | 1.3778 | 1.706 |

Table 1: This table displays the estimated distance (using \( c = \text{VAL} \), \( n = L_2 \), and \( m = L_2 \)) between each reward function in the Reacher environment and the GroundTruth reward function.

## 5 Discussion

We have introduced STARC metrics, and demonstrated that they provide compelling theoretical guarantees. In particular, we have shown that they are both sound and complete, which means that they induce both an upper and a lower bound on worst-case regret. As such, a small STARC distance is both necessary and sufficient to ensure that two reward functions induce a similar ordering of policies. Moreover, any two pseudometrics that are both sound and complete must be bilipschitz equivalent. This means that any pseudometric on \( \mathcal{R} \) that has the same theoretical guarantees as STARC metrics must be equivalent to STARC metrics. In this sense, we have provided what is essentially a complete answer to the question of how to correctly measure the distance between reward functions. Moreover, our experiments show that STARC metrics have a noticeably better empirical performance than any existing pseudometric in the current literature, for a wide range of environments. This means that STARC metrics offer direct practical advantages, in addition to their theoretical guarantees. STARC metrics are also both easy to compute and easy to work with mathematically. As such, STARC metrics will be useful for both empirical and theoretical work on the analysis and evaluation of reward learning algorithms. Our work can be extended in a number of ways. First of all, it would be desirable to establish more conclusively which STARC metrics work best in practice. Our experiments are indicative, but not conclusive. Secondly, our theoretical results assume that \( S \) and \( A \) are finite; it would be desirable to generalise them to continuous environments. Third, we use a fairly strong definition of regret. We could consider some weaker criterion that may allow for the creation of more permissive reward metrics.
Finally, our work considers the MDP setting – it would be interesting to also consider other classes of environments. We believe that the multi-agent setting would be of particular interest, since it introduces new and more complex dynamics that are not present in the case of MDPs.

ACKNOWLEDGEMENTS

The authors wish to acknowledge and thank the financial support of the UK Research and Innovation (UKRI) [Grant ref EP/S022937/1] and the University of Bristol.

REFERENCES

Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences, 2017.

Tom Everitt, Victoria Krakovna, Laurent Orseau, Marcus Hutter, and Shane Legg. Reinforcement learning with a corrupted reward channel. CoRR, abs/1705.08417, 2017. URL http://arxiv.org/abs/1705.08417.

Eugene A. Feinberg and Uriel G. Rothblum. Splitting randomized stationary policies in total-reward Markov decision processes. Mathematics of Operations Research, 37(1):129–153, 2012. ISSN 0364765X, 15265471. URL http://www.jstor.org/stable/41412346.

Adam Gleave, Michael Dennis, Shane Legg, Stuart Russell, and Jan Leike. Quantifying differences in reward functions, 2020. URL https://arxiv.org/abs/2006.13900.

Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in Atari. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, volume 31, pp. 8022–8034, Montréal, Canada, 2018. Curran Associates, Inc., Red Hook, NY, USA.

Erik Jenner, Herke van Hoof, and Adam Gleave. Calculus on MDPs: Potential shaping as a gradient, 2022. URL https://arxiv.org/abs/2208.09570.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019.

Andrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, volume 1, pp. 663–670, Stanford, California, USA, 2000. Morgan Kaufmann Publishers Inc.

Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, pp. 278–287, Bled, Slovenia, 1999. Morgan Kaufmann Publishers Inc.

Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models, 2022. URL https://arxiv.org/abs/2201.03544.

Gavin A. Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems, volume 37. University of Cambridge, Department of Engineering, Cambridge, UK, 1994.

Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson, 4 edition, 2020.

Joar Skalse and Alessandro Abate. Misspecification in inverse reinforcement learning, 2023.

Joar Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. Invariance in policy optimisation and partial identifiability in reward learning. arXiv preprint arXiv:2203.07475, 2022a.

Joar Skalse, Niki Howe, Dima Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2022b.

Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, second edition, 2018. ISBN 9780262352703.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control.
In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033, 2012. doi: 10.1109/IROS.2012.6386109.
LAEd3kHao9
The proposed language-informed distributions (LID) can effectively avoid the issue of intra-class variety. However, the authors should also explain how the issue of inter-class correlation is solved.
Promoting Language-Informed Distribution for Compositional Zero-Shot Learning

Anonymous authors
Paper under double-blind review

Abstract

The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is learned only from the seen compositions, e.g., sliced potatoes and red tomatoes. Thanks to prompt tuning on large pre-trained visual-language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives, i.e., state and object, are not properly addressed in the existing CLIP-based CZSL literature. In this paper, we propose a model by prompting the language-informed distribution, a.k.a. PLID, for the CZSL task. Specifically, the PLID leverages pre-trained large language models (LLMs) to 1) formulate the language-informed class distributions, which are diverse and informative, and 2) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module and a stochastic logit mixup (SLM) strategy are proposed to dynamically fuse the decisions from the compositional and the primitive logit space. Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution, which leads to better zero-shot generalization. Experimental results on the MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of PLID over prior arts. The code and models will be publicly released.

## 1 Introduction

Compositional visual recognition is a fundamental characteristic of human intelligence (Lake et al., 2017), but it is challenging for modern deep learning systems. For example, humans can easily recognize unseen sliced tomatoes after seeing sliced potatoes and red tomatoes. Such a compositional zero-shot learning (CZSL) capability is valuable in that novel visual concepts from a huge combinatorial semantic space could be recognized without "seeing" any of their training data. For example, the C-GQA (Naeem et al., 2021) dataset contains 413 states and 674 objects. This implies a total of at least 278K compositional classes in an open world, while only 2% of them are accessible in training. Therefore, CZSL can significantly reduce the need for large-scale training data. Traditional vision-based methods either directly learn the visual feature of compositions, or try to first decompose the visual data into representations of simple primitives, i.e., states and objects, and then learn to re-compose the compositions (Misra et al., 2017; Atzmon et al., 2020; Zou et al., 2020; Huynh & Elhamifar, 2020; Karthik et al., 2022; Tokmakov et al., 2019; Naeem et al., 2021; Zhang et al., 2022b; Mancini et al., 2021; Li et al., 2022). Thanks to recent large pre-trained vision-language models (VLMs) such as CLIP (Radford et al., 2021), recent state-of-the-art CZSL methods have been developed (Nayak et al., 2023; Lu et al., 2023; Xu et al., 2022; Huang et al., 2023). For instance, CSP (Nayak et al., 2023) inherits the hard prompt template of CLIP, i.e., "a photo of [state][object]", where only the embeddings of the state-object pairs are trained.
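As a minimal sketch of the CSP-style prompting just described (our own illustration, not CSP's actual code), the prompt can be viewed as a frozen context followed by two trainable primitive embeddings; feeding the resulting token sequence through the frozen CLIP text encoder and comparing with image features by cosine similarity then yields the class logits. The class and argument names below are our own assumptions.

```python
import torch
import torch.nn as nn

class CSPStylePrompt(nn.Module):
    """Hard prompt "a photo of [state][object]" with trainable primitive embeddings.

    ctx_embed holds the frozen token embeddings of "a photo of"; only the
    state/object vectors are optimized, mirroring the CSP recipe above."""

    def __init__(self, n_states: int, n_objects: int, dim: int, ctx_embed: torch.Tensor):
        super().__init__()
        self.register_buffer("ctx", ctx_embed)                 # (L_ctx, dim), frozen
        self.state_emb = nn.Parameter(0.02 * torch.randn(n_states, dim))
        self.object_emb = nn.Parameter(0.02 * torch.randn(n_objects, dim))

    def forward(self, s_idx: torch.Tensor, o_idx: torch.Tensor) -> torch.Tensor:
        B = s_idx.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(B, -1, -1)          # (B, L_ctx, dim)
        s = self.state_emb[s_idx].unsqueeze(1)                 # (B, 1, dim)
        o = self.object_emb[o_idx].unsqueeze(1)                # (B, 1, dim)
        # Token sequence "a photo of [s][o]" for a CLIP-like text encoder.
        return torch.cat([ctx, s, o], dim=1)                   # (B, L_ctx + 2, dim)
```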
The following methods (Lu et al., 2023; Xu et al., 2022; Huang et al., 2023) use the soft prompt introduced in CoOp (Zhou et al., 2022b), where the embeddings of the prompt template are jointly optimized, leading to better CZSL performance. The impressive performance of CLIP-based CZSL methods benefits from the sufficiently good feature alignment between the image and text modalities, and the prompting techniques for adapting the aligned features to recognizing compositional classes. Despite the success of existing CLIP-based methods, we identify several key considerations in prompting the pre-trained CLIP for better CZSL modeling. First, the diversity and informativeness of prompts are both important for distinguishing between compositional classes. CZSL can be treated as zero-shot learning on fine-grained categories, which requires a fine-grained context to prompt the CLIP model (Radford et al., 2021; Lu et al., 2022). However, to contextualize a class with fine granularity, the hard prompt in Radford et al. (2021) suffers from the heuristic design of prompt templates, and a single prompt for each class lacks the diversity to capture the intra-class variance of visual data (Fig. 1a). Though ProDA (Lu et al., 2022) proposes to learn a collection of prompts that formulate a class-specific distribution to address diversity, the lack of language informativeness in their prompts limits their performance on fine-grained compositional categories. Second, the entanglement between visual primitives, e.g., red and tomatoes in Fig. 1b, incurs difficulty in learning decomposable visual representations that are useful for compositional generalization (Liu et al., 2022; Karthik et al., 2022), while such a capability is missing in (Nayak et al., 2023; Xu et al., 2022). Though more recent works (Lu et al., 2023; Huang et al., 2023) learn to decompose the primitives and consider the re-composed compositional predictions, their language-only decomposition and probability-level mixup potentially limit the generalizability in the open world. In this paper, we propose a novel CLIP-based method for the CZSL task by prompting the language-informed distributions (**PLID**) over both the compositional and primitive categories. To learn diverse and informative textual class representations, **PLID** leverages off-the-shelf large language models (LLMs) to build the class-specific distributions and to enhance the class embeddings. Furthermore, we propose a visual-language primitive decomposition (VLPD) module to decompose the image data into simple primitives. Eventually, the compositional classification is enhanced by our stochastic logit mixup (SLM), which takes the merits of both the compositional and primitive recognitions. The proposed **PLID** shows state-of-the-art performance on CZSL benchmarks such as MIT-States (Isola et al., 2015), UT-Zappos (Yu & Grauman, 2014), and C-GQA (Naeem et al., 2021). Note that our method is orthogonal to the existing hard prompt (Radford et al., 2021), soft prompt tuning (Zhou et al., 2022b), and prompt distribution learning (Lu et al., 2022; Kwon et al., 2023; Liu et al., 2023; Derakhshani et al., 2023). We advocate prompting the distribution of informative LLM-based class descriptions. From a classification perspective, this is grounded in classification-by-description (Menon & Vondrick, 2023; Maniparambil et al., 2023; Yan et al., 2023; He et al., 2023): LLM-generated text enables more informative class representations.
Compared to the deterministic soft/hard prompts mentioned above, our distribution modeling can capture the intra-class diversity for better zero-shot generalization. Compared to the existing prompt distribution learning approaches, the class context is more linguistically interpretable and provides fine-grained descriptive information about the class. Our method is also parameter-efficient, without the need to optimize a large collection of prompts. Specific to the CZSL task, the class embeddings enhanced by LLM descriptions enable visual-language primitive decomposition and decision fusion in both the compositional and primitive spaces, which eventually benefits the generalization to the unseen. In summary, the contributions are as follows. a) We develop a **PLID** method that advocates prompting the language-informed distribution for compositional zero-shot learning, which is orthogonal to existing soft/hard and distributional prompt learning. b) We propose primitive decomposition and stochastic logit mixup to fuse the classification decisions from compositional and primitive predictions. c) We empirically show that **PLID** achieves superior performance to prior arts in both the closed-world and open-world settings on the MIT-States, UT-Zappos, and C-GQA datasets.

## 2 RELATED WORK

**Prompt Learning in VLM** Vision-Language Models (VLMs) such as CLIP (Radford et al., 2021), pre-trained on web-scale datasets, have recently gained substantial attention for their strong zero-shot recognition capability on various downstream tasks. Such a capability is typically achieved by performing prompt engineering to adapt pre-trained VLMs. Early prompting techniques such as the hard prompt in CLIP use the heuristic template "a photo of [CLS]" as the textual input. Recently, the soft prompt tuning methods in CoOp (Zhou et al., 2022b), CoCoOp (Zhou et al., 2022a), and ResPT (Razdaibiedina et al., 2023), which use learnable embeddings as the textual context of class names, significantly improved the model adaptation performance. This technique is further utilized in MaPLe (Khattak et al., 2023), which enables multi-modal prompt learning for both image and text. However, the prompts of these methods are deterministic and lack the diversity to capture the appearance variety in fine-grained visual data, so they are prone to overfitting the training data. To handle this issue, ProDA (Lu et al., 2022) explicitly introduces a collection of soft prompts to construct the class-specific Gaussian distribution, which results in better zero-shot performance and inspires the recent success of PPL (Kwon et al., 2023) in the dense prediction task. Similarly, PBPrompt (Liu et al., 2023) uses neural networks to predict the class-specific prompt distribution and utilizes optimal transport to align the stochastically sampled soft prompts and image patch tokens. The recent work (Derakhshani et al., 2023) assumes that the latent embedding of the prompt input follows a Gaussian prior and adopts variational inference to learn the latent distribution. In this paper, in order to combine the informativeness of hard prompts and the diversity of distributional modeling, we adopt soft prompts to adapt the distributions supported by LLM-generated class descriptions.

**Compositional Zero-Shot Learning (CZSL)** For a long period, the CZSL task has been studied from a vision-based perspective in the literature.
They either directly learn the compositional visual features or disentangle the visual features into simple primitives, i.e., states and objects. For example, (Nagarajan & Grauman, 2018; Li et al., 2020; Naeem et al., 2021) perform a direct classification by projecting the compositional visual features into a common feature space, and (Lu et al., 2016; Misra et al., 2017; Atzmon et al., 2020; Huynh & Elhamifar, 2020; Zou et al., 2020; Karthik et al., 2022; Liu et al., 2022) decompose the visual feature into simple primitives so that compositional recognition can be achieved by learning to recompose from the primitives. Though the recent large-scale pre-trained CLIP model shows impressive zero-shot capability, it has been found to struggle with compositional reasoning (Ma et al., 2023; Yuksekgonul et al., 2023; Lewis et al., 2022). Thanks to recent prompt learning (Zhou et al., 2022b), the CZSL task has been dominated by CLIP-based approaches (Nayak et al., 2023; Lu et al., 2023; Xu et al., 2022; Huang et al., 2023). The common idea is to prompt the frozen CLIP model to separately learn the textual embeddings of simple primitives, which empirically show strong compositionality for zero-shot generalization. However, these methods tend to overfit due to the lack of prompt diversity or language informativeness. In this paper, based on the frozen CLIP, we leverage LLMs to enhance the compositionality of text embeddings and propose to decompose both the image and text modalities for better compositional recognition in an open world.

## 3 PRELIMINARIES

**CZSL Task Formulation** The CZSL task aims to recognize images of a compositional category $y \in C$, where the semantic space $C$ is a Cartesian product between the state space $S = \{s_1, \ldots, s_{|S|}\}$ and object space $O = \{o_1, \ldots, o_{|O|}\}$, i.e., $C = S \times O$. For example, as shown in Fig. 1, a model trained on images of red apple and sliced tomatoes needs to additionally recognize an image of sliced apple. In training, only a set of seen compositions is available. In closed-world testing, the model needs to recognize images from both the seen compositions in $C^{(s)}$ and the unseen compositions in $C^{(u)}$ that are assumed to be feasible, where the cardinality $|C^{(s)} \cup C^{(u)}| \ll |C|$ since most of the compositions in $C$ are practically not feasible. In open-world testing, the model needs to recognize images given any composition in $C$.

**VLMs for CZSL** Large pre-trained VLMs such as CLIP (Radford et al., 2021) have recently been utilized by CSP (Nayak et al., 2023) for the CZSL task. The core idea of CSP is to represent the text embeddings of states in $S$ and objects in $O$ as learnable parameters and contextualize them with the hard prompt template "a photo of [s][o]" as the input of the CLIP text encoder, where $[s] \in S$ and $[o] \in O$.

Figure 2: Overview of PLID. The CZSL task is formulated to align the feature of image \( x \) with the learnable text features of compositional class \( y = (s,o) \) based on the frozen CLIP (\( E_T \) and \( E_V \)). We propose the language-informed distributions (LID), which are constructed from the LLM-generated class descriptions and the soft prompts \( p_{1:L} \) for each state-object pair \( (s,o) \). The features of the image and text are enhanced by text and visual feature enhancement (TFE and VFE).
Furthermore, we propose the visual-language primitive decomposition (VLPD) module to recompose the compositional logits, which are further fused with the compositional logit between \( t_y \) and \( v \) by our stochastic logit mixup (SLM). With the compositional and primitive recognition, our model is jointly trained by the loss functions \( L_y(x,y), L_s(x,s), \) and \( L_o(x,o) \).

Given an image \( x \), by using the cosine similarity (\( \cos \)) as the logit, the class probability of the composition \( y \) is defined as \( p_\theta(y|x) = \text{softmax}(\cos(v,t_y)) \), where \( \theta \) are the \( |S| + |O| \) learnable parameters, and \( v \) and \( t_y \) are the image feature and class text embedding, respectively. In training, the prediction \( p_\theta(\hat{y}|x) \) is supervised by a multi-class cross-entropy loss. In CZSL testing, a test image is recognized by finding the compositional class \( c \in C \) that has the maximum \( \cos(v,t_c) \). The CSP method is simple, parameter-efficient, and largely outperforms traditional approaches. However, due to the lack of diversity and informativeness in prompting, the zero-shot capability of CLIP is not fully exploited by CSP for the CZSL task.

## 4 PROPOSED METHOD

**Overview** Fig. 2 shows an overview of the PLID. The basic idea is to use LLMs to generate sentence-level descriptions for each compositional class, and learn to prompt the class-wise text distributions (supported by the descriptions) to be aligned with the image data. Besides, we introduce visual-language primitive decomposition (VLPD) and stochastic logit mixup (SLM) to enable recognition at both the compositional and primitive levels. In testing, an image is recognized by fusing the decisions from the directly predicted and the recomposed compositions.

### 4.1 PROMPTING LANGUAGE-INFORMED DISTRIBUTION

**Motivation** To adapt the large pre-trained CLIP (Radford et al., 2021) to downstream tasks, recent distributional prompt learning (Lu et al., 2022; Kwon et al., 2023; Liu et al., 2023; Derakhshani et al., 2023) shows the importance of context diversity by distribution modeling for strong generalization. Motivated by the inherent fine granularity of compositional recognition in the CZSL task, we argue that not only the context diversity but also the context informativeness by language modeling are important factors in adapting CLIP to the zero-shot learning task. The insight behind this is that sentence-level descriptions can contextualize compositional classes in a more fine-grained manner than the prior arts. Therefore, we propose to address the two factors by learning to Prompt the Language-Informed Distributions (PLID) for the CZSL task.

**Compositional Class Description** To generate diverse and informative text descriptions for each compositional class, we adopt a similar way as (Menon & Vondrick, 2023) by prompting an LLM that shows instruction-following capability. An example below shows the format of the LLM instruction.

Keywords: sliced, potato, picture
Output: The picture features a beautifully arranged plate of thinly sliced potatoes.
###

See Appendix B for more details. For each composition \( y = (s, o) \), we generate \( M \) descriptions denoted as \( S^{(y)} = \{S_1^{(y)}, \ldots, S_M^{(y)}\} \), where \( S_m^{(y)} \) is a linguistically complete sentence.
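The description-generation step just described might be implemented as follows. This is a minimal sketch under our own assumptions — the model choice, sampling parameters, and the `describe_composition` helper are ours, and the authors' exact instruction format and setup are in their Appendix B — using an OPT model, since the experiments later consider T5 and OPT-1.3B.

```python
from transformers import pipeline

# We assume OPT-1.3B here for illustration; the paper also experiments with T5.
generator = pipeline("text-generation", model="facebook/opt-1.3b")

def describe_composition(state: str, obj: str, M: int = 64):
    """Generate M sentence-level descriptions S^(y) for the composition (state, obj)."""
    prompt = f"Keywords: {state}, {obj}, picture\nOutput:"
    outputs = generator(
        prompt,
        max_new_tokens=40,
        num_return_sequences=M,
        do_sample=True,   # sampling provides the diversity the distribution needs
        top_p=0.9,
    )
    # Strip the prompt prefix so only the generated description remains.
    return [out["generated_text"][len(prompt):].strip() for out in outputs]

descriptions = describe_composition("sliced", "potato")  # |S^(y)| = M sentences
```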
Different from (Menon & Vondrick, 2023), which aims to interpret zero-shot recognition via attribute phrases from LLMs, we utilize LLM-based sentence-level descriptions in the CZSL task for two benefits: 1) they provide diverse and informative textual context for modeling the class distributions that capture the intra-class variance, and 2) they enhance the class embedding with fine-grained descriptive information.

**Language-Informed Distribution (LID)** For both the image and text modalities, we use the frozen CLIP model and learnable feature enhancement modules to represent the visual and language features, which are also adopted in the existing CZSL literature (Lu et al., 2023; Huang et al., 2023). Specifically, for the text modality, each composition \( y \) is tokenized and embedded by the CLIP embedding layer and further prompted by concatenation with learnable context vectors, i.e., "\( p_1 \ldots p_L[s][o] \)", where \( p_{1:L} \) is initialized by "a photo of" and shared across all classes. Passing through the frozen CLIP text encoder \( E_T \), the embedding of class \( y \) is \( q_y = E_T([p_1] \ldots [p_L][s][o]) \), where \( q_y \in \mathbb{R}^d \). Following the CZSL literature (Xu et al., 2022; Lu et al., 2023), here the soft prompt \( p_{1:L} \) and primitive embeddings \([s][o]\) are learnable while \( E_T \) is frozen in training. To simultaneously address the lack of diversity and informativeness of the soft prompts, we propose to formulate the class-specific distributions supported by the texts \( S^{(y)} \) and learn to prompt these distributions. Specifically, we encode \( S^{(y)} \) by the frozen CLIP text encoder: \( D^{(y)} = E_T(S^{(y)}) \), where \( D^{(y)} \in \mathbb{R}^{M \times d} \). Then, we use \( D^{(y)} \) to enhance \( q_y \) by \( t_y = \Psi_{TFE}(q_y, D^{(y)}) \), where \( \Psi_{TFE} \) is the text feature enhancement (TFE) implemented by cross-attention (Vaswani et al., 2017). Similarly, given an image \( x \), to mitigate the loss of fine-grained cues, we augment it with \( N \) views to be \( X = \{x^{(1)}, \ldots, x^{(N)}\} \). Passing through the frozen CLIP visual encoder \( E_V \), the feature of \( x \) is enhanced by \( v = \Psi_{VFE}(E_V(x), E_V(X)) \), where \( \Psi_{VFE} \) is the visual feature enhancement (VFE) by cross-attention. We treat the enhanced text feature \( t_y \) of class \( y \) as the class mean and \( t_y + D^{(y)} \) as the distribution support points (DSP) that follow the Gaussian \( \mathcal{N}(t_y, \Sigma_y) \). The motivation of \( t_y + D^{(y)} \) is to allow the DSP to move in the \( d \)-dimensional feature space during training, since \( t_y \) is trainable while \( D^{(y)} \) is pre-computed by the frozen encoder. For all \( |C^{(s)}| \) (denoted as \( C \)) seen compositional classes, we build joint Gaussian distributions \( \mathcal{N}(\mu_{1:C}, \Sigma_{1:C}) \) similar to ProDA (Lu et al., 2022), where the means \( \mu_{1:C} \in \mathbb{R}^{C \times d} \) are given by \( t_y \) over the \( C \) classes, and the covariance \( \Sigma_{1:C} \in \mathbb{R}^{d \times C \times C} \) is defined across the \( C \) classes for each feature dimension from the DSP.

**Remark**: Compared to ProDA (Lu et al., 2022), which learns a collection of non-informative prompts, our DSPs are language-informed by \( D^{(y)} \), which provides more fine-grained descriptive information to help recognition and decomposition. Besides, our method is more parameter-efficient than ProDA since we only have a single soft prompt to learn.
This is especially important for the CZSL task, where there is a huge number of compositional classes. Lastly, we highlight the benefit of performing the intra- and inter-class covariance optimization induced by the learning objective of distribution modeling, which will be introduced below.

**Learning Objective** Given the visual feature \( v \in \mathbb{R}^d \) of image \( x \) and the text embeddings \( t_{1:C} \) from the class-wise joint distributions \( \mathcal{N}(\mu_{1:C}, \Sigma_{1:C}) \), according to (Lu et al., 2022), minimizing the cross-entropy loss is equivalent to minimizing the upper bound of the negative log-likelihood (NLL):
\[
\text{NLL}(x, y) = -\log \mathbb{E}_{t_{1:C}} p(y|v, t_{1:C}) \leq -\log \frac{\exp(h_y/\tau)}{\sum_{k=1}^{C} \exp((h_k + h_{k,y}^{(m)})/\tau)} := \mathcal{L}_y(x, y),
\]
where the compositional logit \( h_y = \cos(v, t_y) \), the pairwise margin \( h_{k,y}^{(m)} = v^\top A_{k,y} v/(2\tau) \), and \( A \in \mathbb{R}^{d \times C \times C} \) is given by \( A_{k,y} = \Sigma_{kk} - \Sigma_{ky} - \Sigma_{yk} + \Sigma_{yy} \). The covariance \( A_{k,y} \) indicates the correlation between the \( k \)-th of the \( C \) classes and the target class \( y \) on each of the \( d \) feature dimensions. The insight of minimizing \( \mathcal{L}_y(x, y) \) is illustrated in Fig. 3: it encourages minimizing the intra-class variance indicated by \( \Sigma_{yy} \) and \( \Sigma_{kk} \), and maximizing the inter-class separability indicated by \( \Sigma_{ky} \) and \( \Sigma_{yk} \). In Appendix C, we discuss our workaround by covariance sharing when \( C \) is too large to compute \( A \).

### 4.2 Primitives Decomposition and Decision Fusion

**Motivation** Considering the fundamental challenge in the CZSL task that the visual primitives are inherently entangled in an image, an unseen composition in testing can hardly be identified if its object (or its state) embedding is overfitted to the visual data of seen compositions. To this end, it is better to inherit the benefits of the decompose-recompose paradigm (Zou et al., 2020; Karthik et al., 2022; Liu et al., 2022) by decomposing visual features into simple primitives, i.e., states and objects, from which the recomposed decision can be leveraged for zero-shot recognition. Thanks to the compositionality of CLIP (Wolff et al., 2023; Trager et al., 2023), this can be achieved by the visual-language primitive decomposition (VLPD), which we explain below (see Fig. 4). Based on VLPD, we propose the stochastic logit mixup to fuse the directly learned compositions and the recomposed ones.

**VLPD** Specifically, we use two parallel neural networks $f_s$ and $f_o$ to decompose $v$ into the state visual feature $f_s(v)$ and object visual feature $f_o(v)$, respectively, under the supervision of text features. To get the supervision, we group $t_y$ over the subset $\mathcal{Y}_s$, in which all compositions share the same given object $o$ (see vertical ellipses in Fig. 4), and group $t_y$ over the subset $\mathcal{Y}_o$, in which all compositions share the same given state $s$ (see horizontal ellipses in Fig. 4).
Thus, given a state $s$ and an object $o$, the predicted state logit $h_s$ and object logit $h_o$ are computed by
$$h_s = \cos \left( f_s(v), \frac{1}{|\mathcal{Y}_s|} \sum_{y \in \mathcal{Y}_s} t_y \right), \quad h_o = \cos \left( f_o(v), \frac{1}{|\mathcal{Y}_o|} \sum_{y \in \mathcal{Y}_o} t_y \right).$$
Note that we use $f_s$ and $f_o$ to decompose the visual feature $v$, which is different from DFSP (Lu et al., 2023), which only decomposes the compositional logits. In experiments, we show the superiority of performing both visual and language decomposition in Table 5. Following the spirit of distribution modeling, we also introduce distributions over the state and object categories, where the corresponding DSP, denoted as $D^{(s)}$ and $D^{(o)}$, are obtained by grouping $D^{(y)}$ over $\mathcal{Y}_s$ and $\mathcal{Y}_o$, respectively. This leads to the following upper-bounded cross-entropy losses:
$$L_s(x, s) = -\log \frac{\exp(h_s/\tau)}{\sum_{k=1}^{|S|} \exp((h_k + h_{k,s}^{(m)})/\tau)}, \quad L_o(x, o) = -\log \frac{\exp(h_o/\tau)}{\sum_{k=1}^{|O|} \exp((h_k + h_{k,o}^{(m)})/\tau)},$$
where $h_{k,s}^{(m)}$ and $h_{k,o}^{(m)}$ are determined in the same way as $h_{k,y}^{(m)}$ in Eq. (1). See details in Appendix D. With the individual $f_s$ and $f_o$, it is safe to assume $p(y|v) = p(s|v) \cdot p(o|v)$, which induces $p(y|v) \propto \exp((h_s + h_o)/\tau)$. Therefore, the recomposed logit matrix $H^{(rc)} \in \mathbb{R}^{|S| \times |O|}$ is a Cartesian sum between $h^{(s)} \in \mathbb{R}^{|S|}$ and $h^{(o)} \in \mathbb{R}^{|O|}$, i.e., $H^{(rc)} = h^{(s)} \oplus h^{(o)\top}$, where $h^{(s)}$ contains all state logits and $h^{(o)}$ contains all object logits (see the red and blue squares in Fig. 4, respectively).

**Stochastic Logit Mixup** Given the recomposed logit $h^{(rc)}_y \in H^{(rc)}$ and the directly learned compositional logit $h_y$, we propose a stochastic logit mixup (SLM) method for decision fusion by sampling a coefficient $\lambda$ from a Beta prior distribution:
$$\tilde{h}_y = (1 - \lambda)h_y + \lambda h^{(rc)}_y, \quad \lambda \sim \text{Beta}(a, b),$$
where $(a, b)$ are hyperparameters indicating the prior preference for each decision. In training, we replace $h_y$ and $h_k$ in Eq. (1) with the mixed logits $\tilde{h}_y$ and $\tilde{h}_k$, respectively. In testing, we use the expectation of the Beta distribution, which is $a/(a + b)$. The insight behind the SLM is that the Beta distribution encodes a prior preference between $h_y$ and $h^{(rc)}_y$: it provides the flexibility to decide which compositional decision to trust, and the stochasticity of the coefficient $\lambda$ inherently introduces a regularization effect in training (Carratino et al., 2022). Moreover, compared to the softmax probability mixup (Huang et al., 2023), our logit mixup avoids the limitation of softmax normalization over a huge number of compositional classes, namely that rich information about class relationships is lost after softmax normalization (Bang et al., 2022). Such class relationships are even more important in the CZSL problem, as indicated in (Naeem et al., 2021).
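To make the recomposition and fusion concrete, here is a minimal sketch in PyTorch; the function names are our own assumptions, and the Beta(1, 9) default mirrors the prior adopted in the experiments below.

```python
import torch

def recompose_logits(h_state: torch.Tensor, h_object: torch.Tensor) -> torch.Tensor:
    """Cartesian sum H^(rc) = h^(s) (+) h^(o)^T, of shape (|S|, |O|)."""
    return h_state[:, None] + h_object[None, :]

def stochastic_logit_mixup(h_comp, h_recomp, a=1.0, b=9.0, training=True):
    """Fuse the direct and recomposed compositional logits.

    In training, lambda ~ Beta(a, b); at test time the Beta mean a/(a+b) is used,
    matching the description of SLM above."""
    if training:
        lam = torch.distributions.Beta(a, b).sample()
    else:
        lam = a / (a + b)
    return (1.0 - lam) * h_comp + lam * h_recomp
```

The mixed logits produced here would then stand in for \( h_y \) and \( h_k \) in the cross-entropy of Eq. (1), as stated in the text.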
| Method | MIT-States | | | | UT-Zappos | | | | C-GQA | | | |
| | S | U | H | AUC | S | U | H | AUC | S | U | H | AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Closed-world** | | | | | | | | | | | | |
| CLIP (Radford et al., 2021) | 30.2 | 46.0 | 26.1 | 11.0 | 15.8 | 49.1 | 15.6 | 5.0 | 7.5 | 25.0 | 8.6 | 1.4 |
| CoOp (Zhou et al., 2022b) | 34.4 | 47.6 | 29.8 | 13.5 | 52.1 | 49.3 | 34.6 | 18.8 | 20.5 | 26.8 | 17.1 | 4.4 |
| ProDA\(^1\) (Lu et al., 2022) | 37.4 | 51.7 | 32.7 | 16.1 | 63.7 | 60.7 | 47.6 | 32.7 | — | — | — | — |
| CSP (Nayak et al., 2023) | 46.6 | 49.9 | 36.3 | 19.4 | 64.2 | 66.2 | 46.6 | 33.0 | 28.8 | 26.8 | 20.5 | 6.2 |
| PCVL (Xu et al., 2022) | 48.5 | 47.2 | 35.3 | 18.3 | 64.4 | 64.0 | 46.1 | 32.2 | — | — | — | — |
| HPL (Wang et al., 2023) | 47.5 | 50.6 | 37.3 | 20.2 | 63.0 | 68.8 | 48.2 | 35.0 | 30.8 | 28.4 | 22.4 | 7.2 |
| DFSP (Lu et al., 2023) | 46.9 | 52.0 | 37.3 | 20.6 | 66.7 | 71.7 | 47.2 | 36.0 | 38.2 | 32.0 | 27.1 | 10.5 |
| **PLID** | **49.7** | **52.4** | **39.0** | **22.1** | **67.3** | **68.8** | **52.4** | **38.7** | **38.8** | **33.0** | **27.9** | **11.0** |
| **Open-world** | | | | | | | | | | | | |
| CLIP (Radford et al., 2021) | 30.1 | 14.3 | 12.8 | 3.0 | 15.7 | 20.6 | 11.2 | 2.2 | 7.5 | 4.6 | 4.0 | 0.3 |
| CoOp (Zhou et al., 2022b) | 34.6 | 9.3 | 12.3 | 2.8 | 52.1 | 31.5 | 28.9 | 13.2 | 21.0 | 4.6 | 5.5 | 0.7 |
| ProDA\(^1\) (Lu et al., 2022) | 37.5 | 18.3 | 17.3 | 5.1 | 63.9 | 34.6 | 34.3 | 18.4 | — | — | — | — |
| CSP (Nayak et al., 2023) | 46.3 | 15.7 | 17.4 | 5.7 | 64.1 | 44.1 | 38.9 | 22.7 | 28.7 | 5.2 | 6.9 | 1.2 |
| PCVL (Xu et al., 2022) | 48.5 | 16.0 | 17.7 | 6.1 | 64.6 | 44.0 | 37.1 | 21.6 | — | — | — | — |
| HPL (Wang et al., 2023) | 46.4 | **18.9** | 19.8 | 6.9 | 63.4 | 48.1 | 40.2 | 24.6 | 30.1 | 5.8 | 7.5 | 1.4 |
| DFSP (Lu et al., 2023) | 47.5 | 18.5 | 19.3 | 6.8 | 66.8 | **60.0** | **44.0** | **30.3** | 38.3 | 7.2 | 10.4 | 2.4 |
| **PLID** | **49.1** | **18.7** | **20.4** | **7.3** | **67.6** | **55.5** | **46.6** | **30.8** | **39.1** | **7.5** | **10.6** | **2.5** |

Table 1: CZSL results in the closed- and open-world settings on three datasets. Baseline results are from the published literature; PCVL was not evaluated on the C-GQA dataset, so we use "—" instead.

## 5 EXPERIMENTS

### Datasets and Evaluation
We perform experiments on three CZSL datasets, i.e., MIT-States (Isola et al., 2015), UT-Zappos (Yu & Grauman, 2014), and C-GQA (Naeem et al., 2021), following the standard splitting protocols in the CZSL literature (Purushwalkam et al., 2019; Nayak et al., 2023; Lu et al., 2023). See dataset details in Appendix E. We report metrics in both the closed-world (CW) and open-world (OW) settings, including the best seen accuracy (S), the best unseen accuracy (U), the best harmonic mean (H) between the seen and unseen accuracy, and the area under the curve (AUC) of unseen versus seen accuracy. For OW evaluation, following CSP (Nayak et al., 2023), we adopt the feasibility calibration by GloVe (Pennington et al., 2014) to filter out infeasible compositions.

### Implementation Details
We implement **PLID** based on the CSP codebase in PyTorch. The CLIP architecture ViT-L/14 is used by default. Unless otherwise mentioned, we generate \(M = 64\) text descriptions, augment each image with \(N = 8\) views, and adopt Beta(1, 9) as the prior. The dropout rates of TFE and VFE are set to 0.5. We use a single NVIDIA 6000Ada GPU for training and testing.
Following (Lu et al., 2023), we use the Adam optimizer with base learning rate 5e-5, and decay it stepwise by a factor of 0.5 every 5 training epochs for a total of 20 epochs. Other details are in Appendix E.

### 5.1 Main Results

The results are reported in Table 1. We compare with the CZSL baselines that are developed on the same frozen CLIP model. The table shows that under both the closed-world and open-world test settings, our proposed **PLID** method achieves the best performance on most metrics across the three datasets. Note that ProDA (Lu et al., 2022) also formulates class-wise Gaussian distributions to address the intra-class diversity, but it only outperforms CLIP and CoOp across metrics. This indicates the importance of both diversity and informativeness for the CZSL task. On the UT-Zappos dataset, **PLID** outperforms DFSP in terms of S, H, and AUC by 0.6%, 5.2%, and 2.7% respectively, while being inferior to DFSP on the best-unseen metric. The potential reason is that DFSP fuses the text features into the image features, which better preserves the generalizability of CLIP for the small downstream UT-Zappos dataset. Note that the HPL method uses prompt learning and recognition at both the compositional and primitive levels, but it performs only slightly better than CSP and considerably worse than our method, indicating that traditional prompt learning helps but is not enough to adapt the CLIP model to the CZSL task.

---
1 ProDA is re-implemented since it was originally for zero-shot learning. Limited by GPU memory, ProDA is not applicable to the C-GQA dataset, which consists of more than 278K compositional classes.

| | LID | TFE | VFE | OPT | VLPD | SLM | $H_{cw}$ | $AUC_{cw}$ | $H_{ow}$ | $AUC_{ow}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| (a) | | | | | | | 35.41 | 18.56 | 17.37 | 5.56 |
| (b) | ✓ | | | | | | 37.06 | 20.43 | 18.65 | 6.50 |
| (c) | ✓ | ✓ | | | | | 37.76 | 21.07 | 19.05 | 6.62 |
| (d) | ✓ | ✓ | ✓ | | | | 37.87 | 21.09 | 19.70 | 6.95 |
| (e) | ✓ | ✓ | ✓ | ✓ | | | 38.80 | 21.67 | 19.61 | 7.01 |
| (f) | ✓ | ✓ | ✓ | ✓ | ✓ | | 38.42 | 21.69 | 20.24 | 7.31 |
| (g) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 38.97 | 22.12 | 20.41 | 7.34 |

Table 2: **Ablation study.** (a): the baseline that uses mean pooling of text embeddings from T5-generated sentences. (b): add distribution modeling. (c): change the mean pooling to cross-attention. (d): augment images followed by cross-attention aggregation. (e): change the T5-base LLM to OPT-1.3B. (f): add VLPD followed by fixed logit fusion. (g): change the fusion to a stochastic manner, which yields our full PLID.

| LLM | MIT-States | | | | UT-Zappos | | | | C-GQA | | | |
| | $H_{cw}$ | $AUC_{cw}$ | $H_{ow}$ | $AUC_{ow}$ | $H_{cw}$ | $AUC_{cw}$ | $H_{ow}$ | $AUC_{ow}$ | $H_{cw}$ | $AUC_{cw}$ | $H_{ow}$ | $AUC_{ow}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T5 | 38.41 | 21.53 | 20.46 | 7.34 | 54.76 | 40.18 | 44.18 | 28.47 | 26.94 | 10.65 | 9.77 | 2.35 |
| OPT | 38.97 | 22.12 | 20.41 | 7.34 | 52.38 | 38.67 | 46.61 | 30.84 | 27.87 | 11.04 | 10.55 | 2.54 |

Table 3: Effect of LLMs on three CZSL datasets.

### 5.2 Model Analysis

#### Ablation Study
In Table 2, we show the contribution of the major components in the PLID model. It is clear that all components are beneficial. Here we highlight some important observations: (1) Our LID method significantly improves the performance compared to the baseline (a) and is much better than ProDA (20.43% vs. 16.1% of $AUC_{cw}$) when referring to Table 1.
This implies that modeling the distribution in the way of ProDA alone is not sufficient; language informativeness is critical and preferred for the CZSL task. (2) Rows (c)(d)(e) show that TFE, VFE, and OPT-1.3B can further achieve some performance gains. (3) Rows (f)(g) show that VLPD benefits more in the open-world setting, while the SLM contributes more in the closed-world setting.

#### Effect of LLM
In Table 3, we analyze the choice of LLMs by comparing PLID using the pre-trained T5 (Raffel et al., 2020a) and OPT (Zhang et al., 2022a). It shows that the performance varies across CZSL datasets. Since the quality of the texts generated by OPT is much better than that of T5 (see examples in Appendix B), the results imply that higher text quality on the large C-GQA dataset leads to better CZSL performance. Besides, on the UT-Zappos dataset, the better OPT does not show better closed-world performance. The reason could be that UT-Zappos is too small and its commercial shoe images do not exhibit diverse visual backgrounds.

#### Effect of LID
In Table 4, we further investigate at which semantic level the language-informed distribution (LID) should be applied. Denote the Gaussian distributions on state, object, and composition as $\mathcal{N}_s$, $\mathcal{N}_o$, and $\mathcal{N}_y$, respectively. The results in Table 4 clearly show the superiority of applying LID at all three semantic levels. This indicates the generality of language-informed distributions to many potential zero-shot or open-vocabulary recognition problems.

#### Design Choice of VLPD
In Table 5, we validate the design choices of VLPD, including no primitive decomposition, text-only decomposition, and our decomposition of both visual and language primitives (VLPD). The results show the clear advantage of our VLPD design choice. Note that DFSP also has primitive decomposition, but only on the text modality. Our better performance thus indicates the need for decomposition on both the vision and language modalities.

| text | image | $H_{cw}$ | $AUC_{cw}$ | $H_{ow}$ | $AUC_{ow}$ |
|---|---|---|---|---|---|
| | | 37.94 | 20.98 | 19.67 | 6.98 |
| ✓ | | 38.40 | 21.31 | 19.99 | 7.13 |
| ✓ | ✓ | 38.97 | 22.12 | 20.41 | 7.34 |

Table 5: Effect of VLPD. The three rows indicate no decomposition, text-only decomposition, and decomposition of both (full VLPD).

#### Hyperparameters
In Fig. 5, we quantitatively show the impact of the number of generated text descriptions $M$ and the number of augmented image views $N$. It shows that the best performance is achieved when $M = 64$ and $N = 8$. We note that more augmented image views slightly decrease the performance, which could be attributed to overfitting of the seen compositions. In Fig. 6, we show the impact of the Beta prior parameters $(a, b)$. We set them to $(1, 1)$ for random sampling, $(1, 9)$ for preference to the composition, $(9, 1)$ for preference to the re-composition, and $(5, 5)$ for equal preference, respectively. It reveals that trusting the directly learned composition more, via $\text{Beta}(1, 9)$, achieves the best results.

#### Qualitative Analysis
We use t-SNE to visualize the generated text embeddings $D$ and the learned DSP from our $\text{PLID}$ model in Fig. 7, where the same set of 10 compositional classes is randomly selected from the MIT-States dataset. It shows that by learning the distribution of each composition from LLM-generated texts using Eqs.
(1) and (3) and the TFE module, the compositional class embeddings are distributed more compactly within each class (small intra-class variance) and better separated among multiple classes (large inter-class distance). In Appendix F, we show primitive-level t-SNE embedding visualizations that reveal the same observation. In Fig. 8, we show some success and failure cases of our $\text{PLID}$ model. For example, the heavy water case has an incorrect ground-truth label, while $\text{PLID}$ correctly predicts it as huge wave. This shows the robustness of $\text{PLID}$ against noisy labels. The last two failure cases reveal that $\text{PLID}$ can still make mistakes on state prediction (cooked pasta) and object prediction (engraved floor), which indicates that there is still a long way to go for the CZSL problem.

6 CONCLUSION

In this work, we propose a novel CLIP-based compositional zero-shot learning (CZSL) method named $\text{PLID}$. It leverages generated text descriptions of each class from large language models to formulate class-specific Gaussian distributions. By softly prompting these language-informed distributions, $\text{PLID}$ achieves diversified and informative class embeddings for fine-grained compositional classes. Besides, we decompose the visual embeddings of image data into simple primitives that contain the basic states and objects, from which re-composed predictions are derived to calibrate the final prediction via our proposed stochastic logit mixup strategy. Experimental results show the superiority of the $\text{PLID}$ method over prior art on all common CZSL datasets.

REFERENCES

Yuval Atzmon, Felix Kreuk, Uri Shalit, and Gal Chechik. A causal view of compositional zero-shot recognition. In NeurIPS, 2020.

Duhyeon Bang, Kyungjune Baek, Jiwoo Kim, Yunho Jeon, Jin-Hwa Kim, Jiwon Kim, Jongwuk Lee, and Hyunjung Shim. Logit mixing training for more reliable and accurate prediction. In IJCAI, 2022.

Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, and Jean-Philippe Vert. On mixup regularization. JMLR, 23(325), 2022.

Mohammad Mahdi Derakhshani, Enrique Sanchez, Adrian Bulat, Victor Guilherme Turrisi da Costa, Cees G. M. Snoek, Georgios Tzimiropoulos, and Braïs Martinez. Bayesian prompt learning for image-language model generalization. In ICCV, 2023.

Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In ICLR, 2023.

Siteng Huang, Biao Gong, Yutong Feng, Yiliang Lv, and Donglin Wang. Troika: Multi-path cross-modal traction for compositional zero-shot learning. arXiv preprint arXiv:2303.15230, 2023.

Dat Huynh and Ehsan Elhamifar. Compositional zero-shot learning via fine-grained dense feature composition. In NeurIPS, 2020.

Phillip Isola, Joseph J Lim, and Edward H Adelson. Discovering states and transformations in image collections. In CVPR, 2015.

Shyamgopal Karthik, Massimiliano Mancini, and Zeynep Akata. Kg-sp: Knowledge guided simple primitives for open world compositional zero-shot learning. In CVPR, 2022.

Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. Maple: Multi-modal prompt learning. In CVPR, 2023.

Hyeongjun Kwon, Taeyong Song, Somi Jeong, Jin Kim, Jinhyun Jang, and Kwanghoon Sohn. Probabilistic prompt learning for dense prediction. In CVPR, 2023.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and brain sciences, 40, 2017.
Martha Lewis, Qinan Yu, Jack Merullo, and Ellie Pavlick. Does clip bind concepts? probing compositionality in large image models. arXiv preprint arXiv:2212.10537, 2022. Xiangyu Li, Xu Yang, Kun Wei, Cheng Deng, and Muli Yang. Siamese contrastive embedding network for compositional zero-shot learning. In CVPR, 2022. Yong-Lu Li, Yue Xu, Xiaohan Mao, and Cewu Lu. Symmetry and group in attribute-object compositions. In CVPR, 2020. Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In EMNLP, 2020. Xinyang Liu, Dongsheng Wang, Miaoge Li, Zhibin Duan, Yishi Xu, Bo Chen, and Mingyuan Zhou. Patch-token aligned bayesian prompt learning for vision-language models. arXiv preprint arXiv:2303.09100, 2023. Zhe Liu, Yun Li, Lina Yao, Xiaojun Chang, Wei Fang, Xiaojun Wu, and Yi Yang. Simple primitives with feasibility-and contextuality-dependence for open-world compositional zero-shot learning. arXiv preprint arXiv:2211.02895, 2022. Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationship detection with language priors. In ECCV, 2016.
S7j1sNVIm9
The theoretical analysis of the heterogeneity is not convincing. $\sigma_f^2$ is used as a measure of client heterogeneity in the paper; however, it is just an upper bound (Proposition 1) on some more classical measures of heterogeneity, which means the proposed measure is weaker. In fact, if $\ell^*$ is chosen to be 0 (as in the paper), this measure is unrelated to the heterogeneity.
Locally Adaptive Federated Learning Anonymous authors Paper under double-blind review

Abstract

Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model, without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) ensure balance among the clients by using the same stepsize for local updates on all clients. However, this means that all clients need to respect the global geometry of the function, which can yield slow convergence. In this work, we propose locally adaptive federated learning algorithms that leverage the local geometric information of each client function. We show that such locally adaptive methods with uncoordinated stepsizes across all clients can be particularly efficient in interpolated (overparameterized) settings, and analyze their convergence in the presence of heterogeneous data for convex and strongly convex settings. We validate our theoretical claims by performing illustrative experiments for both i.i.d. and non-i.i.d. cases. Our proposed algorithms match the optimization performance of tuned FedAvg in the convex setting, outperform FedAvg as well as state-of-the-art adaptive federated algorithms like FedAMS for non-convex experiments, and come with superior generalization performance.

1 Introduction

Federated Learning (FL) [Kairouz et al., 2021] has become popular as a collaborative learning paradigm where multiple clients jointly train a machine learning model without sharing their local data. Despite the recent success of FL, state-of-the-art federated optimization methods like FedAvg [McMahan et al., 2017] still face various challenges in practical scenarios, such as the inability to adapt to the training dynamics: FedAvg, using vanilla SGD updates with constant stepsizes, may be unsuitable for heavy-tailed stochastic gradient noise distributions, arising frequently in training large-scale models such as ViT [Dosovitskiy et al., 2021]. Such settings benefit from adaptive stepsizes, which use some optimization statistics (e.g., loss history, gradient norm). In the centralized setting, adaptive methods such as Adam [Kingma & Ba, 2014] and AdaGrad [Duchi et al., 2011] have succeeded in obtaining superior empirical performance over SGD for various machine learning tasks. However, extending adaptive methods to the federated setting remains a challenging task, and the majority of recently proposed adaptive federated methods, such as FedAdam [Reddi et al., 2021] and FedAMS [Wang et al., 2022a], consider only server-side adaptivity, i.e., essentially adaptivity only in the aggregation step. Some methods like Local-AMSGrad [Chen et al., 2020] and Local-AdaAlter [Xie et al., 2019] do consider local (client-side) adaptivity, but they perform some form of stepsize aggregation in the communication round, thereby using the same stepsize on all clients. Using the same stepsize for all clients, which then needs to respect the geometry of the global function, can yield sub-optimal convergence. To harness the full power of adaptivity for federated optimization, we argue that it makes sense to use fully locally adaptive stepsizes on each client to capture the local geometric information of each objective function [Wang et al., 2021], thereby leading to faster convergence. However, such a change is non-trivial, as federated optimization with uncoordinated stepsizes on different clients might not converge.
The analysis of locally adaptive methods necessitates developing new proof techniques, extending the existing error-feedback framework [Stich, 2018] for federated optimization algorithms (which originally works only for equal stepsizes) to fully uncoordinated local (client) stepsizes. In this work, we provide affirmative answers to the following open questions: (a) Can local adaptivity for federated optimization be useful (faster convergence)? (b) Can we design such a locally adaptive federated optimization algorithm that provably converges?

To answer (a), we shall see a concrete case in Example 1, along with an illustration in Figure 1, showing that locally adaptive stepsizes can substantially speed up convergence. For designing a fully locally adaptive method for federated optimization, we need an adaptive stepsize that is optimal for each client function. Inspired by the Polyak stepsize \cite{Polyak1987}, which is designed for gradient descent on convex functions, Loizou et al. \cite{Loizou2021} recently proposed the stochastic Polyak stepsize (SPS) for SGD. SPS comes with strong convergence guarantees and needs less tuning compared to other adaptive methods like Adam and AdaGrad. We propose the FedSPS algorithm by incorporating the SPS stepsize in the local client updates. We obtain exact convergence of our locally adaptive FedSPS when the interpolation condition is satisfied (the overparameterized case common in deep learning problems), and convergence to a neighbourhood in the general case. Recalling that Li et al. \cite{Li2020} showed FedAvg needs decaying stepsizes to converge under heterogeneity, we extend our method to a decreasing stepsize version, FedDecSPS (following ideas from DecSPS \cite{Orvieto2022}), which provides exact convergence in practice for the general non-interpolating setting without the aforementioned small-stepsize assumption. Finally, we experimentally observe that the optimization performance of FedSPS is always on par with or better than that of tuned FedAvg and FedAMS, and FedDecSPS is particularly efficient in non-interpolating settings.

Contributions. We summarize our contributions as follows:

• We show that local adaptivity can lead to substantially faster convergence. We design the first fully locally adaptive method for federated learning, called FedSPS, and prove sublinear and linear convergence to the optimum for convex and strongly convex cases, respectively, under interpolation (Theorem 3). This is in contrast to existing adaptive federated methods such as FedAdam and FedAMS, both of which employ adaptivity only for server aggregation.

• For real-world FL scenarios (such as when the interpolation condition is not satisfied due to client heterogeneity), we propose a practically motivated algorithm, FedDecSPS, that enjoys local adaptivity and exact convergence also in the non-interpolating regime due to decreasing stepsizes.

• We empirically verify our theoretical claims by performing relevant illustrative experiments, showing that our method requires less tuning compared to state-of-the-art algorithms, which need extensive grid search. We also obtain competitive performance (both optimization and generalization) of the proposed FedSPS and FedDecSPS compared to tuned FedAvg and FedAMS for convex as well as non-convex cases in i.i.d. and non-i.i.d. settings.

1.1 Additional Related Work

Adaptive gradient methods and SPS.
Recently, adaptive stepsize methods that use some optimization statistics have become popular for deep learning applications. Such methods, including Adam \cite{Kingma2014} and AdaGrad \cite{Duchi2011}, work well in practice, but their convergence guarantees sometimes depend on unrealistic assumptions \cite{Duchi2011}. An adaptive method with sound theoretical guarantees is the Polyak stepsize \cite{Polyak1987}, which has recently been extended to the stochastic setting by Loizou et al. \cite{Loizou2021} and termed the stochastic Polyak stepsize (SPS). Extensions of SPS have been proposed for solving structured non-convex problems \cite{Gower2021a} and in the update rule of stochastic mirror descent \cite{D'Orazio2021}. Further follow-up works have come up with various ways to overcome the limitations of vanilla SPS, such as when optimal stochastic loss values are not known \cite{Orvieto2022,Gower2022} or when the interpolation condition does not hold \cite{Orvieto2022,Gower2021b}, as well as a proximal variant for tackling regularization terms \cite{Schaipp2023}.

Adaptive federated optimization. Reddi et al. \cite{Reddi2021} provide a general framework for adaptive federated optimization (FedOpt), including particular instances such as FedAdam and FedYogi, by using the corresponding centralized adaptive methods as the server optimizer. Several works followed up on the idea of server-side adaptivity, some recent ones being CD-Adam \cite{Wang2022b} and FedAMS \cite{Wang2022a}. Fully locally adaptive stepsizes on the client side have not been explored before, except in one concurrent work \cite{Kim2023}. Their proposed method is based on an estimator of the inverse local Lipschitz constant from \cite{Malitsky2019}, and analyzes only the non-convex setting under a strong bounded-gradient assumption.

2 PROBLEM SETUP

In this work, we consider the following sum-structured (cross-silo) federated optimization problem
\[ f^* := \min_{x \in \mathbb{R}^d} \left[ f(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x) \right], \tag{1} \]
where the components \( f_i : \mathbb{R}^d \to \mathbb{R} \) are distributed among \( n \) local clients and are given in stochastic form \( f_i(x) := \mathbb{E}_{\xi \sim D_i}[F_i(x, \xi)] \), where \( D_i \) denotes the distribution of \( \xi \) over parameter space \( \Omega_i \) on client \( i \in [n] \). Standard empirical risk minimization is an important special case of this problem, when each \( D_i \) represents a finite set of \( m_i \) elements \( \{\xi_1^i, \ldots, \xi_{m_i}^i\} \). Then \( f_i \) can be rewritten as \( f_i(x) = \frac{1}{m_i} \sum_{j=1}^{m_i} F_i(x, \xi_j^i) \). We do not make any restrictive assumptions on the data distributions \( D_i \), so our analysis covers the case of heterogeneous (non-i.i.d.) data where \( D_i \neq D_j, \forall i \neq j \), and the local minima \( x_i^* := \arg \min_{x \in \mathbb{R}^d} f_i(x) \) can be different from the global minimizer of (1).

3 LOCALLY ADAPTIVE FEDERATED OPTIMIZATION

In the following, we provide background on federated optimization and the (stochastic) Polyak stepsize. This is followed by an example motivating how local adaptivity with (stochastic) Polyak stepsizes can help improve convergence—especially in the interpolation regime. Finally, we outline our proposed method FedSPS to solve (1).

3.1 BACKGROUND AND MOTIVATION

**Federated averaging.** A common approach to solving (1) in the distributed setting is FedAvg (McMahan et al., 2017), also known as Local SGD (Stich, 2018).
This involves the clients performing a local step of SGD in each iteration, and the clients communicate with a central server after every \( \tau \) iterations—their iterates are averaged on the server and sent back to all clients. FedAvg corresponds to the special case of Algorithm 1 with constant stepsizes \( \gamma_t^i \equiv \gamma_0 \) (Line 4).

**PS and SPS.** Considering the centralized setting (\( n = 1 \)) of finite-sum optimization on a single worker, \( \min_{x \in \mathbb{R}^d} \left[ f_1(x) := \frac{1}{m} \sum_{j=1}^{m} F_1(x, \xi_j^1) \right] \), we introduce the PS as well as the SPS below:

- **Deterministic Polyak stepsize.** The convergence analysis of Gradient Descent (GD) for a convex function \( f_1(x) \) involves the inequality \( \|x_{t+1} - x^*\|^2 \leq \|x_t - x^*\|^2 - 2\gamma_t (f_1(x_t) - f_1(x^*)) + \gamma_t^2 \|\nabla f_1(x_t)\|^2 \), the right-hand side of which is minimized by the PS \( \gamma_t = \frac{f_1(x_t) - f_1(x^*)}{\|\nabla f_1(x_t)\|^2} \).

- **Stochastic Polyak stepsize.** We use the notion of SPSmax from the original paper (Loizou et al., 2021). The SPSmax stepsize for SGD (with a single stochastic sample) is given by \( \gamma_t = \min \left\{ \frac{F_1(x_t, \xi_t^1) - F_1^*}{c \|\nabla F_1(x_t, \xi_t^1)\|^2}, \gamma_b \right\} \), where \( F_1^* := \inf_{\xi \in D_1, x \in \mathbb{R}^d} F_1(x, \xi) \), \( \gamma_b > 0 \) is an upper bound on the stepsize that controls the size of the neighbourhood (\( \gamma_b \) trades off adaptivity for accuracy), and \( c > 0 \) is a constant scaling factor. Instead of using the optimal function values of each stochastic function as in the original paper, we use a lower bound on the function values, \( \ell_1^* \leq F_1^* \), which is easier to obtain for many practical tasks, as shown in (Orvieto et al., 2022).

**Example 1** (Local adaptivity using Polyak stepsizes can improve convergence). For a parameter \( a > 0 \), consider the finite-sum optimization problem \( \min_{x \in \mathbb{R}} \left[ f(x) := \frac{1}{2} \sum_{i=1}^{2} f_i(x) \right] \), with \( f_1(x) = \frac{a}{2} x^2 \), \( f_2(x) = \frac{1}{2} x^2 \), in the interpolation regime. If we solve this problem using mini-batch GD, \( x_{t+1} = x_t - \frac{\gamma}{2} (\nabla f_1(x_t) + \nabla f_2(x_t)) \), we are required to choose a stepsize \( \gamma \leq 2/L \), where \( L = \frac{1+a}{2} \), to enable convergence, and therefore \( \Omega(a \log \frac{1}{\epsilon}) \) steps are needed. However, if we solve the same problem using locally adaptive distributed GD of the form \( x_{t+1} = x_t - \frac{1}{2} (\gamma_1 \nabla f_1(x_t) + \gamma_2 \nabla f_2(x_t)) \), then the complexity can be near-constant. Concretely, for any stepsizes \( \gamma_i \in [\frac{1}{2} \gamma_i^*, \frac{3}{2} \gamma_i^*] \), with \( \gamma_1^* = \frac{1}{a} \), \( \gamma_2^* = 1 \) (a range which also includes the Polyak stepsizes corresponding to the functions \( f_1 \) and \( f_2 \)), the iteration complexity is \( O(\log \frac{1}{\epsilon}) \), which can be arbitrarily better than \( \Omega(a \log \frac{1}{\epsilon}) \) as \( a \to \infty \). Note that the observation made in this example also extends to the stochastic case of SGD with SPS, as illustrated in Figure 1.
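As a concrete reference point, below is a minimal NumPy sketch of one SGD step with the SPSmax rule above, together with a toy loop on a quadratic with Gaussian gradient noise; the noise scale and loop length are our illustrative choices, not experimental settings from this paper.

```python
import numpy as np

def sps_max_step(x, stoch_loss, stoch_grad, c=0.5, gamma_b=1.0, lower_bound=0.0):
    """One SGD step with gamma = min{ (F(x, xi) - l*) / (c ||grad F(x, xi)||^2), gamma_b }."""
    gamma = min((stoch_loss - lower_bound) / (c * float(stoch_grad @ stoch_grad) + 1e-12),
                gamma_b)
    return x - gamma * stoch_grad

# Toy usage on f(x) = 0.5 ||x||^2 with additive Gaussian noise on the gradient.
rng = np.random.default_rng(0)
x = np.array([5.0])
for _ in range(200):
    g = x + rng.normal(0.0, 1.0, size=x.shape)   # noisy gradient of f
    x = sps_max_step(x, 0.5 * float(x @ x), g)
print(x)  # close to the minimizer 0, up to a noise-dependent neighbourhood
```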
Figure 1: Illustration for Example 1 showing that local adaptivity can improve convergence. We run SGD with constant, global SPS, and locally adaptive SPS stepsizes (with \( c = 0.5, \gamma_b = 1.0 \)) for the functions \( f_1(x) = x^2 \), \( f_2(x) = \frac{1}{2}x^2 \), where stochastic noise was simulated by adding Gaussian noise with mean 0 and standard deviation 10 to the gradients.

3.2 Proposed Method

Motivated by the previous example on the benefit of local adaptivity, we now turn to designing such a locally adaptive federated optimization algorithm with provable convergence guarantees. As stated before, we need an adaptive stepsize that is optimal for each client function, and we choose the SPS stepsize for this purpose. In the following, we describe our proposed method FedSPS.

FedSPS. We propose a fully locally (i.e., client-side) adaptive federated optimization algorithm, FedSPS (Algorithm 1), with asynchronous stepsizes, i.e., the stepsizes are different across the clients, and also across the local steps for a particular client. The FedSPS stepsize for a client \( i \) and local iteration \( t \) is given by
\[ \gamma_t^i = \min \left\{ \frac{F_i(x_t^i, \xi_t^i) - \ell_i^*}{c \| \nabla F_i(x_t^i, \xi_t^i) \|^2}, \gamma_b \right\}, \]
where \( c, \gamma_b > 0 \) are constants as explained before, \( \xi_t^i \) is the sample at time \( t \) on worker \( i \), \( F_i(x_t^i, \xi_t^i) \) is the stochastic loss, \( g_t^i := \nabla F_i(x_t^i, \xi_t^i) \) is the stochastic gradient, and \( \ell_i^* \leq F_i^* = \inf_{\xi_i \in D_i, x \in \mathbb{R}^d} F_i(x, \xi_i) \) is a lower bound on the minima of all functions on worker \( i \). Since the loss functions are non-negative for most practical machine learning tasks, we can use \( \ell_i^* = 0 \), as discussed before, when running our algorithms. We analyse FedSPS in the strongly convex and convex settings and prove convergence guarantees (Theorem 3). We would like to stress that \( \gamma_b \) and \( c \) are free hyperparameters (in the sense that they do not theoretically depend on any problem-dependent parameters), and we demonstrate that they require minimal tuning through a sensitivity analysis in Section 6. This is an advantage over the learning-rate parameters of FedAvg and FedAMS, which theoretically depend on \( L \) and require extensive grid search in practice (Section F.4). The notation used throughout extends to the mini-batch setting, as described in Appendix C.1.

Remark 2 (Alternative design choices). Note that there can be various alternative design choices for incorporating SPS into FedAvg. We tried some variants, such as FedSPS-Normalized, using client- and server-side corrections to account for the solution bias (Wang et al., 2021) due to asynchronous stepsizes (Appendix D.1). We also introduce FedSPS-Global in Appendix D.2, which uses aggregation of stepsizes in communication rounds, similar to (Chen et al., 2020; Xie et al., 2019). However, none of these other design choices provided any practical advantage over our proposed FedSPS.

4 Convergence Analysis of FedSPS

4.1 Assumptions on the Objective Function and Noise

Assumption 1 (\( L \)-smoothness). Each function \( F_i(x, \xi) : \mathbb{R}^d \times \Omega_i \to \mathbb{R}, i \in [n] \) is differentiable for each \( \xi \in \text{supp}(D_i) \) and there exists a constant \( L \geq 0 \) such that for each \( x, y \in \mathbb{R}^d, \xi \in \text{supp}(D_i) \):
\[ \| \nabla F_i(y, \xi) - \nabla F_i(x, \xi) \| \leq L \| x - y \|. \]
Note that Assumption 1 implies \( L \)-smoothness of each \( f_i(x) \) and of \( f(x) \).
The assumption of each \( F_i(x, \xi) \) being smooth is often used in the federated and decentralized optimization literature (e.g., (Koloskova et al., 2020, Assumption 1a) or (Koloskova et al., 2022, Assumption 3)).

Algorithm 1 FedSPS: Federated averaging with fully locally adaptive stepsizes.
Input: \( x_0^i = x_0, \forall i \in [n] \)
1: for \( t = 0, 1, \cdots, T - 1 \) do
2: for each client \( i = 1, \cdots, n \) in parallel do
3: sample \( \xi_t^i \), compute \( g_t^i := \nabla F_i(x_t^i, \xi_t^i) \)
4: FedSPS: \( \gamma_t^i = \min \left\{ \frac{F_i(x_t^i, \xi_t^i) - \ell_i^*}{c \| g_t^i \|^2}, \gamma_b \right\} \) ▷ local stochastic Polyak stepsize
5: if \( t + 1 \) is a multiple of \( \tau \) then
6: \( x_{t+1}^i = \frac{1}{n} \sum_{j=1}^{n} (x_t^j - \gamma_t^j g_t^j) \) ▷ communication round
7: else
8: \( x_{t+1}^i = x_t^i - \gamma_t^i g_t^i \) ▷ local step
9: end if
10: end for
11: end for

Assumption 2 (\( \mu \)-convexity). There exists a constant \( \mu \geq 0 \) such that for each \( i \in [n] \), \( \xi \in \text{supp}(D_i) \) and for all \( x, y \in \mathbb{R}^d \),
\[ F_i(y, \xi) \geq F_i(x, \xi) + \langle \nabla F_i(x, \xi), y - x \rangle + \frac{\mu}{2} \| y - x \|^2 . \]
For some of our results, we assume \( \mu \)-strong convexity for a parameter \( \mu > 0 \), or convexity (when \( \mu = 0 \)). Furthermore, we assume (as mentioned in the introduction) access to stochastic functions \( F_i(x, \xi) \) on each client \( i \), with \( \mathbb{E}_{\xi \sim D_i} \nabla F_i(x, \xi) = \nabla f_i(x) \), \( \mathbb{E}_{\xi \sim D_i} F_i(x, \xi) = f_i(x) \).

Finite optimal objective difference. For each \( i \in [n] \) we denote \( f_i^* := \inf_{x \in \mathbb{R}^d} f_i(x) \). Recall that we defined \( F_i^* := \inf_{\xi \sim D_i, x \in \mathbb{R}^d} F_i(x, \xi) \), and need knowledge of lower bounds \( \ell_i^* \leq F_i^* \) for our algorithm. We define the quantity
\[ \sigma_f^2 := \frac{1}{n} \sum_{i=1}^{n} (f_i(x^*) - \ell_i^*) = f^* - \frac{1}{n} \sum_{i=1}^{n} \ell_i^* , \]
which will appear in our complexity estimates, and thus we implicitly assume that \( \sigma_f^2 < \infty \) (finite optimal objective difference). Moreover, \( \sigma_f^2 \) also acts as our measure of heterogeneity between clients. This is in line with previous works on federated optimization in the non-i.i.d. setting, such as Li et al. (2020), which used \( \Gamma := f^* - \mathbb{E}_i f_i^* \) as the heterogeneity measure. We can relate \( \sigma_f^2 \) to the more standard measures of function heterogeneity \( \zeta^2 = \frac{1}{n} \sum_{i=1}^{n} \| \nabla f_i(x) - \nabla f(x) \|^2 \) and gradient variance \( \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\xi_i} \| \nabla F_i(x, \xi) - \nabla f_i(x) \|^2 \) in the federated literature (Koloskova et al., 2022; Wang et al., 2022a), as shown in the following proposition (proof in Appendix C.2). For the case of convex functions, it suffices (Koloskova et al., 2020) to compare with the corresponding quantities evaluated at the global optimum \( x^* = \arg \min_{x \in \mathbb{R}^d} f(x) \), namely \( \zeta_*^2 = \frac{1}{n} \sum_{i=1}^{n} \| \nabla f_i(x^*) \|^2 \) and \( \sigma_*^2 = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\xi_i} \| \nabla F_i(x^*, \xi) - \nabla f_i(x^*) \|^2 \). We can observe that \( \sigma_f^2 \) is actually a stronger assumption than bounded noise at the optimum \( (\zeta_*, \sigma_*) \), but weaker than uniformly bounded noise \( (\zeta, \sigma) \).
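Before stating the comparison of heterogeneity measures, a minimal NumPy sketch of Algorithm 1 may help make the method concrete; the per-client `oracle` (returning a stochastic loss/gradient pair for a fresh sample) is a placeholder we introduce for illustration, and we take \( \ell_i^* = 0 \) as discussed above.

```python
import numpy as np

def fedsps(x0, oracles, T, tau, c=0.5, gamma_b=1.0, lb=0.0):
    """Minimal sketch of Algorithm 1 (FedSPS). Each oracle(x) draws a fresh sample xi
    and returns the stochastic loss F_i(x, xi) and gradient grad F_i(x, xi)."""
    n = len(oracles)
    xs = [x0.copy() for _ in range(n)]                     # local iterates x_t^i
    for t in range(T):
        for i, oracle in enumerate(oracles):
            loss, g = oracle(xs[i])
            gamma = min((loss - lb) / (c * float(g @ g) + 1e-12), gamma_b)  # line 4
            xs[i] = xs[i] - gamma * g                      # local step (line 8)
        if (t + 1) % tau == 0:                             # communication round (line 6)
            mean = sum(xs) / n
            xs = [mean.copy() for _ in range(n)]
    return sum(xs) / n
```

Averaging the post-update iterates on multiples of \( \tau \) matches line 6 of Algorithm 1, which averages \( x_t^j - \gamma_t^j g_t^j \) over all clients.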
Proposition 1 (Comparison of heterogeneity measures). Using the definitions of \( \sigma_f^2, \zeta_*^2, \) and \( \sigma_*^2 \) above, we have: (a) \( \zeta_*^2 \leq 2L\sigma_f^2 \), and (b) \( \sigma_*^2 \leq 2L\sigma_f^2 \).

4.2 Convergence of fully locally adaptive FedSPS

In this section we provide the convergence guarantees of FedSPS on sums of convex (or strongly convex) functions. We do not impose any restriction on \( \gamma_b \), and thus denote this as the fully locally adaptive setting that is of most interest to us. The primary theoretical challenge is extending the error-feedback framework (which originally works for equal stepsizes) (Mania et al., 2017; Stich & Karimireddy, 2020) to fully uncoordinated local stepsizes, and we do this for the first time in our work. All proofs are provided in Appendix B.

Theorem 3 (Convergence of FedSPS). Assume that Assumptions 1 and 2 hold and \( c \geq 2\tau^2 \); then after \( T \) iterations (\( T/\tau \) communication rounds) of FedSPS (Algorithm 1) it holds that

(a) Convex case:
\[ \frac{1}{Tn} \sum_{t=0}^{T-1} \sum_{i=1}^{n} \mathbb{E}[f_i(x_t^i) - f_i^*] \leq \frac{2}{T\alpha} \|x_0 - x^*\|^2 + \frac{4\gamma_b \sigma_f^2}{\alpha}, \tag{6} \]
where \( \alpha := \min \left\{ \frac{1}{2cL}, \gamma_b \right\} \). If \( \mu > 0 \), and \( c \geq 4\tau^2 \), we have

(b) Strongly convex case:
\[ \mathbb{E} \|\bar{x}_T - x^*\|^2 \leq A(1 - \mu\alpha)^T \|x_0 - x^*\|^2 + \frac{2\gamma_b \sigma_f^2}{\alpha \mu}, \]
where \( A = \frac{1}{\mu\alpha} \), and \( \bar{x}_t := \frac{1}{n} \sum_{i=1}^{n} x_t^i \).

The convergence criterion of the first result (6) is non-standard, as it involves the average of all iterates \( x_t^i \) on the left-hand side, and not the more commonly used average \( \bar{x}_t \). However, note that at every \( \tau \)-th iteration these quantities coincide, and thus our result implies convergence of
\[ \frac{\tau}{Tn} \sum_{t=0}^{T/\tau - 1} \sum_{i=1}^{n} \mathbb{E}[f_i(\bar{x}_{t\tau}) - f_i^*]. \]
Moreover, in the interpolation case all \( f_i^* \equiv f^* \).

Remark 4 (Minimal need for hyperparameter tuning). The parameter \( \tau \) is a user-selected input parameter determining the number of local steps, and \( \gamma_b \) trades off adaptivity (potentially faster convergence for large \( \gamma_b \)) and accuracy (higher for small \( \gamma_b \)). Moreover, as \( c \) only depends on the input parameter \( \tau \) and not on properties of the function (e.g., \( L \) or \( \mu \)), it is also a free parameter. The algorithm provably converges (up to the indicated accuracy) for any choice of these parameters. The lower bounds \( \ell_i^* \) can be set to zero for many machine learning problems, as discussed before. Therefore, we effectively reduce the need for hyperparameter tuning compared to previous methods like FedAMS, whose convergence depends on problem-dependent parameters.

Comparison with SPS (Loizou et al., 2021). We now compare our results to Loizou et al. (2021), who studied SPS for a single worker (\( n = 1 \)). First, we note that in the strongly convex case, we almost recover Theorem 3.1 in Loizou et al. (2021). The only differences are that they have \( A = 1 \) and allow weaker bounds on the parameter \( c \) (\( c \geq 1/2 \), vs. our \( c > 4 \)), but we match the other constants. In the convex case, we again recover (Loizou et al., 2021, Theorem 3.4) up to constants and the stronger condition on \( c \) (vs. \( c > 1 \) in their case).
• (Special case I) Exact convergence of FedSPS in the interpolation regime: We highlight the linear convergence of FedSPS in the interpolation case (\( \sigma_f = 0 \)) in the following corollary.

Corollary 5 (Linear Convergence of FedSPS under Interpolation). Assume interpolation, \( \sigma_f^2 = 0 \), and let the assumptions of Theorem 3 be satisfied with \( \mu > 0 \). Then
\[ \mathbb{E} \|\bar{x}_T - x^*\|^2 \leq A(1 - \mu\alpha)^T \|x_0 - x^*\|^2. \]

• (Special case II) Exact convergence of FedSPS in the small-stepsize regime: Theorem 3 shows convergence of FedSPS to a neighborhood of the solution. Decreasing \( \gamma_b \) (below \( \frac{1}{2cL} \)) can improve the accuracy, but the error is at least \( \Omega(\sigma_f^2) \) even when \( \gamma_b \to 0 \). This issue is also persistent in the original work on SPS_max (Loizou et al., 2021, Corr. 3.3). However, we remark that when the stepsize upper bound \( \gamma_b \) is chosen extremely small—not allowing for adaptivity—FedSPS becomes identical to constant-stepsize FedAvg. This is not reflected in Theorem 3, which cannot recover the exact convergence known for FedAvg. We address this in the next theorem, proving exact convergence of small-stepsize FedSPS (equivalent to an analysis of FedAvg under the \( \sigma_f^2 \) assumption).

Theorem 6 (Convergence of small-stepsize FedSPS). Assume that Assumptions 1 and 2 hold and \( \gamma_b \leq \min \left\{ \frac{1}{2cL}, \frac{1}{20L\tau} \right\} \); then after \( T \) iterations of FedSPS (Algorithm 1) it holds that

(a) Convex case:
\[ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[f(\bar{x}_t) - f^*] = O \left( \frac{1}{T\gamma_b} \|x_0 - x^*\|^2 + \gamma_b L \sigma_f^2 + \gamma_b^2 L \tau^2 \sigma_f^2 \right), \]
and when \( \mu > 0 \),

(b) Strongly convex case:
\[ \mathbb{E} \|\bar{x}_T - x^*\|^2 = O \left( \frac{\|x_0 - x^*\|^2}{\mu \gamma_b} (1 - \mu \gamma_b)^T + \gamma_b \frac{L \sigma_f^2}{\mu} + \gamma_b^2 \frac{L \tau^2 \sigma_f^2}{\mu} \right). \]

This theorem shows that by choosing an appropriately small $\gamma_b$, any arbitrary target accuracy $\epsilon > 0$ can be obtained. We are only aware of Li et al. (2020), which studies FedAvg under assumptions similar to ours ($\Gamma := f^* - \mathbb{E}_i f^*_i$ measuring heterogeneity). However, their analysis additionally required bounded stochastic gradients, and their convergence rates are weaker (e.g., not recovering linear convergence under interpolation when $\sigma_f^2 = 0$).

5 Decreasing FedSPS for Exact Convergence

In the previous section, we proved that FedSPS converges in the interpolation setting irrespective of the value of the stepsize parameter $\gamma_b$. However, many practical federated learning scenarios, such as those involving heterogeneous clients, constitute the non-interpolating setting ($\sigma_f > 0$). Here, we need to choose a small value of $\gamma_b$ to ensure convergence, trading off adaptivity for exact convergence. Recall that Li et al. (2020) proved that a decaying stepsize is necessary for convergence of FedAvg under heterogeneity. In this section, we draw inspiration from the decreasing SPS stepsize DecSPS (Orvieto et al., 2022) to develop FedDecSPS, which achieves exact convergence in practical non-interpolating scenarios without compromising adaptivity.

FedDecSPS. In order to obtain exact convergence to arbitrary accuracy (without the small-stepsize assumption) in the heterogeneous setting with $\sigma_f > 0$, we propose a heuristic decreasing-stepsize version of FedSPS, called FedDecSPS.
The FedDecSPS stepsize for client $i$ and local iteration $t$ is given by $\gamma_t^i = \frac{1}{c_t} \min \left\{ \frac{F_i(x_t^i, \xi_t^i) - \ell_i^*}{\|\nabla F_i(x_t^i, \xi_t^i)\|^2}, c_{t-1} \gamma_{t-1}^i \right\}$, where $(c_t)_{t=0}^\infty$ is any non-decreasing positive sequence of real numbers. We also set $c_{-1} = c_0$ and $\gamma_{-1}^i = \gamma_b$. Experiments involving heterogeneous clients in Section 6 demonstrate the practical convergence benefits of FedDecSPS in non-interpolating settings.
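For concreteness, the following is a minimal sketch of the FedDecSPS stepsize rule above, instantiated with the schedule \( c_t = c_0\sqrt{t+1} \) used in our experiments and \( \ell_i^* = 0 \); the function name and state handling are our illustrative choices.

```python
import math

def feddecsps_stepsize(loss, grad_sq_norm, t, prev_gamma, c0=0.5, gamma_b=1.0, lb=0.0):
    """FedDecSPS stepsize for one client at local iteration t.
    prev_gamma is gamma_{t-1}^i; at t = 0, pass prev_gamma = gamma_b (i.e., gamma_{-1}^i)."""
    c_t = c0 * math.sqrt(t + 1)
    c_prev = c0 * math.sqrt(t) if t > 0 else c0            # convention c_{-1} = c_0
    return (1.0 / c_t) * min((loss - lb) / (grad_sq_norm + 1e-12),
                             c_prev * prev_gamma)
```

Since \( c_t \) is non-decreasing, the stepsize is forced to decay over local iterations, which is what yields exact convergence without the small-stepsize assumption.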
6 Experiments

Experimental setup. For all federated training experiments we have 500 communication rounds (the number of communication rounds being $T/\tau$ as per our notation), 5 local steps on each client ($\tau = 5$, unless otherwise specified for some ablation experiments), and a batch size of 20 ($|B| = 20$). We perform experiments in the i.i.d. as well as non-i.i.d. settings. Results are reported for both settings without client sampling (10 clients) and with client sampling (10 clients sampled uniformly at random from 100 clients with participation fraction 0.1, and data split among all 100 clients), i.e., $n = 10$ active clients throughout. The i.i.d. experiments involve randomly shuffling the data and splitting it equally between clients. For the non-i.i.d. experiments, we assign every client samples from exactly two classes of the dataset, the splits being non-overlapping and balanced, with each client having the same number of samples (Li et al., 2020). Our code is based on publicly available repositories for SPS and FedAMS\footnote{SPS (\url{https://github.com/IssamLaradji/sps}), FedAMS (\url{https://github.com/jinghuichen/FedCAMs})}, and will be made available upon acceptance.

FedSPS. The implementation is done according to Algorithm 1. Since all our experimental settings involve non-negative loss functions, we can use the lower bound $\ell_i^* = 0$ (Orvieto et al., 2022) throughout. In the following, we perform an empirical sensitivity analysis for the free hyperparameters $\gamma_b$ and $c$, concluding that our method is indeed insensitive to changes in these parameters. We start by benchmarking our method with some initial convex experiments: classification of the MNIST (i.i.d.) dataset (LeCun et al., 2010) with a logistic regression model, without client sampling. In Figure 2(a), we compare the effect of varying $\gamma_b \in \{1, 5, 10\}$ on FedSPS, and varying $\gamma \in \{0.1, 0.01\}$ on FedAvg. We find that FedAvg is not robust to the changing stepsize—converging well for stepsize 0.1, but very slowly for stepsize 0.01. On the contrary, all instances of FedSPS converge to a neighbourhood of the optimum—the size of the neighbourhood being proportional to $\gamma_b$, as suggested by the theory. We now fix $\gamma_b = 1$ and perform an ablation study to understand the effect of the SPS scaling parameter $c$ on convergence in Figure 2(c). For $\tau = 5$ local steps, we vary $c$ from 0.01 to 40 (i.e., up to the order of $\tau^2$). Unlike what is predicted by our theory, we empirically observe that small $c$ works better and larger $c$ leads to slower convergence. Moreover, all values of $c \in \{0.01, 0.1, 0.5, 1.0\}$ show similarly good convergence, implying that our method is robust to this hyperparameter and needs no tuning. We provide additional plots for $\tau \in \{10, 20, 50, 100\}$ local steps in Appendix F.2 to confirm that this observation holds across all values of $\tau$, and plot the optimal value of $c$ versus $\tau$ for each case in Figure 2(d). Gaining insights from the above experiments, we fix $c = 0.5$ for all further experiments.

**FedDecSPS.** We evaluate the performance of FedDecSPS with $c_t = c_0 \sqrt{t + 1}$. Similar to the sensitivity analysis of FedSPS with respect to $c$ (Figure 2), we performed ablation studies for a fixed value of $\gamma_b$ and varying $c_0$ as well as $\tau$. The observation is the same as in the previous case: the optimal value of $c_0$ does not scale with $\tau$ as suggested by the theory, and we fix $c_0 = 0.5$ for all experiments. Similarly, we fix $\gamma_b = 1$, following similar observations as before. We compare the convergence of FedSPS and FedDecSPS for the case of heterogeneous data on clients (i.e., $\sigma_f > 0$) in Figure 3(c) and (d), as well as Figure 5. We observe that our practically motivated FedDecSPS performs better in such non-interpolating settings, as expected.

**FedAvg and FedAMS.** We compare the performance of our methods, FedSPS and FedDecSPS, against the FedAvg baseline and the state-of-the-art adaptive federated algorithm FedAMS (Wang et al., 2022a). FedAvg and FedAMS need extensive tuning using grid search for the client learning rate $\eta_l$, the server learning rate $\eta_s$, as well as the max stabilization factor $\epsilon$, and $\beta_1, \beta_2$. We refer readers to Appendix E.4 for details on the grid search performed and the optimal set of hyperparameters.

**Convex comparison.** For the convex setting of logistic regression on the MNIST dataset (i.i.d. setting), without client sampling, we compare FedSPS with FedAvg and FedAMS in Figure 3(a). We see that the convergence of FedSPS matches that of the best tuned FedAvg. Note that while the best tuned FedAMS slightly outperforms our method, it requires considerable tuning, as depicted by the large margin between the best and worst learning-rate performances. For additional convex experiments in the more practical setting with client sampling, we consider binary classification on LIBSVM (Chang & Lin, 2011) datasets (w8a, mushrooms, ijcnn, phishing, a9a) with a logistic regression model in the i.i.d. setting. We report the performance on w8a in Figure 3(b), where FedSPS again converges similarly to tuned FedAvg, and better than FedAMS. We defer the remaining LIBSVM dataset plots to Appendix E. In the non-i.i.d. case we compare our proposed FedSPS and FedDecSPS to the FedAvg baseline, the adaptive federated methods FedAMS and FedADAM, as well as another state-of-the-art federated method, MIME (Karimireddy et al., 2021). In this setting FedDecSPS does better than FedSPS, and our methods outperform the best tuned FedAvg, FedADAM, and MIME.

**Non-convex comparison.** For non-convex experiments, we consider multi-class classification of the MNIST dataset using the LeNet architecture (LeCun et al., 1998) and the CIFAR-10 dataset using the ResNet18 architecture (He et al., 2016), in the i.i.d. as well as non-i.i.d. settings (with client sampling), in Figures 4 and 5. For the upper bound on the stepsizes, we use the smoothing technique for the rest of the experiments, as suggested by Loizou et al. (2021), to avoid sudden fluctuations in the stepsize.
For a client $i \in [n]$ and iteration $t$, the adaptive iteration-dependent upper bound is given by $\gamma_{b,t} = 2^{|B|/m_i} \gamma_{b,t-1}$, where $|B|$ is the batch-size, $m_i$ is the number of data examples on that client and we fix $\gamma_{b,0} = 1$. In Figure 4 (MNIST), we find that FedSPS and FedSPS-Global converge almost identically, and their convergence is also very close to that of FedAvg with the best possible tuned local learning rate, while outperforming FedAMS. In Figure 5 (CIFAR-10), FedSPS and FedDecSPS outperform tuned FedAvg and FedAMS in terms of both training loss and test accuracy. ### 7 Conclusion In this paper, we show that locally adaptive federated optimization can lead to faster convergence by harnessing the geometric information of local objective functions. This is especially beneficial in the interpolating setting, which arises commonly for overparameterized deep learning problems. We propose a locally adaptive federated optimization algorithm FedSPS, by incorporating the stochastic Polyak stepsize in local steps, and prove sublinear and linear convergence to a neighbourhood for convex and strongly convex cases, respectively. We further extend our method to the decreasing stepsize version FedDecSPS, that enables exact convergence even in practical non-interpolating FL settings without compromising adaptivity. We perform relevant illustrative experiments to show that our proposed method is relatively insensitive to the hyperparameters involved, thereby requiring less tuning compared to other state-of-the-art federated algorithms. Moreover, our methods perform as good or better than tuned FedAvg and FedAMS for convex as well as non-convex experiments in i.i.d. and non-i.i.d. settings. REFERENCES Chih-Chung Chang and Chih-Jen Lin. Libsvm: a library for support vector machines. *ACM transactions on intelligent systems and technology (TIST)*, 2(3):1–27, 2011. Xiangyi Chen, Xiaoyun Li, and Ping Li. Toward communication efficient adaptive gradient method. In *Proceedings of the 2020 ACM-IMS on Foundations of Data Science Conference*, pp. 119–128, 2020. Ryan D’Orazio, Nicolas Loizou, Issam Laradji, and Ioannis Mitliagkas. Stochastic mirror descent: Convergence analysis and adaptive variants via the mirror stochastic polyak stepsize. *arXiv preprint arXiv:2110.15412*, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=YicbFdNTTy John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of machine learning research*, 12(7), 2011. Robert Gower, Othmane Sebbouh, and Nicolas Loizou. Sgd for structured nonconvex functions: Learning rates, minibatching and interpolation. In *International Conference on Artificial Intelligence and Statistics*, pp. 1315–1323. PMLR, 2021a. Robert M Gower, Aaron Defazio, and Michael Rabbat. Stochastic polyak stepsize with a moving target. *arXiv preprint arXiv:2106.11851*, 2021b. Robert M Gower, Mathieu Blondel, Nidham Gazagnadou, and Fabian Pedregosa. Cutting some slack for sgd with adaptive polyak stepsizes. *arXiv preprint arXiv:2202.12328*, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 
Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. *Foundations and Trends® in Machine Learning*, 14(1–2):1–210, 2021. Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Breaking the centralized barrier for cross-device federated learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 28663–28676. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/f0e6be4ce76ccfa73c5a540d992d0756-Paper.pdf Junhyung Lyle Kim, Mohammad Taha Toghani, César A Uribe, and Anastasios Kyrillidis. Adaptive federated learning with auto-tuned clients. *arXiv preprint arXiv:2306.11201*, 2023. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian U. Stich. A unified theory of decentralized SGD with changing topology and local updates. In *37th International Conference on Machine Learning (ICML)*. PMLR, 2020. Anastasiia Koloskova, Sebastian U Stich, and Martin Jaggi. Sharper convergence guarantees for asynchronous sgd for distributed and federated learning. *Advances in Neural Information Processing Systems*, 35:17202–17215, 2022. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.
NqQjoncEDR
Similarly, in Figure 2, for "Same domain", "Diff. class", and "Diff. domain + Diff. class", selective sampling is much better than selective Mixup; does this indicate that vanilla Mixup is harmful in this case? Such observations are even more evident in Figure 8. The authors have not discussed the reasons behind the occasional superiority of selective Mixup over selective sampling.
SELECTIVE MIXUP HELPS WITH DISTRIBUTION SHIFTS, BUT NOT (ONLY) BECAUSE OF MIXUP Anonymous authors Paper under double-blind review

ABSTRACT

Context. Mixup is a highly successful technique to improve the generalization of neural networks by augmenting the training data with combinations of random pairs. Selective mixup is a family of methods that apply mixup to specific pairs, e.g. only combining examples across classes or domains. These methods have claimed remarkable improvements on benchmarks with distribution shifts, but their mechanisms and limitations remain poorly understood.

Findings. We examine an overlooked aspect of selective mixup that explains its success in a completely new light. We find that the non-random selection of pairs affects the training distribution and improves generalization by means completely unrelated to the mixing. For example, in binary classification, mixup across classes implicitly resamples the data for a uniform class distribution — a classical solution to label shift. We show empirically that this implicit resampling explains much of the improvements in prior work. Theoretically, these results rely on a “regression toward the mean”, an accidental property that we identify in several datasets.

Takeaways. We have found a new equivalence between two successful methods: selective mixup and resampling. We identify limits of the former, confirm the effectiveness of the latter, and find better combinations of their respective benefits.

1 INTRODUCTION

Mixup and its variants are some of the few methods that improve generalization across tasks and modalities with no domain-specific information (Zhang et al., 2017). Standard mixup replaces training data with linear combinations of random pairs of examples, proving successful e.g. for image classification (Yun et al., 2019b), semantic segmentation (Islam et al., 2023), natural language processing (Verma et al., 2019), and speech processing (Meng et al., 2021). This paper focuses on scenarios of distribution shift and variants of mixup that improve out-of-distribution (OOD) generalization. We examine the family of methods that apply mixup on selected pairs of examples, which we refer to as selective mixup (Hwang et al., 2022; Li et al., 2023; Lu et al., 2022a; Palakkadavath et al., 2022; Tian et al., 2023; Xu et al., 2020; Yao et al., 2022b). Each method uses a predefined criterion\(^1\), for example combining examples across classes (Yao et al., 2022b) (Figure 1) or across domains (Xu et al., 2020; Li et al., 2023; Lu et al., 2022a). These simple heuristics have claimed remarkable improvements on benchmarks such as DomainBed (Gulrajani and Lopez-Paz, 2020), WILDS (Koh et al., 2021), and Wild-Time (Yao et al., 2022a). Despite impressive empirical performance, the theoretical mechanisms of selective mixup remain obscure. For example, the selection criteria in Yao et al. (2022b) include the selection of pairs of the same class/different domains but also the exact opposite. This raises questions:

1. What makes each selection criterion suitable to any specific dataset?
2. Are there multiple mechanisms responsible for the improvements with selective mixup?

This paper presents surprising answers, highlighting an overlooked side effect of selective mixup. The non-random selection of pairs implicitly biases the training distribution and improves generalization by means completely unrelated to the mixing.
We observe empirically that simply forming mini-batches with all instances of the selected pairs (without mixing them) often produces the same improvements as mixing them. This critical ablation was absent from prior studies.

$^1$We focus on the basic implementation (Yao et al., 2022b) without modifications to the learning objective.

Figure 1: Selective mixup is a family of methods that replace the training data with combined pairs of examples fulfilling a predefined criterion, e.g., pairs from different classes. An overlooked side effect is to modify the training distribution: here, sampling classes more uniformly. This is responsible for much of the observed improvements in OOD generalization.

We also analyze theoretically the resampling induced by different selection criteria. We find that conditioning on a "different attribute" (e.g., combining examples across classes or domains) brings the training distribution of this attribute closer to a uniform one. Consequently, the imbalances in the data often "regress toward the mean" with selective mixup. We verify empirically that several datasets do indeed shift toward a uniform class distribution in their test split (see Figure 1). We also find a remarkable correlation between improvements in performance and the reduction in divergence of training/test distributions due to selective mixup. This also predicts a new failure mode of selective mixup when the above property does not hold (see Appendix C).

Our contributions are summarized as follows.
- We point out an overlooked resampling effect when applying selective mixup (Section 3).
- We show theoretically that certain selection criteria induce a bias in the distribution of features and/or classes equivalent to a "regression toward the mean" (Theorem 3.1). In binary classification, for example, selecting pairs across classes is equivalent to sampling uniformly over classes, the standard approach to address label shift and imbalanced data.
- We verify empirically that multiple datasets indeed contain a regression toward a uniform class distribution across training and test splits (Section 4.6). We also find that improvements from selective mixup correlate with reductions in divergence of training/test distributions over labels and/or covariates. This strongly suggests that resampling is the main driver for these improvements.
- We compare many selection criteria and resampling baselines on five datasets. In all cases, improvements with selective mixup are partly or fully explained by resampling effects (Section 4).

The implications for future research are summarized as follows.
- We connect two areas of the literature by showing that selective mixup is sometimes equivalent to resampling, a classical strategy for distribution shifts (Garg et al., 2023; Idrissi et al., 2022). This hints at possible benefits from advanced methods for label shift and domain adaptation on benchmarks with distribution shifts.
- The resampling explains why different criteria in selective mixup benefit different datasets: they affect distributions of features and/or labels, thus addressing covariate/label shift.
- This explanation highlights the risk of overfitting to the benchmarks: much of the improvements rely on the accidental "regression toward the mean" in the datasets examined.

2 BACKGROUND: MIXUP AND SELECTIVE MIXUP

Notations. We consider a classification model $f_\theta : \mathbb{R}^d \rightarrow [0, 1]^C$ of learned parameters $\theta$.
It maps an input vector $x \in \mathbb{R}^d$ to a vector $y$ of scores over $C$ classes. The training data is typically a set of labeled examples $D = \{(x_i, y_i, d_i)\}_{i=1}^n$ where $y_i$ are one-hot vectors encoding ground-truth labels, and $d_i \in \mathbb{N}$ are optional discrete domain indices. Domain labels are available e.g. in datasets with different image styles (Li et al., 2017) or collected over different time periods (Koh et al., 2021).

Training with ERM. Standard empirical risk minimization (ERM) optimizes the model's parameters for $\min_\theta R(f_\theta, D)$. The expected training risk for a chosen loss function $L$ is:
$$R(f_\theta, D) = \mathbb{E}_{(x, y) \in D}\, L(f_\theta(x), y). \quad (1)$$
An empirical estimate is obtained with an arithmetic mean over instances of the dataset $D$.

Training with mixup. Standard mixup essentially replaces training examples with linear combinations of random pairs in both input and label space. We formalize it by redefining the training risk:
$$R_{\text{mixup}}(f_\theta, D) = \mathbb{E}_{(x,y) \in D}\, L\big(f_\theta(c\,x + (1-c)\,\tilde{x}),\ c\,y + (1-c)\,\tilde{y}\big) \quad (2)$$
with mixing coefficients $c \sim \mathrm{Beta}(2, 2)$ and paired examples $(\tilde{x}, \tilde{y}) \sim D$. The expectation is approximated by sampling coefficients and pairs at every training iteration.

Selective mixup. While standard mixup combines random pairs, selective mixup only combines pairs that fulfill a predefined criterion (see the sketch below). To select these pairs, the method starts with the original data $D$, then for every $(x, y, d) \in D$ it selects a $(\tilde{x}, \tilde{y}, \tilde{d}) \in D$ such that the two fulfill the criterion represented by the predicate $\text{Paired}(\cdot, \cdot)$. For example, the criterion "same class, different domain" ("intra-label LISA" in Yao et al. (2022b)) is implemented as:
$$\text{Paired}\big((x, y, d), (\tilde{x}, \tilde{y}, \tilde{d})\big) = \text{true iff } (\tilde{y} = y) \land (\tilde{d} \neq d) \quad \text{(same class, diff. domain)} \quad (4a)$$
Other examples:
$$\text{Paired}\big((x, y, d), (\tilde{x}, \tilde{y}, \tilde{d})\big) = \text{true iff } (\tilde{y} \neq y) \quad \text{(different class)} \quad (4b)$$
$$\text{Paired}\big((x, y, d), (\tilde{x}, \tilde{y}, \tilde{d})\big) = \text{true iff } (\tilde{d} = d) \quad \text{(same domain)} \quad (4c)$$

3 SELECTIVE MIXUP MODIFIES THE TRAINING DISTRIBUTION

The new claims of this paper comprise two parts.
1. Estimating the training risk with selective mixup (Eq. 2) uses a different sampling of examples from $D$ than ERM (Eq. 1). We demonstrate this theoretically in this section.
2. We hypothesize that this different sampling of training examples influences the generalization properties of the learned model, regardless of the mixing operation. We verify this empirically in Section 4 using ablations of selective mixup that omit the mixing operation — a critical baseline absent from prior studies.

Training distribution. This distribution refers to the examples sampled from $D$ to estimate the training risk (Eq. 1 or 2) — whether these are then mixed or not. The following discussion focuses on distributions over classes ($y$), but analogous arguments apply to covariates ($x$) and domains ($d$). With ERM, the training distribution equals the dataset distribution because the expectation in Eq. 1 is over uniform samples of $D$. We obtain an empirical estimate by averaging all one-hot labels, giving the vector of discrete probabilities $p_Y(D) = \oplus_{(x,y) \in D}\, y / |D|$ where $\oplus$ is the element-wise sum.
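To make the pair selection and mixing concrete, the following is a minimal PyTorch-style sketch (our own illustration, not the authors' code). The dataset format and helper names are assumptions: examples are dicts with keys x (input tensor), y (integer class label), and d (integer domain index), and every example is assumed to have at least one valid partner.

```python
import random
import torch
import torch.nn.functional as F

def paired_same_class_diff_domain(a, b):
    """Predicate (4a): same class, different domain."""
    return a["y"] == b["y"] and a["d"] != b["d"]

def select_pairs(dataset, paired):
    """For each example, draw a partner uniformly among those fulfilling the predicate."""
    return [(ex, random.choice([o for o in dataset if paired(ex, o)]))
            for ex in dataset]

def mixup_pairs(pairs, num_classes):
    """Mix each selected pair in input and one-hot label space, with c ~ Beta(2, 2)."""
    beta = torch.distributions.Beta(2.0, 2.0)
    xs, ys = [], []
    for a, b in pairs:
        c = beta.sample()
        xs.append(c * a["x"] + (1 - c) * b["x"])
        ys.append(c * F.one_hot(torch.tensor(a["y"]), num_classes).float()
                  + (1 - c) * F.one_hot(torch.tensor(b["y"]), num_classes).float())
    return torch.stack(xs), torch.stack(ys)
```

The partners drawn by select_pairs are exactly the "second elements" of the pairs whose distribution the following analysis examines.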
With selective mixup, evaluating the risk (Eq. 2) requires pairs of samples. The first element of a pair is sampled uniformly, yielding the same $p_Y(D)$ as ERM. The second element is selected as described above, using the first element and one chosen predicate $\text{Paired}(\cdot, \cdot)$, e.g. from (4a–4c). For our analysis, we denote these "second elements" of the pairs as the virtual data:
$$\tilde{D} = \{(\tilde{x}_i, \tilde{y}_i, \tilde{d}_i) \sim D : \text{Paired}\big((x_i, y_i, d_i), (\tilde{x}_i, \tilde{y}_i, \tilde{d}_i)\big) = \text{true},\ \forall i = 1, \ldots, |D|\}.$$
We can now analyze the overall training distribution of selective mixup. An empirical estimate is obtained by combining the distributions resulting from the two elements of the pairs, which gives the vector $p_Y(D \cup \tilde{D}) = (p_Y(D) \oplus p_Y(\tilde{D})) / 2$.

Regression toward the mean. With the criterion "same class", it is obvious that $p_Y(\tilde{D}) = p_Y(D)$. Therefore these variants of selective mixup are not concerned with resampling effects. In contrast, the criteria "different class" or "different domain" do bias the sampling. In the case of binary classification, we have $p_Y(\tilde{D}) = 1 - p_Y(D)$ and therefore $p_Y(D \cup \tilde{D})$ is uniform. This means that selective mixup with the "different class" criterion has the side effect of balancing the training distribution of classes, a classical mitigation of class imbalance (Japkowicz, 2000; Kubat et al., 1997). For multiple classes, we have a more general result.

Theorem 3.1. Given a dataset $D = \{(x_i, y_i)\}_i$ and paired data $\tilde{D}$ sampled according to the "different class" criterion, i.e. $\tilde{D} = \{(\tilde{x}_i, \tilde{y}_i) \sim D \text{ s.t. } \tilde{y}_i \neq y_i\}$, the distribution of classes in $D \cup \tilde{D}$ is more uniform than in $D$. Formally, the entropies satisfy $\mathbb{H}(p_Y(D)) \leq \mathbb{H}(p_Y(D \cup \tilde{D}))$. Proof: see Appendix D.

Theorem 3.1 readily extends in two ways. First, the same effect also results from the "different domain" criterion: if each domain contains a different class distribution, the resampling from this criterion averages them out, yielding a more uniform aggregated training distribution. Second, this averaging applies not only to class labels ($y$) but also to covariates ($x$). An analysis using discrete distributions is ill-suited there, but the mechanism similarly affects the sampling of covariates when training with selective mixup.

When does one benefit from the resampling (regardless of mixup)? The above results mean that selective mixup can implicitly reduce imbalances (a.k.a. biases) in the training data. When these imbalances are not spurious and also exist in the test data, the effect on predictive performance could be detrimental. We expect benefits (verified in Section 4) on datasets with distribution shifts: by definition, their training/test splits contain different imbalances. Softening imbalances in the training data is then likely to bring the training and test distributions closer, in particular with extreme shifts such as the complete reversal of a spurious correlation (e.g. the waterbirds dataset, see Section 4.1). We also expect benefits on worst-group metrics (e.g. the civilComments dataset, see Section 4.4): the challenge in these datasets comes from the imbalance of class/domain combinations, and prior work has indeed shown that balancing is beneficial (Idrissi et al., 2022; Sagawa et al., 2019).
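As a quick numerical illustration of Theorem 3.1 (our own sketch, not from the paper), the following simulates the "different class" criterion on an imbalanced three-class dataset and checks that the class entropy of $D \cup \tilde{D}$ exceeds that of $D$:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 3

def class_entropy(labels):
    p = np.bincount(labels, minlength=num_classes) / len(labels)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# An imbalanced dataset: p_Y(D) = (0.7, 0.2, 0.1).
y = rng.choice(num_classes, size=10_000, p=[0.7, 0.2, 0.1])
counts = np.bincount(y, minlength=num_classes).astype(float)

def partner_label(yi):
    """Draw the partner uniformly among examples of any *other* class."""
    w = counts.copy()
    w[yi] = 0.0
    return rng.choice(num_classes, p=w / w.sum())

y_tilde = np.array([partner_label(yi) for yi in y])
print(class_entropy(y))                             # entropy of p_Y(D)
print(class_entropy(np.concatenate([y, y_tilde])))  # higher: p_Y(D u D~) is more uniform
```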
4 EXPERIMENTS

We performed a large number of experiments to understand the contribution of the different effects of selective mixup and other resampling baselines (complete results in Appendix B).

Datasets. We focus on five datasets that previously showed improvements with selective mixup. We selected them to cover a range of modalities (vision, NLP, tabular), settings (binary, multiclass), and types of shifts (covariate, label, and subpopulation shifts).

- **Waterbirds** (Sagawa et al., 2019) is a popular artificial dataset used to study distribution shifts. The task is to classify images of birds into two types. The image backgrounds are also of two types, and the correlation between birds and backgrounds is reversed across the training and test splits. The type of background in each image serves as its domain label.
- **CivilComments** (Koh et al., 2021) is a widely-used dataset of online text comments to be classified as toxic or not. Each example is labeled with a topical attribute (e.g. Christian, male, LGBT, etc.) that is spuriously associated with ground-truth labels in the training data. These attributes serve as domain labels. The target metric is the worst-group accuracy, where the groups correspond to all toxicity/attribute combinations.
- **Wild-Time Yearbook** (Yao et al., 2022a) contains yearbook portraits to be classified as male or female. It is part of the Wild-Time benchmark, a collection of real-world datasets captured over time. Each example belongs to a discrete time period that serves as its domain label. Distinct time periods are assigned to the training and OOD test splits (see Figure 10).
- **Wild-Time arXiv** (Yao et al., 2022a) contains titles of arXiv preprints. The task is to predict each paper's category among 172 classes. Time periods serve as domain labels.
- **Wild-Time MIMIC-Readmission** (Yao et al., 2022a) contains hospital records (sequences of codes representing diagnoses and treatments) to be classified into two classes. The positive class indicates the readmission of the patient at the hospital within 15 days. Time periods serve as domain labels.

Methods. We train standard architectures suited to each dataset with the methods below (details in Appendix A). We perform early stopping, i.e. recording metrics for each run at the epoch of highest ID or worst-group validation performance (for the Wild-Time and waterbirds/civilComments datasets, respectively). We plot average metrics in bar charts over 9 different seeds, with error bars representing ± one standard deviation.

ERM and vanilla mixup are the standard baselines. Baseline resampling uses training examples with equal probability from each class, domain, or combinations thereof, as in Idrissi et al. (2022); Sagawa et al. (2019). Selective mixup (■) includes all possible selection criteria based on classes and domains. We avoid ambiguous terminology from earlier works because of inconsistent usage (e.g. "intra-label LISA" means "different domain" in Koh et al. (2021) but not in Yao et al. (2022a)). Selective sampling (□) is a novel ablation of selective mixup where the selected pairs are not mixed, but the instances are appended one after another in the mini-batch; half are dropped at random to keep the mini-batch size identical to the other methods (a minimal sketch follows below). Therefore any difference between selective sampling and ERM is attributable only to resampling effects. We also include novel combinations (■) of sampling and mixup.
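The selective sampling ablation described above can be summarized in a few lines (an illustrative sketch under our own naming, not the authors' implementation):

```python
import random

def selective_sampling_batch(pairs, batch_size):
    """Keep the selected pairs but skip the mixing: the two elements of each pair
    are appended one after another, then half of the examples are dropped at
    random so the mini-batch size matches the other methods. Any difference
    from ERM is then attributable only to resampling effects."""
    flat = [ex for pair in pairs for ex in pair]
    random.shuffle(flat)
    return flat[:batch_size]
```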
4.1 RESULTS ON THE waterbirds DATASET

The target metric for this dataset is the worst-group accuracy, with groups defined as the four class/domain combinations. The two difficulties are (1) a class imbalance (77/23%) and (2) a correlation shift (the spurious class/domain association is reversed at test time). See the discussion in Figure 2.

Figure 2: Main results on waterbirds.

We first observe that vanilla mixup is detrimental compared to ERM. Resampling with uniform class/domain combinations is hugely beneficial, for the reasons explained in Figure 3. The ranking of the various criteria for selective sampling is similar with or without mixup. Most interestingly, the best criterion performs similarly to, but no better than, the best resampling. The excellent performance of the best version of selective mixup is here entirely due to resampling.

The efficacy of resampling on this dataset is not a new finding (Idrissi et al., 2022; Sagawa et al., 2019). What is new is its equivalence with the best variant of selective mixup. Figure 3 further supports this claim by comparing the proportions of classes and domains sampled by each method.

Figure 3: The sampling ratios of each class/domain clearly explain the performance of the best methods (waterbirds).

Resampling uniform combinations gives them all equal weights, just like the worst-group target metric. Selective mixup with same domain/diff. class also gives equal weights to the classes, while breaking the spurious pattern between groups and classes, unlike any other criterion.

4.2 RESULTS ON THE yearbook DATASET

The difficulty of this dataset comes from a slight class imbalance and the presence of covariate/label shift (see Figure 10). The test split contains several domains (time periods). The target metric is the worst-domain accuracy.

Figure 4 shows that vanilla mixup is slightly detrimental compared to ERM. Resampling for uniform classes gives a clear improvement because of the class imbalance. With selective sampling (no mixup), the only criteria that improve over ERM contain "different class". This is expected because this criterion implicitly resamples for a uniform class distribution.

To investigate whether some of the improvements are due to resampling, we measure the divergence between training and test distributions of classes and covariates (details in Appendix A). Figure 5 shows first that there is a clear variation among different criteria (● blue dots), i.e. some bring the training/test distributions closer to one another. Second, there is a remarkable correlation between the test accuracy and the divergence, on both classes and covariates.$^3$ This means that resampling effects do occur and also play a part in the best variants of selective mixup.

$^3$As expected, the correlation is reversed for the first two test domains in Figure 5, since they are even further from a uniform class distribution than the average of the training data, as seen in Figure 10.

Figure 4: Main results on yearbook. With selective mixup, the "different class" criterion is not useful, but "same class" performs significantly better than ERM. Since this criterion alone does not have resampling effects, it indicates a genuine benefit from mixup restricted to pairs of the same class.

Finally, the improvements from simple resampling and the best variant of selective mixup suggest a new combination. We train a model with uniform class sampling and selective mixup using the
"same class" criterion, and obtain performance superior to all existing results (last row in Figure 5). This confirms the complementarity of the effects of resampling and within-class selective mixup.

Figure 5: Different selection criteria (●) modify the distribution of both covariates and labels (upper and lower rows). The resulting reductions in divergence between training and test distributions correlate remarkably well with test performance. This confirms the contribution of resampling to the overall performance of selective mixup.

4.3 RESULTS ON THE arXiv DATASET

This dataset has difficulties similar to yearbook and also many more classes (172). Simple resampling for uniform classes is very bad (literally off the chart in Figure 6) because it overcorrects the imbalance (the test distribution being closer to the training distribution than to a uniform one). Uniform domains is much better, since its effect is similar but milder.

All variants of selective mixup (■) perform very well, but they improve over ERM even without mixup (●). And the selection criteria rank similarly with or without mixup, suggesting that part of the improvements of selective mixup is due to the resampling. Given that vanilla mixup also clearly improves over ERM, the performance of selective mixup is explained by the cumulative effects of vanilla mixup and resampling. This also suggests new combinations of methods (▲), among which we find one version marginally better than the best variant of selective mixup (last row).

4.4 RESULTS ON THE civilComments DATASET

This dataset mimics a subpopulation shift because the worst-group metric requires high accuracy on classes and domains under-represented in the training data. It also contains an implicit correlation shift because any class/domain association (e.g. "Christian" comments labeled as toxic more often than not) becomes spurious when evaluating individual class/domain combinations. For these reasons, it makes sense that resampling for uniform classes or combinations greatly improves performance, as shown in prior work (Idrissi et al., 2022). With selective mixup (■), one criterion (same domain/diff. class) performs clearly above all others. But it works even better without mixup! (▲) Among many other variations, none surpasses the uniform-combinations baseline.

4.5 RESULTS ON THE MIMIC-Readmission DATASET

This dataset contains a class imbalance (about 78/22% in the training data), label shift (the distribution being more balanced in the test split), and possibly covariate shift. It is unclear whether the task is causal or anticausal (labels causing the features) because the inputs contain both diagnoses and treatments. The target metric is the area under the ROC curve (AUROC), which gives equal importance to both classes. We report the worst-domain AUROC, i.e. the lowest value across test time periods.

Vanilla mixup performs a bit better than ERM. Because of the class imbalance, resampling for uniform classes also improves over ERM. As expected, this is perfectly equivalent to the selective sampling criterion "diff. class", and the two therefore perform equally well. Adding mixup is yet a bit better, which suggests again that the performance of selective mixup is merely the result of the independent effects of vanilla mixup and resampling. We further verify this explanation with the novel combination of simple resampling and vanilla mixup, and observe almost no difference whether the mixing operation is performed or not (last two rows in Figure 9).

To investigate the contribution of resampling, we measure the divergence between training/test class distributions and plot them against the test accuracy (Figure 7). We observe a strong correlation across methods. Mixup essentially offsets the performance by a constant factor, which again suggests the independence of the effects of mixup and resampling. The resampling baselines (●) also roughly agree with a linear fit to the "selective sampling" points. We therefore hypothesize that all these methods are mostly addressing label shift.
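The divergence measurement just described can be illustrated with a tiny sketch (our own, using the majority-class proportions reported in Table 1 below): we compare the class distribution actually sampled during training against that of the test split.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

p_test = [0.665, 0.335]  # OOD test split (Table 1)
for name, p in [("ERM (data as-is)",          [0.782, 0.218]),
                ("diff. class pairing",        [0.501, 0.499]),
                ("resampling + concat. pairs", [0.643, 0.357])]:
    print(f"{name}: KL(test || sampled) = {kl_divergence(p_test, p):.4f}")
```

The method whose sampled distribution is closest to the test split (resampling with concatenated pairs, at 64.3% vs. 66.5%) yields the smallest divergence, mirroring the correlation observed in Figure 7.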
We verify this hypothesis with the remarkable fit of an additional point (▲) for a model trained by resampling according to the test-set class distribution, i.e. cheating. It represents an upper bound that might be achievable in future work with methods for label shift (Azizzadenesheli et al., 2019; Lipton et al., 2018). We replicated these observations on every test domain of this dataset (Figure 15 in the appendix).

To further support the claim that these methods mostly address label shift, we report in Table 1 the proportion of the majority class in the training and test data. We observe that the distribution sampled by the best training methods brings it much closer to that of the test data.

| Proportion of majority class (%) | |
|---|---|
| In the dataset (training) | 78.2 |
| In the dataset (validation) | 77.8 |
| In the dataset (OOD test) | **66.5** |

| Sampled by different training methods | |
|---|---|
| Resampling (uniform classes) | 50.0 |
| Diff. domain + diff. class | 50.0 |
| Diff. class | 50.1 |
| Same domain + diff. class | 49.9 |
| Resampling (uniform cl.) + concatenated pairs | **64.3** |
| Resampling (uniform cl.) + vanilla mixup | **64.3** |

Table 1: The performance of the various methods on MIMIC-Readmission is explained by their correction of a class imbalance. The best training methods (bold numbers) sample the majority class in a proportion much closer to that of the test data.

4.6 EVIDENCE OF A "REGRESSION TOWARD THE MEAN" IN THE DATA

We hypothesized in Section 3 that resampling helps because of a "regression toward the mean" between training and test splits. We now check for this property and indeed find a shift toward uniform class distributions in all datasets studied. For the Wild-Time datasets, we plot in Figure 10 the ratio of the minority class (for the binary tasks: yearbook, MIMIC) and the class distribution entropy (for the multiclass task: arXiv). Finding this property agrees with the proposed explanation and with the fact that we selected all three datasets because they previously showed improvements with selective mixup in Yao et al. (2022a). The shift toward uniformity also holds in waterbirds and civilComments, artificially through the worst-group metric: the training data contains imbalanced groups (class/domain combinations), while the worst-group accuracy gives uniform importance to all groups.

Figure 10: The class distribution shifts toward uniformity in these Wild-Time datasets. This agrees with the explanation that the benefits from resampling rely on a "regression toward the mean".

5 RELATED WORK

**Mixup and variants.** Mixup was originally introduced in Zhang et al. (2017), and numerous variants followed (Cao et al., 2022).
Many propose modality-specific mixing operations: CutMix (Yun et al., 2019a) replaces linear combinations with collages of image patches, FMix (Harris et al., 2020) combines image regions based on frequency contents, and AlignMixup (Venkataramanan et al., 2022) combines images after spatial alignment. Manifold mixup (Verma et al., 2019) replaces the mixing in input space with the mixing of learned representations, making it applicable to text embeddings.

**Mixup for OOD generalization.** Mixup has been integrated into existing techniques for domain adaptation (DomainMix (Xu et al., 2020)), domain generalization (FIXED (Lu et al., 2022b)), and meta learning (RegMixup (Pinto et al., 2022)). This paper focuses on variants we call "selective mixup" that use non-uniform sampling of the pairs of mixed examples. LISA (Yao et al., 2022b) proposes two heuristics, same-class/different-domain and vice versa, used in proportions tuned by cross-validation on each dataset. Palakkadavath et al. (2022) use same-class pairs and an additional objective to encourage invariance of the representations to the mixing. CIFair (Tian et al., 2023) uses same-class pairs with a contrastive objective to improve algorithmic fairness. SelecMix (Hwang et al., 2022) proposes a selection heuristic to handle biased training data: same class/different biased attribute, or vice versa. DomainMix (Xu et al., 2020) uses different-domain pairs for domain adaptation. DRE (Li et al., 2023) uses same-class/different-domain pairs and regularizes their Grad-CAM explanations to improve OOD generalization. SDMix (Lu et al., 2022a) applies mixup to examples from different domains, with further refinements, to improve cross-domain generalization for activity recognition.

**Explaining the benefits of mixup** has invoked regularization (Zhang et al., 2020) and augmentation (Kimura, 2021) effects, the introduction of label noise (Liu et al., 2023), and the learning of rare features (Zou et al., 2023). These works focus on the mixing and in-domain generalization, whereas we focus on the selection and OOD generalization.

**Training on resampled data.** We find that selective mixup is sometimes equivalent to training on resampled or reweighted data. Both are standard tools to handle distribution shifts in a domain adaptation setting (Japkowicz, 2000; Kubat et al., 1997) and are also known as importance-weighted empirical risk minimization (IW-ERM) (Shimodaira, 2000; Gretton et al., 2009). For covariate shift, IW-ERM assigns each training point $x$ of label $y$ a weight equal to the likelihood ratio $p_{\text{target}}(x) / p_{\text{source}}(x)$, and for label shift, $p_{\text{target}}(y) / p_{\text{source}}(y)$ (Azizzadenesheli et al., 2019; Lipton et al., 2018). Several works recently showed that reweighting and resampling are competitive with the state of the art in various OOD (Idrissi et al., 2022; Park et al., 2022; Perrett et al., 2023; Sagawa et al., 2019) and label-shift settings (Garg et al., 2023).

6 CONCLUSIONS AND OPEN QUESTIONS

**Conclusions.** This paper helps understand selective mixup, one of the most successful and general methods for distribution shifts. We showed unambiguously that much of the improvements were actually unrelated to the mixing operation and could be obtained with much simpler, well-known resampling methods. On datasets where mixup does bring benefits, we could then obtain even better results by combining the independent effects of the best mixup and resampling variants.
**Limitations.** We focused on the simplest version of selective mixup as described by Yao et al. (2022b). Many papers combine the principle with modifications to the learning objective (Hwang et al., 2022; Li et al., 2023; Lu et al., 2022a; Palakkadavath et al., 2022; Tian et al., 2023; Xu et al., 2020). Resampling likely plays a role in these methods too, but this claim requires further investigation. We evaluated "only" five datasets; since we introduced simple ablations that can single out the effects of resampling, we hope to see future re-evaluations of other datasets. Because we picked datasets that had previously shown benefits with selective mixup, we cannot fully verify the predicted failure when there is no "regression toward the mean" in the data. Still, we do present one experiment in Appendix C that convincingly verifies this prediction on yearbook by swapping the ID and OOD data. Finally, this work is not about designing new algorithms to surpass the state of the art: our focus is on improving the scientific understanding of existing mixup strategies and their limitations.

**Open questions.** Our results leave open the question of the applicability of selective mixup to real situations. The "regression toward the mean" explanation indicates that much of the observed improvements are accidental, since they rely on an artefact of some datasets. In real deployments, distribution shifts can be foreseen neither in nature nor in magnitude. This is a reminder of the relevance of Goodhart's law to machine learning (Teney et al., 2020) and of the risk of overfitting to popular benchmarks (Liao et al., 2021).

REFERENCES

Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, and Animashree Anandkumar. Regularized learning for domain adaptation under label shifts. *arXiv preprint arXiv:1903.09734*, 2019.

Chengtai Cao, Fan Zhou, Yurou Dai, and Jianping Wang. A survey of mix-based data augmentation: Taxonomy, methods, applications, and explainability. *arXiv preprint arXiv:2212.10888*, 2022.

Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, and Zachary C. Lipton. RLSbench: Domain adaptation under relaxed label shift. *arXiv preprint arXiv:2302.03020*, 2023.

Arthur Gretton, Alex Smola, Jiayuan Huang, Marcel Schmittfull, Karsten Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. *Dataset Shift in Machine Learning*, 2009.

Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. *arXiv preprint arXiv:2007.01434*, 2020.

Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, and Jonathon Hare. FMix: Enhancing mixed sample data augmentation. *arXiv preprint arXiv:2002.12047*, 2020.

Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, and Byoung-Tak Zhang. SelecMix: Debiased learning by contradicting-pair sampling. *arXiv preprint arXiv:2211.02291*, 2022.

Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In *Conference on Causal Learning and Reasoning*, 2022.

Md Amirul Islam, Matthew Kowal, Konstantinos G. Derpanis, and Neil D. B. Bruce. SegMix: Co-occurrence driven mixup for semantic segmentation and adversarial robustness. *International Journal of Computer Vision*, 131(3):701–716, 2023.

Nathalie Japkowicz. The class imbalance problem: Significance and strategies. In *Proc. of the Int'l Conf. on Artificial Intelligence*, volume 56, pages 111–117, 2000.

Masanari Kimura. Why mixup improves the model performance. In *International Conference on Artificial Neural Networks (ICANN)*, 2021.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. WILDS: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, 2021.

Miroslav Kubat, Stan Matwin, et al. Addressing the curse of imbalanced training sets: One-sided selection. In *ICML*, volume 97, page 179, 1997.

Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Deeper, broader and artier domain generalization. In *IEEE International Conference on Computer Vision*, pages 5542–5550, 2017.

Tang Li, Fengchun Qiao, Mengmeng Ma, and Xi Peng. Are data-driven explanations robust against out-of-distribution data? *arXiv preprint arXiv:2303.16390*, 2023.

Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are we learning yet? A meta review of evaluation failures across machine learning. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)*, 2021.

Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. Detecting and correcting for label shift with black box predictors. In *International Conference on Machine Learning*, 2018.

Zixuan Liu, Ziqiao Wang, Hongyu Guo, and Yongyi Mao. Over-training with mixup may hurt generalization. *arXiv preprint arXiv:2303.01475*, 2023.
QIrYb3Vlze
It seems like the $H$ space in [1] (considered a reliable semantic space by the authors) is obtained from the pre-trained Diffusion Models. If Diffusion Models are trained from scratch with an additional objective, how do the authors ensure that the $H$ space in [1] and the $H$ space in this paper have similar properties?
ISOMETRIC REPRESENTATION LEARNING FOR DISENTANGLING LATENT SPACE OF DIFFUSION MODELS

Anonymous authors
Paper under double-blind review

ABSTRACT

Diffusion models have made remarkable progress in capturing and reproducing real-world data. Despite their success and further potential, their latent space, the core of diffusion models, still remains largely unexplored. In fact, the latent spaces of existing diffusion models do not align closely with human perception, entangling multiple concepts in a distorted space. In this paper, we present Isometric Diffusion, which equips a diffusion model with isometric representation learning to better reflect human intuition and understanding of visual data. Specifically, we propose a novel loss to promote isometry of the mapping between the latent space and the data manifold, enabling a semantically and geometrically better latent space. This approach allows diffusion models to learn a more disentangled latent space, enabling smoother interpolation and more precise control over attributes directly in the latent space. Our extensive experiments demonstrate the effectiveness of Isometric Diffusion, suggesting that our method helps to align the latent space with perceptual semantics. This work paves the way for fine-grained data generation and manipulation.

1 INTRODUCTION

Generative models produce images, texts, or other types of data by learning the distribution of the observed samples in a latent space and how to map it to the actual data space. In general, we desire the latent space to reflect human perception; that is, we wish to find a linear subspace of the latent space aligned with an attribute that humans perceive as important for distinguishing the observed samples. Equivalently, samples that look semantically similar to humans would be located nearby in the latent space, and vice versa. Such a latent space easily disentangles the key attributes from the human perspective, allowing us to control the generated samples as desired.

Recently, diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020; Song et al., 2020b) have achieved unprecedented success across multiple fields, including image generation (Dhariwal & Nichol, 2021; Nichol et al., 2021; Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2022), image editing (Kawar et al., 2023; Ruiz et al., 2023; Hertz et al., 2022), and video generation (Ho et al., 2022; Blattmann et al., 2023). However, compared to other generative models like generative adversarial networks (GANs) (Goodfellow et al., 2014) or variational autoencoders (VAEs) (Kingma & Welling, 2013), there are few studies exploring the latent space of diffusion models. Due to their iterative sampling process that progressively removes noise from random initial vectors, it is complicated to analyze or manipulate the latent vectors. A naive latent walk by linear interpolation between two latent vectors, for example, turns out to produce unwanted intermediate images, as illustrated in Fig. 1 (top).

A couple of recent works report important observations about the latent space $\mathcal{X}$ learned by diffusion models. First of all, Kwon et al. (2023) discovers that a diffusion model already has a semantic latent space $\mathcal{H}$ in the intermediate feature space of its score model. They suggest that $\mathcal{H}$ is semantically well-defined and locally Euclidean, and thus linear perturbations in $\mathcal{H}$ lead to approximately linear changes in semantic attributes.
However, manipulating attributes indirectly through $\mathcal{H}$ is not fully desirable. One reason is the additional computation accompanying this indirect manipulation, as it requires two full reverse diffusion processes. According to Kwon et al. (2023), an asymmetric reverse process is required to change the image, and this requires two independent inferences of the score model with different inputs: $\epsilon_t(x_t)$ and $\hat{\epsilon}_t(x_t) = \epsilon_t(x_t; f(x_t, t))$, where $f$ is an additional neural network to find the editing direction. Another computational cost comes from training $f$ to find local editing directions at every point of $\mathcal{H}$, recomputed after every step forward in $\mathcal{H}$. With this indirect approach, a clear relationship between $\mathcal{X}$ and $\mathcal{H}$ has not been established, leaving it as an open question how to directly manipulate a particular attribute from the latent vector $x \in \mathcal{X}$ instead of $(x, h) \in \mathcal{X} \otimes \mathcal{H}$.

A subsequent work (Park et al., 2023b) suggests that a spherical linear interpolation (slerp) in $\mathcal{X}$ is close to a geodesic in $\mathcal{H}$, which implies it approximates a linear interpolation (lerp) in $\mathcal{H}$. This discovery indicates that we may be able to manipulate the semantics of a generated image directly in $\mathcal{X}$, with some care for the spherical geometry of the latent space. To illustrate, we explore $\mathcal{X}$ by sequentially generating images on a spherically interpolated trajectory between two latent vectors $x, x' \in \mathcal{X}$. Fig. 1 (mid) illustrates that this is not a geodesic on the data manifold; on the trajectory between two men, it unnecessarily goes through a woman. This can be interpreted as evidence that there exists some distortion in the latent space of diffusion models, implying that they fail to adequately preserve the geometric structure of the data manifold. In other words, the latent space and perceptual semantics do not align well. Such a misalignment often leads to entanglement of multiple semantic concepts, making fine-grained manipulations tricky.

Motivated by the desire to directly align the latent space with the data manifold, we present Isometric Diffusion, a diffusion model equipped with isometric representation learning, where an isometry is a distance-preserving map between metric spaces, which also preserves geodesics. More specifically, we introduce a novel loss to encourage isometry between $\mathcal{X}$ and the data manifold. With this additional supervision, the learned $\mathcal{X}$ allows semantically disentangled geodesic traversal and smoother interpolation with less abrupt changes when navigating $\mathcal{X}$, as illustrated in Fig. 1 (bottom). We demonstrate the effectiveness of our proposed method through extensive experiments, both quantitative and qualitative, with several widely-used metrics on multiple datasets.

2 LATENT SPACE OF DIFFUSION MODELS

In this section, we briefly review the latent spaces of diffusion models and illustrate the objective of achieving a better disentangled latent space.

2.1 LATENT SPACE $\mathcal{X}$ OF DIFFUSION MODELS

Given an observed image space, denoted by $\mathcal{X}_0$, the forward process of diffusion models repeatedly perturbs an image $x_0 \in \mathcal{X}_0$ by $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_0$, with noise $\epsilon_0 \sim \mathcal{N}(0, I)$, for $t = 1, \ldots, T$ and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$.
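For concreteness, the forward perturbation above can be written in a few lines (a PyTorch sketch of our own, assuming a standard linear beta schedule; this is not the paper's code):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # a standard linear beta schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # alpha_bar_t = prod_{i<=t} (1 - beta_i)

def q_sample(x0, t):
    """Sample x_t ~ N(sqrt(alpha_bar_t) x0, (1 - alpha_bar_t) I)."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps

x0 = torch.randn(3, 256, 256)  # stand-in for an image
xT = q_sample(x0, T - 1)       # nearly pure Gaussian noise at t = T
```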
These perturbed images $x_t$ construct a chain of latent spaces for $t = 1, \ldots, T$, and the image space at each time step $t$ is denoted by $\mathcal{X}_t$. For simplicity, we denote $\mathcal{X}_T = \mathcal{X}$. To recover the original image $x_0$ from $x_T$, diffusion models train a score model $s_\theta$ by minimizing the following denoising score matching loss (Vincent, 2011; Song et al., 2020b):
$$L_{\text{dsm}} = \mathbb{E}_t \left\{ \lambda(t)\, \mathbb{E}_{x_0} \mathbb{E}_{x_t | x_0} \left[ \left\| s_\theta(x_t, t) - \nabla_{x_t} \log p_t(x_t | x_0) \right\|_2^2 \right] \right\}, \quad (1)$$
where $\theta$ is the set of learnable parameters of the score model and $\lambda(t)$ is a positive weighting function. With the trained $s_\theta$, we can generate an image $x_0$ from a sample $x_T \sim \mathcal{N}(0, I)$ through the reverse diffusion process.

Here, the norms of completely noised images $\|x_T\|_2$ follow a $\chi$-distribution, so the noised images are distributed on the shell of a sphere, not uniformly within the sphere (see Sec. 3.1 for more details). For this reason, linearly interpolating two images within $\mathcal{X}$, as shown in Fig. 1 (top), results in a path far from a geodesic on the data manifold, while spherical linear interpolation follows a shorter path. As seen in Fig. 1 (mid), however, the spherical linear interpolation is still semantically not disentangled, indicating that $\mathcal{X}_T$ is not isometric to the data manifold.

2.2 INTERMEDIATE LATENT SPACE $\mathcal{H}$ AS A SEMANTIC SPACE

Kwon et al. (2023) claims that the learned intermediate feature space $\mathcal{H}$ of the score model $s_\theta$ sufficiently preserves the semantics of the observed images. They report that a linear scaling by $\Delta h$ on $\mathcal{H}$ controls the magnitude of semantic changes, and applying the same $\Delta h$ to a different sample results in a similar magnitude of effect. This implies that, by minimizing the loss in Eq. (1), $\mathcal{H}$ reasonably learns the low-dimensional data manifold with its geometry preserved, and $\mathcal{H}$ is close to isometric to the data manifold. Therefore, we claim that as the mapping from $\mathcal{X}$ to $\mathcal{H}$ becomes closer to isometric, the mapping of the data manifold from $\mathcal{X}$ can also become more isometric. The advantages of achieving this objective are covered in Appendix E.

Motivated by these observations, we aim to train the encoder of the score model in a way that ensures isometry. By aligning a spherical trajectory in $\mathcal{X}$ with a geodesic in $\mathcal{H}$, our encoder paves the way for a more coherent utilization of $\mathcal{X}$ as a semantic space.

3 ISOMETRIC REPRESENTATION LEARNING FOR DIFFUSION MODELS

The goal of our work is to learn a latent space $\mathcal{X}$ which reflects the semantics perceived by humans. As this is not straightforward to achieve directly, we rely on a recent observation by Kwon et al. (2023) that the bottleneck layers $\mathcal{H}$ of diffusion models reasonably reflect semantics (Sec. 2.2). Thus, instead of building a semantic latent space from scratch, our approach aims to learn a geodesic-preserving mapping between $\mathcal{X}$ and $\mathcal{H}$. For this, we claim that a scaled isometric mapping (Lee et al., 2021) guides the encoder of the diffusion model to preserve geodesics between the two spaces (Sec. 3.2), namely between an approximated spherical latent space $\mathcal{X}'$ (Sec. 3.1) and the semantic latent space $\mathcal{H}$.
Fig. 3 illustrates the overall flow of our approach. With stereographic coordinates for $\mathcal{X}$ and Cartesian coordinates for $\mathcal{H}$ as local coordinates, respectively, we equip the local coordinate spaces with appropriate Riemannian metrics. Then, we guide the encoder of the score model to map from $\mathcal{X}$ to $\mathcal{H}$ so as to preserve geodesics between them. Lastly, we discuss computational considerations (Sec. 3.3).

**Illustration.** Before introducing our method, we first illustrate the purpose of isometric representation learning with a toy autoencoder model, learning an encoding map from $S^2$ to $\mathbb{R}^2$. The autoencoder is trained with the reconstruction loss, regularized with the isometric loss in Eq. (6).

Figure 3: Illustration of $\mathcal{X}$, $\mathcal{H}$, and the local coordinates of the two manifolds. Our isometric loss regularizes the encoder of the score model to map a spherical trajectory in $\mathcal{X}$ to a linear trajectory in $\mathcal{H}$, preserving a geodesic in $\mathcal{X}$ as a geodesic in $\mathcal{H}$. $\Pi_{n-1}$ and $\Phi$ are charts mapping from the Riemannian manifolds to the local coordinate spaces; $z$ and $z'$ denote the local coordinates of $\mathcal{X}$ and $\mathcal{H}$, respectively.

Fig. 2 illustrates an autoencoder flattening the given $S^2$ manifold in (a) with three different losses. With the reconstruction loss only, in (b), the manifold is significantly distorted: points far apart in the input are often located closely. We observe less distortion with the isometric loss under the assumption of a Euclidean metric in the local coordinates of $S^2$ ($G = I$) in (c), but it still does not preserve geodesics. With our full loss in (d), the geometry of the input space is better preserved with $G = G_{\text{stereographic}}$ from Eq. (3). We provide more illustrations in Appendix B.

Recall that the sampling process of diffusion models starts from Gaussian noise, $x_T \sim \mathcal{N}(0, I_n) \in \mathbb{R}^n$, where $T$ is the number of reverse time steps. The radii of the Gaussian noise vectors $x_T$ then follow a $\chi$-distribution: $r = \sqrt{\sum_{i=1}^{n} x_{T,i}^2} \sim \chi(n)$, whose mean and variance are approximately $\sqrt{n}$ and $1$, respectively. For a sufficiently large $n$ (e.g., $n = 3 \times 256^2$ to generate an image of size $256 \times 256$), the noise vectors reside within close proximity of a hypersphere with $r = \sqrt{n}$.

3.1 SPHERICAL APPROXIMATION OF THE LATENT SPACE

From this observation, we approximate that the noise vectors $x \in \mathcal{X}$ (we omit subscripts to be uncluttered) reside on the hypersphere manifold $S^{n-1}(r) = \{x \in \mathbb{R}^n : \|x\| = r\}$. To define a Riemannian metric on $S^{n-1}(r)$, we need to choose charts and local coordinates to represent the Riemannian manifolds (Miranda, 1995). We choose stereographic coordinates (Apostol, 1974) as the local coordinates to represent $\mathcal{X}$, and $\Phi = \text{id}$ following the linearity argument for $\mathcal{H}$ (Kwon et al., 2023). The stereographic projection $\Pi_{n-1}: S^{n-1}(r) \setminus \{N\} \to \mathbb{R}^{n-1}$ is a bijective transformation from every point on the hypersphere except the north pole ($N$) to a plane, with the north pole as the reference point.
$\Pi_{n-1}$ and its inverse projection $\Pi_{n-1}^{-1}$ are given by
$$\Pi_{n-1}(x) = \frac{r}{r - x_n}(x_1, x_2, \cdots, x_{n-1}), \quad \Pi_{n-1}^{-1}(z) = \frac{1}{|z|^2 + r^2}\big(2r^2 z_1, 2r^2 z_2, \cdots, 2r^2 z_{n-1}, r(|z|^2 - r^2)\big). \quad (2)$$
In stereographic coordinates, the Riemannian metric of $S^{n-1}(r)$ (do Carmo, 1992) is given by
$$G_{\text{stereographic}}(z) = \frac{4r^4}{(|z|^2 + r^2)^2} I_{n-1}, \quad \forall z \in \mathbb{R}^{n-1}. \quad (3)$$

Recall that a diffusion model consists of a chain of latent spaces. Hence, the validity of the spherical approximation needs to be verified at every time step. From $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_0$, the variance of the perturbation kernels is $\text{Var}[p(x_t|x_0)] = 1 - \bar{\alpha}_t = 1 - e^{-\int_0^t \beta(s)\, ds}$ (Song et al., 2020b).
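A small numerical sketch (our own, with illustrative names) checks both the spherical concentration and the projection pair in Eq. (2): Gaussian noise norms concentrate near $\sqrt{n}$, and $\Pi_{n-1}^{-1} \circ \Pi_{n-1}$ is the identity on the sphere.

```python
import torch

n = 3 * 256 * 256
r = float(n) ** 0.5
x = torch.randn(4, n, dtype=torch.float64)
print(x.norm(dim=1) / r)  # all very close to 1: concentration on S^{n-1}(sqrt(n))

def stereo(x):
    """Pi_{n-1}: project x on S^{n-1}(r) (minus the north pole) to R^{n-1}."""
    return r * x[..., :-1] / (r - x[..., -1:])

def stereo_inv(z):
    s = (z ** 2).sum(-1, keepdim=True)
    return torch.cat([2 * r ** 2 * z, r * (s - r ** 2)], dim=-1) / (s + r ** 2)

def metric_scale(z):
    """Scalar factor of G_stereographic(z) = 4 r^4 / (|z|^2 + r^2)^2 * I, Eq. (3)."""
    return 4 * r ** 4 / ((z ** 2).sum(-1) + r ** 2) ** 2

x_s = r * x / x.norm(dim=1, keepdim=True)          # snap samples exactly onto the sphere
err = (stereo_inv(stereo(x_s)) - x_s).abs().max()
print(err)                                         # tiny: the round trip is consistent
```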
We claim that the scaled isometry leads to a geodesic-preserving mapping \[ \arg \min_{\gamma(t)} \int_0^1 \sqrt{\dot{\gamma}(t)^T G(\gamma(t)) \dot{\gamma}(t)} dt = \arg \min_{\gamma(t)} \int_0^1 \sqrt{\dot{\gamma}(t)^T J(\gamma(t))^T H(f(\gamma(t))) J(\gamma(t)) \dot{\gamma}(t)} dt, \] for an arbitrary trajectory \( \gamma : [0, 1] \rightarrow \mathbb{R}^n \) in local coordinates of \( M_1 \) with fixed endpoints \( (\gamma(0) = x_0, \gamma(1) = x_1) \), where \( x_0, x_1 \in \mathbb{R}^n \) are constant vectors and \( \dot{\gamma}(t) = \frac{d\gamma}{dt}(t) \). **Isometry Loss.** To sum up, we can encourage the mapping from \( X \) to \( H \) to preserve geodesics by regularizing \( R(z) = J_f(z)^T H(f(z)) J_f(z) G^{-1}(z) = c I \), for some \( c \in \mathbb{R} \). It can be achieved by minimizing the following isometry loss: \[ L_{iso}(e_\theta, t) = \frac{\mathbb{E}_{x_t \sim P(x_t)} [\text{Tr}(R^2(z_t))]}{\mathbb{E}_{x_t \sim P(x_t)} [\text{Tr}(R(z_t))]^2} = \frac{\mathbb{E}_{x_t \sim P(x_t)} \mathbb{E}_{v \sim N(0, I)} [v^T R(z_t)^T R(z_t) v]}{\mathbb{E}_{x_t \sim P(x_t)} \mathbb{E}_{v \sim N(0, I)} [v^T R(z_t) v]^2}, \] where \( P(x_t) \) is the noise probability distribution at timestep \( t \), and \( z_t = \Pi_{n-1}(x_t) \). The second equality holds due to the stochastic trace estimator (Hutchinson, 1989), where \( v \in \mathbb{R}^{n-1} \) is a random vector such that \( \mathbb{E}[vv^T] = I \). As a result, our final loss to train the score model is defined by \[ L = L_{dsm} + \lambda_{iso}(p, t)L_{iso}, \] where \( \lambda_{iso}(p, t) \) is a non-negative weighting function to control the relative importance of isometry regularizer for each \( X_t \) and \( p \in [0, 1] \) is the ratio of steps that we do not apply \( L_{iso} \). We use \( \lambda_{iso}(p, t) = \lambda_{iso} 1_{t' > pT}(t' = t) \) where \( 1(\cdot) \) is the indicator function, and the denoising process starts from \( t = T \). **Applying to Diffusion Models.** The isometric loss is not directly applicable to a diffusion model, since it iteratively generates the samples. To guide a geodesic mapping between \( h_T \in H \) and \( x_0 \) (an actual image), we may regularize each step of the iterative sequence; that is, the encoding map between \( x_i \) and \( h_i \) for \( i = 1, ..., T \). Instead of regularizing all steps, we may selectively apply it. For time steps closer to \( T \), samples are closer to a Gaussian, so our assumption may reasonably hold. For time steps closer to 0, however, samples are not sufficiently perturbed yet and thus they follow some intermediate distribution between the Gaussian and the original data distribution as described in Sec. 3.1. Hence, we may not assume these samples lie on \( S^{n-1} \) manifold. 3.3 Computational Considerations To sidestep the heavy computation of full Jacobian matrices, we use stochastic trace estimator to substitute the trace of Jacobian to Jacobian-vector product (JVP). Exploiting the commutativity of the Riemmanian metric in stereographic coordinates, we utilize \( \mathbb{E}_{v \sim \mathcal{N}(0, I)}[v^\top J^\top J G^{-1} v] = \mathbb{E}_{v \sim \mathcal{N}(0, I)}[v^\top \sqrt{G^{-1}} J^\top J \sqrt{G^{-1}} v] \) to reduce the number of JVP evaluations. We provide more details about the computation of stochastic trace estimator in Appendix A.2. 
4 EXPERIMENTS

We conduct extensive experiments to verify the effectiveness of our method on diffusion models and to corroborate that the latent space of diffusion models can be disentangled with the isometric loss $L_{\text{iso}}$.

4.1 EXPERIMENTAL SETTINGS

Dataset. We evaluate our approach on CIFAR-10, CelebA-HQ (Huang et al., 2018), LSUN-Church (Wang et al., 2017), and LSUN-Bedrooms (Wang et al., 2017). The training partition of each dataset consists of 50,000, 14,342, 126,227, and 3,033,042 samples, respectively. We resize each image to $256 \times 256$ (except for CIFAR-10) and horizontally flip it with probability 0.5.

Evaluation Metrics. Fréchet inception distance (FID) (Heusel et al., 2017) is a widely-used metric that assesses the quality of images created by a generative model by comparing the distribution of generated images with that of ground-truth images. Perceptual path length (PPL) (Karras et al., 2019) evaluates how well the generator interpolates between points in the latent space, defined as $\text{PPL} = \mathbb{E}\left[\frac{1}{\tau^2} d(x_t, x_{t+\tau})\right]$, where $d(\cdot, \cdot)$ is a distance function. We use the LPIPS (Zhang et al., 2018) distance with AlexNet (Krizhevsky et al., 2012) as $d$. A lower PPL indicates a better disentangled latent space: when two or more axes are entangled and a geodesic interpolation in $\mathcal{X}$ induces a sub-optimal trajectory in the semantic space, the LPIPS distance gets larger and thereby so does the PPL. For experimentation, we perform 20 and 100 steps of DDIM sampling for FID and PPL, computed with 10,000 and 50,000 images, respectively. Linear separability (LS) (Karras et al., 2019) measures the degree of disentanglement of a latent space by measuring how well the latent space can be separated by a hyperplane. Mean condition number (MCN) and variance of Riemannian metric (VoR), proposed by Lee et al. (2021), measure how close a mapping is to a scaled isometry. We provide further details on these metrics in Appendix D.

We additionally design a new metric called mean relative trajectory length (mRTL), measuring the extent to which a trajectory in $\mathcal{X}$ is mapped to a geodesic in $\mathcal{H}$. Specifically, mRTL is defined as the mean ratio between the L2 distance $d_2(t)$ between features $h, h' \in \mathcal{H}$ corresponding to two latents $x, x' \in \mathcal{X}$, and another distance $d_{\mathcal{M}}(t)$ measured on the manifold, following a path along $\{\mathcal{H}_t\}$. That is, $\text{RTL}(t) = \mathbb{E}_{x, x' \in \mathcal{X}}[d_{\mathcal{M}}(t)/d_2(t)]$ and $\text{mRTL} = \mathbb{E}_t[\text{RTL}(t)]$, where $t$ denotes the timesteps of the sampling schedule. Intuitively, mRTL represents the degree of isometry of the encoder $f$.

Implementation Details. Our network architecture follows the backbone of DDPM (Ho et al., 2020), which internally uses a U-Net (Ronneberger et al., 2015). We take a DDPM pre-trained on CelebA (Liu et al., 2015) as a starting point and further train it with each competing method until it achieves its lowest FID. If not specified, we train with batch size 32, learning rate $10^{-4}$, $p = 0.5$, and $\lambda_{\text{iso}} = 10^{-4}$ for 10 epochs by default. We use the Adam optimizer and an exponential moving average (Brown, 1956) of the model parameters with a decay factor of 0.9999. We set the number of inference steps to 100. We use 4 NVIDIA A100 GPUs with 40GB memory.
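To make the PPL protocol above concrete, here is a compact sketch (our own illustration; generate and lpips_dist are hypothetical stand-ins for the DDIM decoder and the LPIPS distance, not the paper's API, and latents is assumed to be a tensor of shape (N, d)):

```python
import torch

def slerp(x0, x1, t):
    """Spherical linear interpolation between two latents."""
    cos = torch.clamp((x0 * x1).sum() / (x0.norm() * x1.norm()), -1.0, 1.0)
    theta = torch.acos(cos)
    return (torch.sin((1 - t) * theta) * x0 + torch.sin(t * theta) * x1) / torch.sin(theta)

def ppl(generate, lpips_dist, latents, tau=1e-2, n_pairs=64):
    """PPL = E[(1 / tau^2) * d(x_t, x_{t+tau})] along slerp paths between random latents."""
    vals = []
    for _ in range(n_pairs):
        i, j = torch.randint(len(latents), (2,)).tolist()
        t = torch.rand(()) * (1 - tau)  # keep t + tau inside [0, 1]
        img_a = generate(slerp(latents[i], latents[j], t))
        img_b = generate(slerp(latents[i], latents[j], t + tau))
        vals.append(lpips_dist(img_a, img_b) / tau ** 2)
    return sum(vals) / n_pairs
```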
4.2 Quantitative Comparison

Overall Comparison. In Tab. 1 and Tab. 2, we quantitatively compare the performance of our method and DDPM (Base) on various metrics. The results indicate that diffusion models trained with our isometric loss regularizer exhibit a substantial drop (i.e., improvement) in PPL, implying smoother transitions during latent traversals. The decreases in mRTL, MCN, and VoR indicate that the encoder of the score model has moved closer to a scaled isometry. For CelebA-HQ, both LS and LS measured by an SVM with a radial basis function kernel (LS (radial)) significantly decreased, indicating the disentanglement of the latent space. This further implies better alignment between the latent space and the semantic space, disentangling semantic components in the latent space, as desired.

| Dataset | FID-10k ↓ (Base / Ours) | PPL-50k ↓ (Base / Ours) | mRTL ↓ (Base / Ours) | MCN ↓ (Base / Ours) | VoR ↓ (Base / Ours) |
| CIFAR-10 | 10.27 / 12.50 | 105 / 76 | 2.03 / 1.92 | 155 / 107 | 0.50 / 0.57 |
| CelebA-HQ | 15.89 / 16.18 | 648 / 570 | 2.67 / 2.50 | 497 / 180 | 1.42 / 0.85 |
| LSUN-Church | 10.56 / 13.01 | 2028 / 1587 | 3.71 / 3.21 | 375 / 217 | 1.92 / 1.37 |
| LSUN-Bedrooms | 9.49 / 11.95 | 4515 / 3809 | 3.38 / 3.21 | 320 / 186 | 1.69 / 1.12 |

Table 1: **Quantitative comparison.** Diffusion models trained with our isometry loss achieve consistent improvement over the baseline on multiple datasets, with a slight sacrifice in FID scores.

| Dataset | LS ↓ (Base / Ours) | LS (radial) ↓ (Base / Ours) |
| CelebA-HQ | 4.39 / 2.65 | 12.3 / 6.8 |

Table 2: **Quantitative comparison of linear separability (LS).** LS measures the disentanglement of the latent space.

We notice a trade-off between FID and the other metrics. Using our isometry loss, PPL and mRTL drop significantly, while FID sometimes marginally increases. In spite of the slightly increased FID, however, the quality of the generated images is not significantly damaged, e.g., as seen in the examples in Fig. V. With the improved PPL and mRTL, latent traversal gets smoother without abrupt changes, easing controlled image manipulation (see Sec. 4.3 for more details).

**Mean Relative Trajectory Length.** Fig. 5 shows the measured Relative Trajectory Length (RTL) scores across the reverse timesteps in DDIM ($T = 20$). As the guidance of the isometric loss gets larger with a larger $\lambda_{iso}$, the RTL tends to decrease, indicating that the geodesic in $X$ (slerp) maps to a geodesic in $\{\mathcal{H}_t\}$. We notice a significant drop when $t \leq 10$, especially with a larger $\lambda_{iso}$, where the isometric loss is applied. This indeed shows that the isometric loss is accurately guiding the encoder of the score model to learn an isometric representation.

### 4.3 Analysis on the Disentanglement of Latent Space $X$

**Interpolation.** We first conduct traversals on the latent space $X$ between two points $x, x' \in X$, illustrating the generated images from interpolated points between them in Fig. 6. We observe that with our isometric loss the latent space is better disentangled, resulting in smoother transitions without abrupt changes in gender. More examples are provided in Fig. VII-VIII in Appendix H.

**Linearity.** We also claim that the latent space $X$ learned with our isometric loss has a property of linearity. Specifically, we compare the images generated by our model to those of the baseline, where both are naively moved along the slerp in their respective latent spaces (a minimal sketch of such a traversal is given below).
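Reusing the hypothetical `slerp`, `generate`, and `lpips_fn` helpers from the sketch above, the traversal and the adjacent-frame LPIPS curve plotted in Fig. 6 can be reproduced roughly as follows:

```python
import torch

@torch.no_grad()
def traverse(generate, x0, x1, n_frames=10):
    """Frames along the slerp between two latents, plus the LPIPS distance
    between adjacent frames (the curves plotted in Fig. 6)."""
    ts = torch.linspace(0.0, 1.0, n_frames)
    frames = [generate(slerp(x0, x1, t.item())) for t in ts]
    dists = [lpips_fn(frames[i], frames[i + 1]).item()
             for i in range(n_frames - 1)]
    return frames, dists
```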
We illustrate this in Fig. 7 by demonstrating that a spherical perturbation on $X$ with varying intensity $\gamma$ along a direction $\Delta x$ adds or removes specific attributes from the generated images accordingly. We find the editable direction by employing Local Basis (Jang et al., 2022), an unsupervised method for identifying semantic-factorizing directions in the latent space based on its local geometry, and perturb the latents along this direction for both the baseline and our model. This method discovers the principal variations of the latent space in the neighborhood of the base latent code. As seen in Fig. 7, the baseline often changes multiple factors (age, gender) abruptly and inconsistently with $\gamma$ (e.g., when $\gamma = -1$ in the right example, it suddenly shows a male-like output), while ours shows smoother changes. With previous diffusion-based image editing methods, one needed to take into account the geometry of $\mathcal{H}$ for every step in the editing trajectory (Park et al., 2023b). This requires computing the Jacobian and its eigenvectors at every step forward in the trajectory via parallel transport along $\mathcal{H}$, which is usually approximated via a projection, referred to as geodesic shooting. Using our isometric loss, on the other hand, the editing trajectory becomes closer to the trivial geodesic of the latent space, the slerp in $\mathcal{X}$. Thus, we can directly move along the slerp in $\mathcal{X}$ without requiring any additional computations or approximations to find the editing direction of the image.

Figure 6: Examples of latent traversal between two images $x$ and $x'$ with DDPM (Ho et al., 2020), trained on $256 \times 256$ CelebA-HQ. We observe unnecessary changes of female $\rightarrow$ male in the baseline, while smoother transitions occur in ours. For quantitative support, we plot the LPIPS distance between each pair of adjacent frames (blue: baseline, orange: ours).

Figure 7: **Linearity.** Images generated from a latent vector $x$ (corresponding to the boxed columns) and from slightly perturbed ones, $x + \gamma \Delta x$ with $\gamma \in \{-2, -1, 0, 1, 2\}$, where $\Delta x$ corresponds to the age axis.

### 4.4 Ablation Study

Tab. 3 shows the ablation study on the choice of the optimal $p$ and $G$. With $p = 0.5$ and $G = G_{\text{stereographic}}$, we observe the best performance in FID and PPL. FID increases with $p < 0.5$, while the PPL improvement becomes marginal when $p > 0.5$. Also, when calculating the isometric loss, using an appropriate Riemannian metric $G$ of the latent space turns out to be important. That is, the model with $G = G_{\text{stereographic}}$ achieves competitive FID and PPL scores at the same time, while either of them gets significantly worse with $G = I$. This result supports our spherical assumption on the latent space $\mathcal{X}$ of diffusion models: modeling it as a Riemannian manifold $S^{n-1}$ is indeed reasonable.

| $p$ | $G$ | $\lambda_{\text{iso}}$ | FID-10k ↓ | PPL-50k ↓ |
|-----|-----|----------------|-----------|-----------|
| 1 | - | - | 15.89 | 653 |
| 0 | I | $10^{-4}$ | 24.07 | 447 |
| 0.5 | I | $10^{-3}$ | 30.28 | 441 |
| 0.5 | I | $10^{-4}$ | 16.60 | 619 |
| 0.5 | $G_{\text{stereographic}}$ | $10^{-4}$ | 16.18 | 570 |

Table 3: **Ablation study** on $p$ (the ratio of steps that skip the isometric loss) and $G$ (the choice of Riemannian metric). This experiment was conducted on CelebA-HQ $256 \times 256$.

## 5 RELATED WORKS

### Diffusion models.
Recently, diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b) have achieved great success in a wide range of fields, including image generation (Dhariwal & Nichol, 2021; Baranchuk et al., 2021; Choi et al., 2021b; Sehwag et al., 2022; Meng et al., 2023), image synthesis (Meng et al., 2021; Tumanyan et al., 2023; Liu et al., 2023), video generation (Ho et al., 2022; Blattmann et al., 2023), and sound generation (Yang et al., 2023). From pure Gaussian noise, DDPM (Ho et al., 2020) samples an image by predicting the next distribution using the Markov chain property. With a non-Markovian process, DDIM (Song et al., 2020a) accelerates the denoising process of DDPM by skipping sampling steps.

### Latent Space of Generative Models.

Among traditional Generative Adversarial Network (GAN) models (Goodfellow et al., 2014; Radford et al., 2015; Zhu et al., 2017; Choi et al., 2018; Ramesh et al., 2018; Härkönen et al., 2020; Abdal et al., 2021), StyleGAN (Karras et al., 2019) is a pioneering work on latent space analysis and improvement. In StyleGANv2 (Karras et al., 2020), a path length regularizer guides the generator to learn an isometric mapping from the latent space to the image space. Recently, additional studies on GANs (Shen et al., 2020a;b; Shen & Zhou, 2021) and VAEs (Hadjeres et al., 2017; Zheng & Sun, 2019; Zhou & Wei, 2020) have examined the latent spaces of generative models. Kwon et al. (2023) found that the internal feature space of the U-Net in diffusion models, $\mathcal{H}$, plays the role of a semantic latent space. Preechakul et al. (2022) discovered that using a semantic encoder enables access to the semantic space of diffusion models. However, this method relies on a conditional diffusion model, while our work proposes a method that can directly utilize the latent space without any condition.

### Isometric Latent Space for Generative Models.

There exist previous works that utilize Riemannian geometry to understand latent spaces. Arvanitidis et al. (2021) claimed that understanding the Riemannian geometry of the latent space can improve the analysis of representations as well as generative modeling. Chen et al. (2020) proposed that interpreting the latent space as a Riemannian manifold and regularizing the Riemannian metric to be a scaled identity help VAEs learn a good latent representation. Lee et al. (2021) proposed an isometric regularization method for geometry-preserving latent space coordinates in a scale-free and coordinate-invariant form. However, due to the iterative nature of diffusion models, unlike VAEs and GANs, it is challenging to apply isometric representation learning to diffusion models. Thus, to the best of our knowledge, no previous work has applied an isometric mapping to the semantic space of diffusion models.

## 6 SUMMARY AND LIMITATIONS

In this paper, we have addressed a critical issue in the field of generative models, specifically unconditional diffusion models. In spite of their advances in generating photorealistic samples, they have lagged behind in terms of understanding and controlling their latent spaces. The proposed approach, **Isometric Diffusion**, leverages isometric representation learning to bridge the gap between the latent space $\mathcal{X}$ and the data manifold.
With the mapping from the latent space to the data manifold learned by our approach being close to an isometry, we demonstrate both quantitatively and qualitatively that a more intuitive and disentangled latent space for diffusion models can be achieved.

### Limitations.

Our proposed method applies primarily to noise spaces close to a Gaussian distribution, which limits its scope. Overcoming this limitation would be an interesting direction for future work.

ETHICS STATEMENT

The approach proposed in this paper aims to ease image and video editing, selectively adjusting certain aspects of them as intended. Our work shares the ethical issues of generative models that are currently known in the research community: to name a few, deepfakes, fake news, and malicious editing to manipulate evidence. We believe our work does not significantly worsen these concerns in general, but a better disentangled latent semantic space obtained with our approach might make such abuse easier as well. Also, other relevant ethical issues regarding potential discrimination caused by a biased dataset remain the same with our approach, neither improving nor worsening ethical concerns in this aspect. A collective effort within the entire research community and society will be important to keep generative models beneficial.

REPRODUCIBILITY STATEMENT

We submit the code used for the experiments in this paper as supplementary material and plan to release it publicly upon acceptance. Readers should be able to reproduce the reported results by running this code. We also describe the detailed experimental settings, including hyperparameters and hardware environments, in Sec. 4.1 and 4.4.

REFERENCES

Rameen Abdal, Peihao Zhu, Niloy J Mitra, and Peter Wonka. StyleFlow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics (ToG), 40(3):1–21, 2021.

T.M. Apostol. Mathematical Analysis. Addison-Wesley series in mathematics. Addison-Wesley, 1974. ISBN 9780201002881.

Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. Latent space oddity: on the curvature of deep generative models, 2021.

Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv:2112.03126, 2021.

Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In CVPR, 2023.

Robert G. Brown. Exponential smoothing for predicting demand, 1956.

Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, and Patrick van der Smagt. Learning flat latent manifolds with vaes. arXiv:2002.04881, 2020.

Jaewoong Choi, Junho Lee, Changyeon Yoon, Jung Ho Park, Geonho Hwang, and Myungjoo Kang. Do not escape from the manifold: Discovering the local coordinates on the latent space of gans. arXiv:2106.06959, 2021a.

Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv:2108.02938, 2021b.

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NIPS, 34, 2021.

M.P. do Carmo. Riemannian Geometry. Mathematics (Birkhäuser) theory. Birkhäuser Boston, 1992.
ISBN 9780817634902. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. NIPS, 27, 2014.
vQqJJzL2Jf
Hence, the difference in the PDE behaviour may also come from how long a certain trajectory is followed, not intrinsically from what kind of equation it is. Then, a user of the method may feel safe to extrapolate a given PDE that has seemed to be
Understanding and Mitigating Extrapolation Failures in Physics-Informed Neural Networks

Anonymous authors Paper under double-blind review

Abstract

Physics-informed Neural Networks (PINNs) have recently gained popularity due to their effective approximation of partial differential equations (PDEs) using deep neural networks (DNNs). However, their out-of-domain behavior is not well understood, with previous work speculating that the presence of high-frequency components in the solution function might be to blame for poor extrapolation performance. In this paper, we study the extrapolation behavior of PINNs on a representative set of PDEs of different types, including high-dimensional PDEs. We find that failure to extrapolate is not caused by high frequencies in the solution function, but rather by shifts in the support of the Fourier spectrum over time. We term these shifts spectral shifts and quantify them by introducing a Weighted Wasserstein-Fourier distance (WWF). We show that the WWF can be used to predict PINN extrapolation performance, and that in the absence of significant spectral shifts, PINN predictions stay close to the true solution even in extrapolation. Finally, we propose a transfer learning-based strategy to mitigate the effects of larger spectral shifts, which decreases extrapolation errors by up to 82%.

1 Introduction

Understanding the dynamics of complex physical processes is crucial in many applications in science and engineering. Oftentimes, these dynamics are modeled as partial differential equations (PDEs) that depend on time. In the PDE setting, we want to find a solution function \( u(x,t) \) that satisfies a given governing equation of the form
\[ f(x,t) := u_t + N(u) = 0, \quad x \in \Omega, \quad t \in [0,T] \]
where \( u_t := \frac{\partial u}{\partial t} \) denotes the partial derivative of \( u \) with respect to time, \( N \) is a generally nonlinear differential operator, \( \Omega \subset \mathbb{R}^d \), with \( d \in \{1,2,3\} \), is a spatial domain, and \( T \) is the final time for which we are interested in the solution. Moreover, we impose an initial condition \( u(x,0) = u^0(x), \forall x \in \Omega \) on \( u(x,t) \), as well as a set of boundary conditions. Together, these conditions specify the behavior of the solution on the boundaries of the spatio-temporal domain.

Following the recent progress in deep learning, physics-informed neural networks (PINNs), as introduced in Raissi et al. (2019), have garnered attention because of their simple but effective way of approximating time-dependent PDEs with deep neural networks. PINNs preserve important physical properties described by the governing equations by parameterizing the solution and the governing equation simultaneously with a set of shared network parameters. After the success of the seminal paper Raissi et al. (2019), many follow-up works have applied PINNs to solve various PDE applications, e.g., Anitescu et al. (2019); Yang et al. (2021); Zhang et al. (2018); Doan et al. (2019). Physics-informed loss terms have also proven useful in machine learning more generally (Davini et al.; Cai et al., 2021).

Related work. Most previous studies using the standard PINNs introduced in Raissi et al. (2019) have demonstrated the performance of their methods in interpolation only, i.e., on a set of testing points sampled within the same temporal range that the network was trained on. We refer to points sampled beyond the final time of the training domain as extrapolation.
In principle, standard PINNs are expected to be able to learn the dynamics in Eq. (1) and, consequently, to approximate \( u(x,t) \) accurately in extrapolation. However, previous work in Kim et al. (2020) and Bonfanti et al. (2023) has shown that this is not the case: PINNs can deviate significantly from the true solution once they are evaluated in an extrapolation setting, calling into question their capability as a tool for learning the dynamics of physical processes. From a foundational standpoint, studying extrapolation can therefore give us insights into the limitations of PINNs more generally. From a practical standpoint, constantly retraining PINNs from scratch when faced with a point that is outside their initial training domain is undesirable (Bonfanti et al., 2023; Zhu et al., 2022), so anticipating whether their predictions remain accurate is crucial. Several recent papers have recognized the importance of the extrapolation problem in PINNs (Kapoor et al., 2023; Bonfanti et al., 2023; Cuomo et al., 2022; Kim et al., 2020), and at least two have proposed methods to address it (Kim et al., 2020; Kapoor et al., 2023). However, even a basic characterization of extrapolation behavior for PINNs trained to solve time-dependent PDEs is still absent from the literature. Previous works consider standard PINNs incapable of extrapolating beyond the training domain and suspect that implicit biases in deep neural networks cause the learned solution to become smooth or flat in extrapolation, thus implying that the presence of high frequencies in the solution function might lead to extrapolation failures (Bonfanti et al., 2023). Finally, there are, to the best of our knowledge, no theoretical works on the extrapolation capabilities of PINNs. Previous works have focused on PINN generalization in interpolation only (Mishra and Molinaro, 2022).

Contributions. In this paper, our contributions are therefore as follows. (i) We show that PINNs are capable of almost perfect extrapolation behavior for certain PDEs. (ii) We characterize these PDEs by analyzing the Fourier spectra of their solution functions and argue that standard PINNs generally fail to anticipate shifts in the support of the Fourier spectrum over time. We quantify these spectral shifts using the Wasserstein-Fourier distance. (iii) We clarify that, unlike with training failures in interpolation, the presence of high frequencies alone is not to blame for the poor extrapolation behavior of PINNs on some PDEs. (iv) We show that these insights generalize to high-dimensional PDEs, and (v) we demonstrate that transfer learning on a set of similar PDEs can reduce extrapolation errors significantly when spectral shifts are present.

The structure of the paper is as follows: in Section 2, we formally introduce PINNs and define what we mean by interpolation and extrapolation. Section 3 characterizes the PDEs for which good extrapolation accuracy is possible using the Fourier spectra of their solution functions and introduces the Weighted Wasserstein-Fourier distance. In Section 4, we investigate the viability of transfer learning approaches in improving extrapolation. Section 5 discusses our results and concludes.

2 BACKGROUND AND DEFINITIONS

Physics-Informed Neural Networks. As mentioned in the previous section, PINNs parameterize both the solution \( u \) and the governing equation \( f \). Denote the neural network approximating the solution \( u(x,t) \) by \( \tilde{u}(x,t;\theta) \) and let \( \theta \) be the network's weights.
Then the governing equation \( f \) is approximated by a neural network \( \tilde{f}(x,t,\tilde{u};\theta) := \tilde{u}_t + N(\tilde{u}(x,t;\theta)) \). The partial derivatives here can be obtained via automatic differentiation. We note that \( \tilde{f}(x,t,\tilde{u};\theta) \) shares its network weights with \( \tilde{u}(x,t;\theta) \). The name "physics-informed" neural network comes from the fact that the physical laws we are interested in are enforced by applying an extra, problem-specific, nonlinear activation, which is defined by the PDE in Eq. (1) (i.e., \( \tilde{u}_t + N(u) \)). We learn the shared network weights using a loss function consisting of two terms, which are associated with approximation errors in \( \tilde{u} \) and \( \tilde{f} \), respectively. Raissi et al. (2019) consider a loss of the form \( L := \alpha L_u + \beta L_f \), where \( \alpha, \beta \in \mathbb{R} \) are coefficients and \( L_u \) and \( L_f \) are defined as follows:
\[ L_u = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| u(x^i_u, t^i_u) - \tilde{u}(x^i_u, t^i_u; \theta) \right|^2 ; \quad L_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| \tilde{f}(x^i_f, t^i_f, \tilde{u}; \theta) \right|^2 \]
\( L_u \) enforces the initial and boundary conditions using a set of training data \( \{(x^i_u, t^i_u), u(x^i_u, t^i_u)\}_{i=1}^{N_u} \). The first element of the tuple is the input to the neural network \( \tilde{u} \) and the second element is the ground truth that the output of \( \tilde{u} \) attempts to match. We can collect this data from the specified initial and boundary conditions, since we know them a priori. Meanwhile, \( L_f \) minimizes the discrepancy between the governing equation \( f \) and the neural network's approximation \( \tilde{f} \). We evaluate the network at collocation points \( \{ (x^i_f, t^i_f), f(x^i_f, t^i_f) \}_{i=1}^{N_f} \). Note that here, the ground truth \( \{ f(x^i_f, t^i_f) \}_{i=1}^{N_f} \) consists of all zeros. We also refer to \( \frac{1}{N_f} \sum_{i=1}^{N_f} |\tilde{f}(x^i_f, t^i_f, \tilde{u}; \theta)| \) as the mean absolute residual (MAR): its value denotes how far the network is from satisfying the governing equation. Note that using this loss, i) no costly evaluations of the solution \( u(x, t) \) at collocation points are required to gather training data, ii) initial and boundary conditions are enforced using a training dataset that can easily be generated, and iii) the physical law encoded in the governing equation \( f \) in Eq. (1) is enforced by minimizing \( L_f \). In the original paper by Raissi et al. (2019), both loss terms have equal weight, i.e., \( \alpha = \beta = 1 \), and the combined loss term \( L \) is minimized.

**Interpolation and extrapolation.** For the rest of this paper, we refer to points \( (x^i, t^i) \) as interpolation points if \( t^i \in [0, T_{\text{train}}] \), and as extrapolation points if \( t^i \in (T_{\text{train}}, T_{\text{max}}] \) for \( T_{\text{max}} > T_{\text{train}} \). We are primarily interested in the \( L^2 \) error of the learned solution, i.e., in \( \| u(x^i, t^i) - \tilde{u}(x^i, t^i; \theta) \|_2 \), and in the \( L^2 \) relative error, which is the \( L^2 \) error divided by the norm of the function value at that point, i.e., \( \| u(x^i, t^i) \|_2 \). When we sample evaluation points from the extrapolation domain, we refer to the \( L^2 \) (relative) error as the (relative) extrapolation error. Similarly, we are interested in the (mean) absolute residual as defined above, i.e., in \( |\tilde{f}(x^i, t^i, \tilde{u}; \theta)| \).
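To make the two loss terms concrete, here is a minimal PyTorch sketch of \( L_u \) and \( L_f \) for the viscous Burgers' equation \( u_t + u u_x - \nu u_{xx} = 0 \); the network `u_net` and the sampled tensors are placeholder names, not the authors' implementation:

```python
import torch

def burgers_residual(u_net, x, t, nu):
    """PDE residual u_t + u*u_x - nu*u_xx at collocation points (x, t),
    with all derivatives obtained via automatic differentiation."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(torch.cat([x, t], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def pinn_loss(u_net, x_u, t_u, u_true, x_f, t_f, nu, alpha=1.0, beta=1.0):
    """L = alpha * L_u (initial/boundary data) + beta * L_f (mean squared residual)."""
    L_u = ((u_net(torch.cat([x_u, t_u], dim=1)) - u_true) ** 2).mean()
    L_f = (burgers_residual(u_net, x_f, t_f, nu) ** 2).mean()
    return alpha * L_u + beta * L_f
```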
For points sampled from the extrapolation domain, we refer to this as the extrapolation residual. In this paper, we are interested in the extrapolation performance of PINNs, by which we broadly mean the following questions: how quickly does the performance of a PINN deteriorate as we move away from the interpolation domain? What aspects of the model or underlying PDE affect this? When we speak of "near perfect" extrapolation, we therefore always mean the accuracy of the model on a bounded extrapolation domain, usually neighboring the interpolation domain. This is in line with Kim et al. (2020) and distinct from the question of whether MLPs more generally can extrapolate to arbitrary domains (Haley and Soloway, 1992; Cardell et al., 1994; Ziyin et al., 2020).

**PDEs considered.** We investigate the extrapolation capabilities of PINNs on a representative set of 7 PDEs, all of which are widely used as examples in the PINN literature (Basir, 2022; Raissi et al., 2019; Penwarden et al., 2023; Jagtap and Karniadakis, 2021). These include the Allen-Cahn equation, the viscous Burgers' equation, a heat equation, a diffusion equation, a diffusion-reaction equation, the Beltrami flow, and the non-linear Schrödinger equation. Details on all PDEs considered can be found in Appendix A.1.

3 Understanding Extrapolation Failures via Spectral Shifts

3.1 Effects of Model Size, Activation Functions, & Number of Training Samples

Before we begin our investigation of what determines extrapolation performance in PINNs, we identify several aspects of a model that do not have an effect. This will make our analysis in the second half of this section easier. To this end, we analyze the extrapolation errors and residuals which standard PINNs display for the Allen-Cahn equation, the viscous Burgers' equation, a diffusion equation, and a diffusion-reaction equation.

Figure 1: (a) $L^2$ relative extrapolation error of an MLP(5, 64) with tanh activation, trained on $[0, 0.5]$. (b) MAR for the same MLP.

PINN extrapolation performance depends on the underlying PDE. For each of the four PDEs introduced above, we train a 5-layer MLP with 64 neurons per layer and tanh activation on the interpolation domains specified for 50000 epochs using the Adam optimizer. As seen in Figure 1, the $L^2$ relative errors for the Burgers' equation and for the Allen-Cahn equation become significantly larger than for the diffusion and diffusion-reaction equations when we move from $t = 0.5$ to $t = 1$. The solution learned for the diffusion-reaction equation disagrees only minimally with the true solution, even at $t = 1$, which shows that for this particular PDE, PINNs can extrapolate almost perfectly well. More detailed results can be found in Appendix A.2.

Extrapolation performance is generally independent of model parameters. While we observe drastically different extrapolation behaviors depending on the underlying PDE as mentioned above, the extrapolation for a given PDE seems to be more or less independent of model parameters, such as number of layers or neurons per layer, activation function, number of samples, or training time. Once the chosen parameters allow the model to achieve a low error in the interpolation domain ($10^{-5}$ is a value commonly used for this in the literature [Raissi et al. (2019); Chen et al. (2023); Wang et al.
(2022); Han and Lee (2021)]), adding more layers, neurons, or samples, or training longer, does not seem to have an effect on the extrapolation error and MAR. These results allow us to focus our further analyses on a single architecture. Unless otherwise stated, we use an MLP with 5 layers of 64 neurons each and tanh activation, initialized with the commonly used Xavier normal initialization and trained for 50000 epochs using Adam.

3.2 Extrapolation in the presence of high frequencies

Recent literature has found that neural networks tend to be biased towards low-complexity solutions due to the implicit regularization inherent in their gradient descent learning processes (Neyshabur et al., 2014; Neyshabur, 2017). In particular, deep neural networks have been found to possess an inductive bias towards learning lower-frequency functions, a phenomenon termed the spectral bias of neural networks (Rahaman et al., 2019; Cao et al., 2019), which, for example, Bonfanti et al. (2023) suspect to be related to extrapolation failures in PINNs. They find evidence for this when considering time-independent PDEs.

Figure 2: For times $t = 0.25$ (top, interpolation) and $t = 0.99$ (bottom, extrapolation), we plot the reference and predicted solutions in the spatio-temporal (left) and Fourier (middle) domains for the Burgers' equation. The absolute difference in the Fourier spectra is plotted on the right.

Following this hypothesis, we would expect most of the extrapolation error to come from the higher frequencies: the predicted function might become smooth or flat in extrapolation, similar to what has been observed with training failures in interpolation (Basir, 2022). We plot both the reference solution and the predicted solution in the Fourier domain for all four of our PDEs, as well as the absolute difference between the two Fourier spectra of the reference and predicted solutions. Plots for the Burgers' equation are provided in Figure 2, while plots for the other PDEs are provided in Appendix A.3.

**High frequencies only account for a small fraction of extrapolation errors.** In all cases, the majority of the error in the Fourier domain is concentrated in the lower-frequency regions. While this is partially due to the fact that the low-frequency components of the solutions have larger magnitude, it suggests that in extrapolation, PINNs fail even to learn the low-frequency parts of the solution. Thus, the presence of high frequencies alone fails to explain the extrapolation failure of PINNs. We provide some additional evidence for this by studying the extrapolation behavior of multi-scale Fourier feature networks (Wang et al., 2020) in Appendix A.6. Even though these architectures were designed specifically to make learning higher frequencies easier, we find their extrapolation error to be at least as large as or larger than that of standard PINNs.

**PINNs can extrapolate well in the presence of high frequencies.** To isolate the effect that the presence of high frequencies alone has on extrapolation performance, we consider the following variation of the diffusion-reaction equation for \( x \in [-\pi, \pi] \) and \( t \in [0, 1] \):
\[ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + e^{-t} \left( \sum_{j=1}^{K} \frac{(j^2 - 1)}{j} \sin(jx) \right) \]
\[ u(x, 0) = \sum_{j=1}^{K} \frac{\sin(jx)}{j}, \quad u(-\pi, t) = u(\pi, t) = 0 \]
The reference solution is given by \( u(x, t) = e^{-t} \left( \sum_{j=1}^{K} \frac{\sin(jx)}{j} \right) \).
As with our other experiments, we use \( t \in [0, 0.5] \) as the temporal training domain and consider \( t \in (0.5, 1] \) as the extrapolation area. \( K \) here is a hyperparameter that controls the size of the spectrum of the solution. Note that for a fixed \( K \), the support of the Fourier spectrum of the reference solution never changes over time; only the amplitudes of the components are scaled down by an identical constant factor.

Figure 3: Mean \( L^2 \) relative interpolation and extrapolation errors, trained on \([0, 0.5]\). In (a), we plot this against the size of the spectrum, i.e., the parameter \( K \) in Equation (4), and in (b) against the speed of the decay of the amplitudes, i.e., the parameter \( M \) in Equation (6).

For various values of \( K \), we find that our trained PINNs are able to extrapolate well, as can be seen in Figure 3(a). For the sake of completeness, we also investigate the effect of the speed of decay of the amplitudes in the Fourier spectra. We train a PINN on the following variation of the diffusion-reaction equation:
\[ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + e^{-Mt} \left( \sum_{j \in \{1, 2, 3, 4, 8\}} \frac{(j^2 - M)}{j} \sin(jx) \right) \]
for \( x \in [-\pi, \pi] \) and \( t \in [0, 1] \), with the initial condition \( u(x, 0) = \sin(x) + \frac{\sin(2x)}{2} + \frac{\sin(3x)}{3} + \frac{\sin(4x)}{4} + \frac{\sin(8x)}{8} \) and the Dirichlet boundary condition \( u(-\pi, t) = u(\pi, t) = 0 \). The reference solution is
\[ u(x, t) = e^{-Mt} \left( \sin(x) + \frac{\sin(2x)}{2} + \frac{\sin(3x)}{3} + \frac{\sin(4x)}{4} + \frac{\sin(8x)}{8} \right) \]
with the same interpolation and extrapolation areas as before. Figure 3(b) shows the relative interpolation and extrapolation errors against increasing values of \( M \). We find that an increase in the speed of the exponential decay seems to increase the extrapolation error more than an increase in the size of the spectrum.

### 3.3 Spectral shifts

While the solutions to the Allen-Cahn equation and to the Burgers' equation do not exhibit exponentially fast changes in their amplitudes, they have Fourier spectra whose support shifts over time, unlike the diffusion and diffusion-reaction equations. We argue that PINNs struggle to extrapolate well when these spectral shifts in the true solution's Fourier spectrum are large.

**Weighted Wasserstein-Fourier distance.** To quantify the temporal shifts in the support of the Fourier spectrum, we introduce the *Weighted Wasserstein-Fourier distance* (WWF) between the normalized Fourier spectra of the PDE solution in two disjoint time domains. The WWF is based on the Wasserstein-Fourier distance, which compares the Fourier spectra of the solution function at two different points in time. Consider two discrete CDFs \( F_1, F_2 \) supported on the domain \( X \). The Wasserstein distance between \( F_1 \) and \( F_2 \) is defined as
\[ W(F_1, F_2) = \sum_{x \in X} |F_1(x) - F_2(x)|. \]
Given two discrete Fourier spectra \( f_1, f_2 \), the Wasserstein-Fourier distance is the Wasserstein distance between the spectra normalized to unit mass:
\[ \text{WF}(f_1, f_2) = W\left( \frac{f_1}{\|f_1\|_1}, \frac{f_2}{\|f_2\|_1} \right). \]
We now define the Weighted Wasserstein-Fourier distance of a solution \( f \) as
\[ \text{WWF}(f) := \sum_{s \in I} \sum_{t \in E} (T_{\text{max}} + s - t) \, W\left( \frac{f_s}{\|f_s\|_1}, \frac{f_t}{\|f_t\|_1} \right), \]
where \( I \) and \( E \) are the interpolation and extrapolation domains, respectively.
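A minimal numerical sketch of these quantities, assuming NumPy/SciPy, solution snapshots `u[t_idx, x_idx]` on a uniform spatial grid, and index sets `I_idx` and `E_idx` for the interpolation and extrapolation domains (all hypothetical names):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def normalized_spectrum(u_t):
    """Magnitude spectrum of one temporal snapshot, normalized to unit mass."""
    mag = np.abs(np.fft.rfft(u_t))
    return mag / mag.sum()

def wf_distance(u_s, u_t):
    """Wasserstein-Fourier distance between two snapshots: the 1-Wasserstein
    distance between their normalized magnitude spectra."""
    ps, pt = normalized_spectrum(u_s), normalized_spectrum(u_t)
    freqs = np.arange(len(ps))  # frequency bins as the common support
    return wasserstein_distance(freqs, freqs, u_weights=ps, v_weights=pt)

def wwf(u, times, I_idx, E_idx, T_max):
    """Weighted Wasserstein-Fourier distance between interpolation (I_idx)
    and extrapolation (E_idx) snapshots of a solution array u[t, x]."""
    total = 0.0
    for s in I_idx:
        for t in E_idx:
            total += (T_max + times[s] - times[t]) * wf_distance(u[s], u[t])
    return total
```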
We present plots of the pairwise Wasserstein-Fourier distances for each \( t_1, t_2 \in [0, T_{\text{max}}] \) in Appendix A.4. The Wasserstein-Fourier distance of the true solution is zero everywhere for both the diffusion and diffusion-reaction equations, leading to a Weighted Wasserstein-Fourier distance of zero, which reflects the constant support of the spectra. In contrast, the pairwise distance matrices for the Burgers' and Allen-Cahn equations exhibit a block-like structure, with times in disjoint blocks exhibiting pronouncedly different distributions in the amplitudes of their respective Fourier spectra. These shifts are not captured by the learned solution, leading to large \( L^2 \) errors.

Figure 4: Extrapolation error vs. spectral shift for (a) the Burgers' equation and (b) the Allen-Cahn equation. For both equations, we train 50 PINNs on a variety of different PDE parameters. More extreme spectral shifts in the underlying solution are correlated with poorer extrapolation performance.

The Weighted Wasserstein-Fourier distance allows us to capture the effects that other properties of the underlying PDE have on extrapolation performance. To illustrate this, we train PINNs for 50 different Burgers' equations, each with a different viscosity parameter $\nu$, equally spaced from 0.001 to 0.1, and for 50 variants of the Allen-Cahn equation with varying values of $d$, equally spaced from 0.0001 to 0.1. We find that different PDE coefficients lead to large differences in extrapolation performance and that this relationship is moderated quite heavily through shifts in the underlying Fourier spectra. Figure 4 plots the WWF distance between the spectra against the relative $L^2$ error in extrapolation. PDE coefficients that induce larger shifts in the spectra correspond to overall worse extrapolation performance.

3.4 HIGHER-DIMENSIONAL AND MORE COMPLEX PDES

Our findings so far demonstrate that the extrapolation performance of PINNs depends heavily on the presence of spectral shifts in the underlying PDE. We conclude this section by showing that this remains true for higher-dimensional and more complex PDEs. To this end, we train PINNs on the Beltrami flow and the non-linear Schrödinger equation. The reference solution to the non-linear Schrödinger equation exhibits significant shifts in the spectra, with a WWF distance between the interpolation and extrapolation domains of 0.034 and 0.036 in the real and imaginary components, respectively. Based on our results for lower-dimensional PDEs, we expect extrapolation performance to be poor. Our experimental results agree: while the PINN achieves a small interpolation error ($10^{-5}$), it exhibits poor extrapolation behavior, with maximum $L^2$ relative errors of 0.94 and 4.27 in the real and imaginary components, respectively. On the other hand, the Beltrami flow does not exhibit a spectral shift over time for any of the solution functions. The PINN achieves a similarly small interpolation error ($10^{-5}$) and produces very small $L^2$ relative extrapolation errors of 0.009, 0.013, 0.006, and 0.008 in $u, v, w,$ and $p$, respectively. This is in line with what we would expect based on the lower-dimensional examples considered so far, and is in fact comparable to the diffusion-reaction equation.
4 MITIGATING EXTRAPOLATION FAILURES WITH TRANSFER LEARNING

Finally, we show that transfer learning from PINNs trained across a family of similar PDEs can improve extrapolation performance. Empirically, in other domains, transfer learning across multiple tasks has been effective in improving generalization (Dong et al., 2015; Luong et al., 2016). Here, we perform transfer learning following the procedure outlined in Pellegrin et al. (2022), where we initially train a PINN with multiple outputs on a sample from a family of PDEs (e.g., the Burgers' equation with varying values of the viscosity) and transfer to a new unseen PDE in the same family (e.g., the Burgers' equation with a different viscosity) by freezing all but the last layer and training with the loss this new PDE induces. We note that Pellegrin et al. (2022) only consider transfer learning for linear PDEs by analytically computing the final PINN layer, but we extend their method to nonlinear PDEs by performing gradient descent to learn the final layer instead.

4.1 TRANSFER LEARNING CAN HELP WITH SPECTRAL SHIFTS

We perform transfer learning from a collection of Burgers' equations with varying viscosities ($\nu/\pi \in \{0.01, 0.05, 0.1\}$) to a new Burgers' equation ($\nu/\pi = 0.075$). In the first set of experiments, we train on equations in the domain $t \in [0, 0.5]$, and in the second set, we train on equations in the domain $t \in [0, 1]$. Similarly, for the non-linear Schrödinger equation, we transfer learn on equations with slightly varying initial conditions ($h(x, 0) \in \{1.95\,\text{sech}(x), 2.05\,\text{sech}(x), 2.1\,\text{sech}(x)\}$). We evaluate on a new non-linear Schrödinger equation with initial condition $h(x, 0) = 2\,\text{sech}(x)$. Our results are reported in Table 1, with 15 runs for each setting, changing only the random seed.

| Setting | Burgers' Eq. | Schrödinger (real) | Schrödinger (imag.) |
| Baseline | 0.383 ± 0.143 | 0.944 ± 0.212 | 4.276 ± 0.538 |
| Transfer (half) | 0.189 ± 0.116 | 0.630 ± 0.227 | 2.963 ± 0.599 |
| Transfer (full) | 0.072 ± 0.065 | 0.423 ± 0.201 | 2.074 ± 0.526 |

Table 1: $L^2$ relative extrapolation errors for the baseline (no transfer learning), transfer learning from $t \in [0, 0.5]$ (half), and transfer learning from $t \in [0, 1]$ (full). Values obtained from 15 MLPs per setting.

Compared to the baseline (no transfer learning), we find an average reduction in extrapolation error of 82% when transfer learning from the full domain, and of 51% when transfer learning from half the domain, i.e., with $t \in [0, 0.5]$ for the Burgers' equation. The improvements for the non-linear Schrödinger equation are similar, although slightly smaller. Transfer learning from the full domain reduces the extrapolation error in the real (imaginary) component of the solution by 55% (51%). Transfer learning from half the domain still reduces it by 32% (30%). Details on the same transfer learning experiments for the Allen-Cahn equation, as well as visualizations of the learned solutions, can be found in subsection A.8 in the appendix.

Why does transfer learning help? By transfer learning from other PDEs that exhibit similar spectral shifts, we hope that the model can learn to recognize PDEs that exhibit these shifting spectra and modify its predictions accordingly (a minimal sketch of the last-layer transfer step is given below).
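The last-layer transfer step described above might look like the following minimal PyTorch sketch; the `pinn` module, its `head` attribute, and `pinn_loss` are hypothetical names standing in for whatever the actual pipeline uses:

```python
import torch

def transfer_to_new_pde(pinn, pinn_loss, new_pde_batch, steps=5000, lr=1e-3):
    """Adapt a pretrained multi-output PINN to a new PDE of the same family
    by freezing everything except the final linear layer (`pinn.head`)."""
    for param in pinn.parameters():
        param.requires_grad = False
    for param in pinn.head.parameters():        # hypothetical final layer
        param.requires_grad = True

    opt = torch.optim.Adam(pinn.head.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = pinn_loss(pinn, *new_pde_batch)  # loss induced by the new PDE
        loss.backward()
        opt.step()
    return pinn
```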
As we freeze all but the last layer when performing transfer learning, one can think of this as projecting the new PDE onto a shared feature space, with one of these features potentially capturing the degree to which the underlying spectra shift over time. Given that the initial training is conducted on a larger temporal domain, the hope is that even if the model is trained on a new PDE only from $t = 0$ to $t = 0.5$, its understanding of frequency shifts from similar PDEs (for which it knows how the spectra evolve/shift from $t = 0$ to $t = 1$) will allow it to extrapolate better than it otherwise would. To give some evidence supporting this intuition: our transfer learning experiments use Burgers' equations with viscosities ($\nu$) similar to that of the target PDE, and thus with similar spectral shifts. We find that additional transfer learning on more PDEs, with viscosities that are further from that of the target PDE, seems to make a minimal impact.

Motivated by Kim et al. (2020), we can also examine the interpolation and extrapolation loss of each run, as well as its decomposition into domain and boundary terms (recall Section 2), using the example of the Burgers' equation in Figure 5. We observe that transfer learning from PDEs on the whole domain ($t \in [0, 1]$) substantially improves results compared to the baseline. However, we find that transfer learning even when the model does not see the extrapolation domain during initial training (e.g., $t \in [0, 0.5]$) also improves performance over the baseline, though less than transfer learning from the full domain. We find the reverse in interpolation: our baseline model has the lowest interpolation error, followed by half-domain transfer learning, and then full-domain transfer learning, which performs the worst in interpolation. This may suggest that transfer learning enforces stronger inductive biases from the wider PDE family, which in turn improve extrapolation performance.

4.2 Without spectral shifts, transfer learning yields no improvements

We repeat the experiments in the previous subsection with PDEs that exhibit no spectral shifts, to test whether transfer learning can further boost extrapolation performance. We transfer learn on Diffusion-Reaction equations with different amplitude parameters (recall Equation (6); here $M \in \{0.5, 2, 3\}$) and evaluate on a Diffusion-Reaction equation with amplitude parameter $M = 1$. As a high-dimensional analogue, we transfer learn on the Beltrami Flow PDE with $Re \in \{0.95, 1.05, 1.1\}$ and evaluate on $Re = 1$. We present our results in Table 2.

| Setting | Diff.-Reac. | Beltrami (u) | Beltrami (v) | Beltrami (w) | Beltrami (p) |
|------------------|-------------|--------------|--------------|--------------|--------------|
| Baseline | 0.038 ± 0.021 | 0.009 ± 0.004 | 0.013 ± 0.006 | 0.006 ± 0.003 | 0.008 ± 0.004 |
| Transfer (half) | 0.051 ± 0.033 | 0.011 ± 0.005 | 0.009 ± 0.007 | 0.007 ± 0.003 | 0.006 ± 0.003 |
| Transfer (full) | 0.043 ± 0.024 | 0.008 ± 0.004 | 0.012 ± 0.007 | 0.006 ± 0.005 | 0.007 ± 0.005 |

Table 2: $L^2$ extrapolation errors for the baseline (no transfer learning), transfer learning from $t \in [0, 0.5]$ (half), and transfer learning from $t \in [0, 1]$ (full). Values obtained from 15 MLPs per setting.

Unlike with the PDEs in the previous section, which showed a significant spectral shift, we find no improvement in extrapolation performance for the Diffusion-Reaction equation or the Beltrami Flow after transfer learning.
In line with our reasoning for why transfer learning helps with spectral shifts, we suspect that because there are no spectral shifts in any of the PDEs considered, there is nothing for the model to pick up while transfer learning. Similarly, in the absence of spectral shifts, stronger inductive biases need not improve extrapolation, and in fact might make it harder. 5 Discussion In this paper, we revisited PINNs’ extrapolation behavior and pushed back against claims previously made in the literature. In our experiments on the effects of different architecture choices, we found evidence against a double-descent phenomenon for the extrapolation error, which Zhu et al. (2022) speculated might exist. We also saw that PINNs do not necessarily perform poorly in extrapolation, as was previously suspected (Kim et al., 2020; Kapoor et al., 2023). For some PDEs, near perfect extrapolation is possible. Following this, we examined the solution space learned by PINNs in the Fourier domain and argued that extrapolation performance depends on spectral shifts in the underlying PDE. We showed that the presence of high frequencies in the solution function has minimal effect on extrapolation, pushing back against Bonfanti et al. (2023), and demonstrated that PINNs’ extrapolation errors can be predicted from the Fourier spectra of the solution function. To this end, we introduced the Weighted Wasserstein-Fourier distance between interpolation and extrapolation domains. Finally, we provided the first investigation of the effects of transfer learning on extrapolation behavior in PINNs and demonstrated that transfer learning can help mitigate the effects of spectral shifts. Limitations. There are several avenues for further investigation. We believe that extending our analysis from standard PINNs to other architectures or sampling methods is a promising direction. Future research might, for example, try to answer whether some PINN variants can deal better with spectral shifts than others and why. Furthermore, in the present work, we only examined the two most common activation functions, sin and tanh, and found them to lead to similar model performance in extrapolation. While this is in line with experiments presented in related works Kim et al. (2020), investigating activation functions specifically introduced for improved extrapolation performance in MLPs, such as Ziyin et al. (2020), could also prove insightful. Ultimately, we believe that a theoretical investigation of PINNs’ difficulties with spectral shifts in the fashion of Wang et al. (2020) could significantly deepen our understanding of these models’ capabilities. REFERENCES Cosmin Anitescu, Elena Atroshchenko, Naif A. Alajlan, and Timon Rabczuk. Artificial neural network methods for the solution of second order boundary value problems. *Computers, Materials & Continua*, 2019. Shamsulhaq Basir. Investigating and mitigating failure modes in physics-informed neural networks (pinn). *arXiv preprint arXiv:2209.09988*, 2022. Andrea Bonfanti, Roberto Santana, Marco Ellero, and Babak Gholami. On the hyperparameters influencing a pinn’s generalization beyond the training domain, 2023. Shengze Cai, Zhicheng Wang, Sifan Wang, Paris Perdikaris, and George Em Karniadakis. Physics-informed neural networks for heat transfer problems. *Journal of Heat Transfer*, 143(6):060801, 2021. Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, and Quanquan Gu. Towards understanding the spectral bias of deep learning. *arXiv preprint arXiv:1912.01198*, 2019. 
N Scott Cardell, Wayne Joerding, and Ying Li. Why some feedforward networks cannot learn some polynomials. *Neural computation*, 6(4):761–766, 1994.

Elsa Cazelles, Arnaud Robert, and Felipe Tobar. The wasserstein-fourier distance for stationary time series. *IEEE Transactions on Signal Processing*, 69:709–721, 2020.

Miaomiao Chen, Ruiping Niu, and Wen Zheng. Adaptive multi-scale neural network with resnet blocks for solving partial differential equations. *Nonlinear Dynamics*, 111(7):6499–6518, 2023.

Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and Francesco Piccialli. Scientific machine learning through physics–informed neural networks: Where we are and what's next. *Journal of Scientific Computing*, 92(3):88, 2022.

David Davini, Bhargav Samineni, Benjamin James Thomas, Huong Tran, and Cherlin Zhu. Using physics-informed regularization to improve extrapolation capabilities of artificial neural networks. In *2022 Virtual Joint Mathematics Meetings (JMM 2022)*. AMS.

Nguyen Anh Khoa Doan, Wolfgang Polifke, and Luca Magri. Physics-informed echo state networks for chaotic systems forecasting. In *International Conference on Conceptual Structures*, 2019.

Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. Multi-task learning for multiple language translation. In *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 1723–1732, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1166. URL: https://aclanthology.org/P15-1166

Pamela J Haley and Donald Soloway. Extrapolation limitations of multilayer feedforward neural networks. In *Proceedings 1992 IJCNN International Joint Conference on Neural Networks*, volume 4, pages 25–30. IEEE, 1992.

Jihun Han and Yoonsang Lee. Hierarchical learning to solve partial differential equations using physics-informed neural networks. *arXiv preprint arXiv:2112.01254*, 2021.

Ameya D Jagtap and George E Karniadakis. Extended physics-informed neural networks (xpinns): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. In *AAAI spring symposium: MLPS*, volume 10, 2021.

Taniya Kapoor, Abhishek Chandra, Daniel M Tartakovsky, Hongrui Wang, Alfredo Nunez, and Rolf Dollevoet. Neural oscillators for generalization of physics-informed machine learning. *arXiv preprint arXiv:2308.08989*, 2023.

Jungeun Kim, Kookjin Lee, Dongeun Lee, Sheo Yon Jin, and Noseong Park. Dpm: A novel training method for physics-informed neural networks in extrapolation, 2020.
krIOxfqsOh
What is the architecture of MaskMA? Is it also based on a Decision Transformer? I didn't find value as an input in the transformer architecture shown in Figure 2. How did you train MaskMA? Is it similar to supervised learning in a Decision Transformer?
Masked Pretraining for Multi-Agent Decision Making

Anonymous authors Paper under double-blind review

Abstract

Building a single generalist agent with zero-shot capability has recently sparked significant advancements in decision-making. However, extending this capability to multi-agent scenarios presents challenges. Most current works struggle with zero-shot capabilities, due to two challenges particular to the multi-agent setting: a mismatch between centralized pretraining and decentralized execution, and varying agent numbers and action spaces, which make it difficult to create generalizable representations across diverse downstream tasks. To overcome these challenges, we propose a Masked pretraining framework for Multi-agent decision making (MaskMA). This model, based on the transformer architecture, employs a mask-based collaborative learning strategy suited for decentralized execution with partial observation. Moreover, MaskMA integrates a generalizable action representation by dividing the action space into actions toward self-information and actions related to other entities. This flexibility allows MaskMA to tackle tasks with varying agent numbers and thus different action spaces. Extensive experiments in SMAC reveal that MaskMA, with a single model pretrained on 11 training maps, can achieve an impressive 77.8% zero-shot win rate on 60 unseen test maps by decentralized execution, while also performing effectively on other types of downstream tasks (e.g., varied policies collaboration and ad hoc team play).

1 Introduction

A foundation model is a large model trained on vast data that can readily generalize across various downstream tasks in natural language processing, a capability often referred to as emergent behavior. Powerful foundation models (Ouyang et al., 2022; Touvron et al., 2023; Brown et al., 2020; Ramesh et al., 2022; Rombach et al., 2022; Radford et al., 2021) bring artificial intelligence techniques into people's daily lives, serving as assistants that boost the development of various industries. The reinforcement learning community (Chen et al., 2021; Carroll et al.; Liu et al.; Janner et al., 2021; 2022) has shown a growing interest in designing simple yet effective foundation models and training strategies tailored to decision-making. A natural follow-up question is how to build a foundation model that serves as a single generalist agent with strong zero-shot capability for multi-agent decision-making.

Compared to single-agent scenarios, directly utilizing transformers for centralized pretraining in multi-agent settings encounters two primary challenges. (1) Mismatch between centralized pretraining and decentralized execution. Multi-agent decision-making typically follows a centralized-training-with-decentralized-execution approach. However, transformers, as a centralized training architecture, utilize all units as inputs. This misaligns with the decentralized execution phase, where each agent's perception is limited to only nearby units, significantly impacting performance. (2) Varying numbers of agents and actions. Downstream tasks have different numbers of agents, resulting in varying action spaces. Most existing methods treat multi-agent decision-making as a sequence modeling problem and directly employ transformer architectures, often overlooking or inadequately addressing the aforementioned challenges. For instance, MADT (Meng et al., 2021) circumvents the mismatch challenge by transforming multi-agent pretraining data into single-agent pretraining data and adopting decentralized pretraining with decentralized execution, but this comes at the expense of not fully utilizing the information from all agents during the pretraining stage. Regarding the issue of different action spaces caused by varying agent numbers, MADT takes a simplistic approach by setting a large action space and muting the unavailable actions using an action mask. However, this method suffers from poor generalization, because the same component of the action vector represents different physical meanings in tasks with different numbers of agents.
For instance, MADT (Meng et al., 2021) circumvents the mismatch challenge by transforming multi-agent pretraining data into single-agent pretraining data and adopting decentralized pretraining with decentralized execution, but this comes at the expense of not fully utilizing the information from all agents during the pretraining stage. Regarding the issue of different action spaces caused by varying agent numbers, MADT takes a simplistic approach by setting a large action space and muting the unavailable actions using an action mask. However, this method Figure 1: **Win rate on training and test maps.** The dashed line (blue) separates the 11 training maps on the left from the 60 test maps on the right. The orange line represents the performance difference between MaskMA and MADT, showcasing how MaskMA outperforms MADT by up to 92.97%. suffers from poor generalization because the same component of the action vector represents different physical meanings in tasks with different numbers of agents. In response, we propose two scalable techniques: a Mask-based Collaborative Learning Strategy (MCLS) and a Generalizable Action Representation (GAR). The two techniques form the basis of a new masked pretraining framework for multi-agent decision-making, named MaskMA. To address the first challenge, we present a transformer with MCLS by incorporating random masking into the attention matrix of the transformer, effectively reconciling the discrepancy between centralized pretraining and partial observations and bolstering the model’s generalization capabilities. To handle the second challenge, MaskMA integrates GAR by categorizing actions into those directed toward the environment and those involving interactions with other units. The former relies solely on self-information, and the latter depends on their interrelationships, respectively. This approach allows MaskMA to excel across tasks with varying agent numbers and action spaces. We evaluate MaskMA’s performance using the StarCraft Multi-Agent Challenge (SMAC) benchmark. To validate the potential of zero-shot, we provide a challenging setting, using only 11 maps for training and 60 maps for testing. Extensive experiments demonstrate that our model significantly outperforms the previous state-of-the-art in zero-shot scenarios. We also provide various downstream tasks to further evaluate the strong generalization of MaskMA, including varied policies collaboration, teammate malfunction, and ad hoc team play. This work lays the groundwork for further advancements in multi-agent fundamental models, with potential applications across a wide range of domains. Our main contributions are as follows: 1. We introduce the masked pretraining framework for multi-agent decision-making (MaskMA), which pre-trains a transformer architecture with a mask-based collaborative learning strategy (MCLS) and a generalizable action representation (GAR). 2. To test MaskMA’s performance, we set up 1) a challenging zero-shot task: training on only 11 maps and testing on 60 different maps in the SMAC (Samvelyan et al., 2019), and 2) three downstream tasks including varied policies collaboration, teammate malfunction, and ad hoc team play. 3. MaskMA is the first multi-agent pretraining model for decision-making with strong zero-shot performance. MaskMA, using a single model pre-trained on 11 training maps, achieves an impressive 77.8% zero-shot win rate on 60 unseen test maps by decentralized execution. 
2 RELATED WORK

**Decision Making as a Sequence Modeling Problem and Pretraining** In recent years, the integration of sequence modeling into decision-making paradigms has emerged as a promising avenue for enhancing reinforcement learning strategies. DT (Chen et al., 2021) casts reinforcement learning as a sequence modeling problem conditioned on return-to-go, using a transformer to generate optimal actions. MaskDP (Liu et al.) utilizes autoencoders on state-action trajectories, learning the environment's dynamics by masking and reconstructing states and actions. Uni[MASK] (Carroll et al.) expresses various tasks as distinct masking schemes in sequence modeling, using a single model trained with randomly sampled maskings. In this paper, we explore the design of sequences in MARL and how it can be made compatible with the mask-based collaborative learning strategy.

Figure 2: MaskMA. MaskMA employs the transformer architecture combined with a generalizable action representation, trained using a mask-based collaborative learning strategy. It effectively generalizes skills and knowledge from training maps to various downstream tasks, including unseen maps, varied policies collaboration, teammate malfunction, and ad hoc team play.

**MARL as a Sequence Modeling Problem** Several recent works have collectively advanced the understanding of MARL as a sequence modeling problem. MADT (Meng et al., 2021) introduces the Decision Transformer (Chen et al., 2021) into MARL, significantly improving sample efficiency and achieving strong performance in both few-shot and zero-shot cases in SMAC. MAT (Wen et al., 2022) leverages an encoder-decoder architecture, incorporating the multi-agent advantage decomposition theorem to reduce the joint policy search problem to a sequential decision-making process. Tseng et al. (2022) utilize the Transformer architecture and propose a method that identifies and recombines optimal behaviors through a teacher policy. ODIS (Zhang et al., 2023) trains a state encoder and an action decoder to extract task-invariant coordination skills from offline multi-task data. In contrast, our proposed MaskMA adapts the Transformer architecture to MARL by designing a sequence of inputs and outputs for a generalizable action representation. This approach offers broad generalizability across varying action spaces and various downstream tasks.

**Action Representation** Recent works have explored semantic action representations in multi-agent environments. ASN (Wang et al.) focuses on modeling the effects of actions by encoding their semantics to understand the consequences of agent actions and improve coordination among agents. UPDeT (Hu et al., 2021) employs a policy decoupling mechanism that separates the learning of local policies for individual agents from the coordination among agents using transformers. In contrast, MaskMA emphasizes sequence modeling and masking strategies, focusing on the correlation between agents taking actions. While UPDeT concentrates on policy decoupling for improved coordination among agents and ASN is centered on modeling the effects of actions and their interactions in multi-agent environments, MaskMA aims to learn more generalizable skills from training maps, which can be applied to a wide range of downstream tasks. This unique approach allows MaskMA to excel in scenarios involving varied policies collaboration, teammate malfunction, and ad hoc team play.
3 METHOD

To achieve zero-shot generalization in multi-agent decision-making tasks, where agents need to cooperate and learn effective strategies to adapt to various scenarios, we propose MaskMA, a masked pretraining framework for multi-agent decision-making. MaskMA leverages a transformer with a generalizable action representation to capture the underlying correlations among agents and their actions while maintaining adaptability to dynamic scenarios. Agents are subject to partial observation in multi-agent tasks, i.e., each agent has limited sight and can only observe part of the other agents and units (e.g., enemies to defeat) in the environment. Existing works, such as those proposed in (Liu et al.) and (Hu et al., 2021), typically train each agent's policy independently. Specifically, the input to each agent's policy is its own observation. Such an independent learning pipeline leads to a computational complexity of $O(N^3)$ w.r.t. the number of agents $N$. To address these challenges, we introduce Mask-based Collaborative Learning, which employs random masking to train the policies collaboratively, aligning well with partial observation.

Table 1: Win rate on training maps. The offline datasets consist of 10k or 50k expert trajectories per map collected by specific expert policies. With the mask-based collaborative learning strategy, MaskMA consistently demonstrates high performance in both centralized execution (CE) and decentralized execution (DE) settings. Furthermore, MaskMA's generalizable action representation allows it to easily adapt and converge on maps with diverse characteristics. In contrast, MADT struggles to handle different action spaces and achieves a win rate of only 51.78% even after extensive training.

| Map_name | # Episodes | Return Distribution | Ours (CE) | Ours (DE) | MADT (DE) |
|--------------|------------|---------------------|------------|------------|------------|
| 3s_vs_5z | 50k | 19.40±1.89 | 85.94±3.49 | 82.81±7.81 | 73.44±3.49 |
| 3s5z | 10k | 18.83±2.48 | 98.44±1.56 | 99.22±1.35 | 15.62±6.99 |
| 1c3s5z | 10k | 19.51±1.40 | 94.53±4.06 | 95.31±1.56 | 54.69±8.41 |
| 3s5z_vs_3s6z | 10k | 19.69±1.27 | 85.94±6.44 | 85.16±5.58 | 14.84±9.97 |
| 5m_vs_6m | 10k | 18.37±3.69 | 86.72±1.35 | 84.38±4.94 | 85.94±5.18 |
| 8m_vs_9m | 10k | 19.12±2.57 | 88.28±6.00 | 86.72±4.06 | 87.50±2.21 |
| MMM2 | 50k | 18.68±3.42 | 92.97±2.59 | 86.72±4.62 | 62.50±11.69 |
| 2c_vs_64zg | 10k | 19.87±0.48 | 99.22±1.35 | 92.97±2.59 | 34.38±9.11 |
| corridor | 10k | 19.44±1.61 | 96.88±3.83 | 94.53±2.59 | 21.88±11.48 |
| 6h_vs_8z | 10k | 18.72±2.33 | 75.00±5.85 | 76.56±6.44 | 27.34±6.77 |
| bane_vs_bane | 10k | 19.61±1.26 | 96.09±2.59 | 98.44±1.56 | 91.41±4.62 |
| average | ~ | 19.20±2.04 | 90.91±3.56 | 89.35±3.92 | 51.78±7.27 |

### 3.1 Formulation

We adopt a decentralized partially observable Markov decision process (Oliehoek & Amato, 2015) to define a cooperative multi-agent task, denoted as $G = \langle S, U, A, P, O, r, \gamma \rangle$. Here $S$ represents the global state of the environment, and $U = \{u_1, u_2, ..., u_N\}$ denotes the set of $N$ units, where the first $M$ units are the agents controlled by the policy and the remaining $N - M$ units are uncontrolled units in the environment. $A = A_1 \times A_2 \times ... \times A_M$ is the action space of the controllable units. At time step $t$, each agent $u_i \in \{u_1, u_2, ..., u_M\}$ selects an action $a_i \in A_i$, forming a joint action $a \in A$.
The joint action $a$ at state $s \in S$ triggers a transition of $G$ according to the transition function $P(s' | s, a) : S \times A \times S \rightarrow [0, 1]$. All agents share a reward function $r(s, a) : S \times A \rightarrow \mathbb{R}$, and $\gamma \in [0, 1]$ denotes the discount factor. We consider a partially observable setting in which each agent $u_i$ makes individual observations $o_i$ according to the observation function $o_i = O(s, u_i)$.

### 3.2 Mask-based Collaborative Learning

We utilize a standard causal transformer with only encoder layers as our model backbone. The input is the recent $L$ global states $s^{t-L+1}, s^{t-L+2}, ..., s^t$. We define $s^t = \{s^t(u_1), s^t(u_2), ..., s^t(u_N)\}$, i.e., $s^t$ is the union of the states of all units at the $t$-th time step. At the input, the state $s^t(u_i)$ of each unit $u_i$ at each time step $t$ corresponds to a token, resulting in $L \times N$ tokens in total. Note that $s^t(u_i)$ only contains the state of the entity itself and does not include any information about other entities. For example, in SMAC, $s^t(u_i)$ includes unit type, position, health, shield, and so on. We define the local observation $o^t_i$ of each unit $u_i$ as the states of all units observed by unit $u_i$ at the $t$-th step, namely $o^t_i = \{s^t(u_j) \mid j \in p^t_i\}$, with $p^t_i$ denoting the indexes of the units observable to $u_i$. Previous methods independently learn the policy of each unit $u_i$ with the corresponding $o^t_i$ as input. On the contrary, in this paper, we propose to randomly mask part of the units in $s^t$ and collaboratively learn the policies of the unmasked units. Formally, we randomly select part of the units in $s^t$ for each step $t$ of the $L$ input steps of states, represented by $\{s^t(u_j) \mid j \in m^t\}$, and learn the policies of the units indexed by $m^t$ with supervised learning. Specifically, we utilize the attention matrix to implement mask-based collaborative learning. We define the original attention mask matrix $m_o$, the mask matrix $m_r$ whose elements have a certain probability of being 1, the final mask matrix $m$ used by MaskMA, as well as some intermediate matrices $m_1, m_2, R$ and $J_2$. The shape of these mask matrices is $(LN \times LN)$, corresponding to the $L \times N$ input tokens. We proceed with the following steps to obtain $m$.

Table 2: **Win rate on test maps.** We assessed the performance of MaskMA and other baseline models on a collection of 60 unseen test maps. These models were trained using a set of 11 training maps. The term "Entity" denotes the number of entities present in each individual map, while "Map Numbers" represents the number of maps that fulfill certain conditions. The results demonstrate that MaskMA is an excellent zero-shot learner.

| Entity | Map Numbers | Ours (CE) | Ours (DE) | MADT (DE) |
|--------|-------------|------------|------------|------------|
| ≤ 10 | 23 | 76.26±3.30 | 74.38±3.57 | 43.55±3.94 |
| 10 ~ 20 | 22 | 83.81±2.85 | 80.08±2.98 | 46.77±3.67 |
| > 20 | 15 | 79.01±5.02 | 79.48±3.84 | 39.53±3.61 |
| All | 60 | 79.71±3.56 | 77.75±3.42 | 43.72±3.76 |

For multi-agent sequential modeling, the mask is causal in the timestep dimension and non-causal within each timestep. Therefore, we have \( m_1 = \text{Diag}(J_1, J_1, ..., J_1) \), where \( J_1 \) is an \( N \times N \) matrix filled with ones, and \( \text{Diag} \) constructs the block-diagonal matrix \( m_1 \).
Then we get \( m_2 = \text{Tri}(J_2) \), where \( J_2 \) is an \( LN \times LN \) matrix filled with ones, and \( \text{Tri} \) extracts the lower-triangular part. Finally, we get \( m_o = m_1 \lor m_2 \). Defining the mask ratio as \( r \), we generate the mask matrix \( m_r = R \geq r \) elementwise, where \( R \) is a matrix whose elements are sampled uniformly from \( [0, 1) \). We then obtain the final mask matrix \( m = m_o \land m_r \) (see the code sketch following Sec. 3.3). We explore different types of masks, including a set of fixed mask ratios, an environment mask, and random mask ratios chosen from \( (0, 1) \) for the units at each time step. We observe that the random mask strategy, which encompasses different fixed ratios and mask types, leads to the acquisition of meaningful skills and knowledge applicable to various downstream tasks.

**Execution.** We can efficiently switch between centralized and decentralized execution by adjusting the attention mask matrix \( m \): for decentralized execution we alter \( m \) so that each agent only attends to surrounding agents during self-attention, while for centralized execution we set \( m \) to \( m_o \).

### 3.3 Generalizable Action Representation

We harness the transformer's capability to handle variable-length token sequences, i.e., the architecture of MaskMA naturally generalizes to tasks with variable numbers of agents. However, most multi-agent tasks involve actions that represent interactions among units, e.g., healing and attacking in SMAC, so the action space also grows with the number of units. We propose the Generalizable Action Representation (GAR) to enable MaskMA to deal with action spaces that vary with the unit number. Given an action \( a_i^t \) that involves interaction between two units \( u_i \) and \( u_j \), we define \( u_i \) as the executor of \( a_i^t \) and \( u_j \) as the receiver. The embedding \( E(a_i^t) \) of \( a_i^t \) is defined as \( E(a_i^t) = h_i^t \oplus h_j^t \), where \( h_i^t \) and \( h_j^t \) are the output embeddings of \( u_i \) and \( u_j \) from the encoder, and \( \oplus \) denotes concatenation. With \( E(a_i^t) \) defined above, we generate the logits of interactive actions by \( FC(E(a_i^t)) \), with \( FC \) denoting a fully-connected layer, and use \( FC(h_i^t) \) for actions that do not involve interaction. These logits are then combined and fed into a softmax function to obtain the final action.
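To make the two components above concrete, below is a minimal PyTorch sketch of the mask construction (Sec. 3.2) and the GAR logits (Sec. 3.3). The tensor shapes, function names, and the per-element random masking are our reading of the text rather than an exact transcription of the implementation.

```python
import torch

def build_mask(L, N, mask_ratio=None):
    """Sketch of the MaskMA attention mask (Sec. 3.2); True = attention allowed."""
    # m1 = Diag(J1, ..., J1): block-diagonal, so units within the same
    # timestep attend to each other (non-causal within a timestep).
    J1 = torch.ones(N, N)
    m1 = torch.block_diag(*([J1] * L)).bool()
    # m2 = Tri(J2): lower-triangular, causal across the L*N token sequence.
    m2 = torch.ones(L * N, L * N).tril().bool()
    m_o = m1 | m2
    # m_r = (R >= r): random keep-mask; drawing r itself uniformly from
    # (0, 1) corresponds to the "Random (0, 1)" setting in Table 7.
    if mask_ratio is None:
        mask_ratio = torch.rand(()).item()
    m_r = torch.rand(L * N, L * N) >= mask_ratio
    # Centralized execution uses m_o directly; decentralized execution would
    # instead restrict m to each agent's locally visible units.
    return m_o & m_r

def gar_logits(h, i, fc_self, fc_inter):
    """Sketch of the Generalizable Action Representation (Sec. 3.3).

    h: (N, d) encoder outputs for all N units at one timestep; i is the
    executor index. Non-interactive logits come from FC(h_i); the logit for
    an interactive action toward receiver u_j comes from FC(h_i concat h_j).
    """
    h_i = h[i]                                        # (d,) executor embedding
    self_logits = fc_self(h_i)                        # (num_self_actions,)
    pairs = torch.cat([h_i.expand_as(h), h], dim=-1)  # (N, 2d): h_i concat h_j
    inter_logits = fc_inter(pairs).squeeze(-1)        # (N,): one logit per receiver
    return torch.cat([self_logits, inter_logits])     # softmax gives the policy
```

With, e.g., `fc_self = nn.Linear(d, num_self_actions)` and `fc_inter = nn.Linear(2 * d, 1)`, the returned logit vector has length `num_self_actions + N`, so the action head scales naturally with the number of units in a map.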
### 4 Experiments

In this section, we design experiments to evaluate the following features of MaskMA. (1) Zero-shot capability and convergence of MaskMA. We conduct experiments on SMAC using only 11 maps for training and up to 60 maps for testing, assessing the model's ability to generalize to unseen scenarios. In SMAC tasks, agents must adeptly execute a set of skills such as alternating fire, kiting, focus fire, and positioning to secure victory. These attributes make zero-shot transfer profoundly challenging. (2) Effectiveness of the mask-based collaborative learning strategy and the generalizable action representation for different multi-agent tasks. We conduct ablation studies to find out how the sequence modeling form of MaskMA affects performance and how the training strategy and generalizable action representation boost the generalization of MaskMA. (3) Generalization of MaskMA to downstream tasks. We evaluate the model's performance on various downstream tasks, such as varied policies collaboration, teammate malfunction, and ad hoc team play. This helps us understand how the learned skills and strategies can be effectively adapted to different situations.

Table 3: **Varied Policies Collaboration on 8m_vs_9m.** Cooperating with a different-performance player who achieves a 41% win rate, MaskMA demonstrates excellent collaborative performance in diverse scenarios with varying numbers of agents of varied performance.

| # Agents with varied performance | 0 | 2 | 4 | 6 | 8 |
|---------------------------------|-----|-----|-----|-----|-----|
| Win rate | 86.72±4.06 | 89.84±2.59 | 79.69±5.18 | 62.50±7.33 | 41.41±6.00 |

Table 4: **Teammate Malfunction on 8m_vs_9m.** "Marine Malfunction Time" indicates the time of a marine malfunction during an episode. For instance, a value of 0.2 means that one marine begins to exhibit stationary behavior at 1/5th of the episode. Entry 1.0 signifies the original 8m_vs_9m configuration without any marine malfunctions.

| Marine Malfunction Time | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
|-------------------------|-----|-----|-----|-----|-----|
| Win Rate | 1.56±1.56 | 37.5±6.99 | 71.09±6.77 | 86.72±2.59 | 86.72±4.06 |

**Setup.** In SMAC (Samvelyan et al., 2019), players control ally units in StarCraft, using cooperative micro-tricks to defeat enemy units with built-in rules. Our approach differs from existing methods that only consider grouped scenarios, such as Easy, Hard, and Super-Hard maps. Instead, we extend the multi-agent decision-making tasks by combining different units in varying numbers. We include three races: Protoss (colossus, zealot, stalker), Terran (marauder, marine, and medivac), and Zerg (baneling, zergling, and hydralisk). Note that since StarCraft II does not allow units from different races to be on the same team, we have designed our experiments within this constraint. Firstly, we collect expert trajectories as offline datasets from the 11 training maps by utilizing expert policies trained with a strong RL method named ACE (Li et al., 2022). This yields 11 offline datasets, most of which contain 10k episodes with an average return exceeding 18. Then, we employ different methods to pretrain on the offline datasets and evaluate their zero-shot capabilities on 60 generated test maps. As shown in Table 1, we run 32 test episodes to obtain the win rate and report the average win rate as well as the standard deviation across 4 seeds. In the results we present, 'CE' stands for centralized execution and 'DE' denotes decentralized execution. In cases where no specific notation is provided, the results are based on DE. We take the MADT method as our baseline for comparison, which utilizes a causal transformer to consider the history of local observations and actions for an agent.

### 4.1 Performance on Pretraining Datasets

We assess MaskMA and the baselines on offline datasets spanning the 11 training maps. As shown in Table 1, MaskMA achieves a 90% average win rate on the 11 maps for both CE and DE, while MADT only achieves a 51.78% win rate for DE and struggles on the more challenging maps, with win rates as low as about 14%. One key observation from the results is that MaskMA consistently performs well in both centralized-training-centralized-execution (CTCE) and centralized-training-decentralized-execution (CTDE) settings, highlighting its flexibility and adaptability in various execution paradigms.
Figure 3a shows the testing curves of MaskMA and the baseline on the 11 training maps. MaskMA significantly outperforms the baseline with lower variance and achieves more than an 80% win rate on most maps within 0.5M training steps, showing the robustness and efficiency of MaskMA. While the mask-based collaborative learning strategy introduces a level of complexity that can cause some performance degradation during the pretraining phase compared to MaskMA without masking, it effectively forces MaskMA to adapt to varying ranges of observation, including both global and partial observations, and to learn robust representations that are beneficial for generalization.

### 4.2 MaskMA as an Excellent Zero-shot Learner

We present the results of our MaskMA and the baseline on zero-shot learning tasks in multi-agent scenarios. Specifically, we evaluate the different methods by their win rate on the 60 unseen test maps.

Table 5: **Ad hoc Team Play on 7m_vs_9m.** "Marine Inclusion Time" indicates the time of adding an additional marine during an episode. For example, a value of 0.2 represents adding one marine at 1/5th of the episode. Entry 1.0 signifies the original 7m_vs_9m setup without any additional marine.

| Marine Inclusion Time | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
|-----------------------|---------|---------|---------|---------|---------|
| Win Rate | 80.47±7.12 | 78.12±2.21 | 50.00±8.84 | 10.94±6.81 | 0±0 |

Table 6: **Ablation over the mask-based collaborative learning strategy (MCLS) and the generalizable action representation (GAR).** The baseline utilizes a plain transformer architecture. Each row adds a new component to the baseline, showcasing how each modification affects the overall performance.

| Setting | CE | DE |
|--------------------------|----------|----------|
| Transformer | 44.67±3.35 | 8.03±1.44 |
| + MCLS | 39.49±3.05 | 39.91±3.97 |
| + GAR | 91.26±4.21 | 41.55±4.38 |
| MaskMA (full model) | 90.91±3.56 | 89.35±3.92 |

Table 2 shows that MaskMA outperforms the baseline method in zero-shot scenarios by a large margin, successfully transferring knowledge to new tasks without requiring any additional fine-tuning. Specifically, MaskMA achieves a 79.71% win rate for CE and a 77.75% win rate for DE, while MADT only achieves a 43.72% win rate. These results indicate that MaskMA's mask-based collaborative learning strategy and generalizable action representation effectively address the challenges of partial observability and varying agent numbers and action spaces in multi-agent environments. Furthermore, we observe that MaskMA consistently performs well across varying levels of complexity, as demonstrated by the win rates in the different entity groups. In contrast, MADT achieves limited performance with high variance across the entity groups. This highlights the ability of MaskMA to generalize and adapt to diverse scenarios, a key feature of a robust multi-agent decision-making model, making it a versatile and reliable choice for multi-agent tasks.

### 4.3 Performance on Downstream Tasks

In this section, we provide various downstream tasks to further evaluate the strong generalization of MaskMA, including varied policies collaboration, teammate malfunction, and ad hoc team play.

**Varied Policies Collaboration.** In this task, some agents are controlled by the best policy while the other agents are controlled by policies of varied performance; it requires generalized policies that can coordinate with different operations at various levels.
We conducted simulations using a model with average performance (a 41% win rate) to represent a player with a different policy on the 8m_vs_9m map, where our team controls 8 marines to defeat 9 enemy marines. As shown in Table 3, MaskMA exhibits seamless collaboration with other agents under different scenarios where varying numbers of agents have different operations and performance. MaskMA dynamically adapts to the strategies of the other players and effectively coordinates actions. Furthermore, when the number of agents with different performance is 8, MaskMA itself does not control any agents; the win rate in this case therefore reflects that of the players controlled by the different policies alone.

**Teammate Malfunction.** In this task, teammates may malfunction or die due to external factors during inference. MaskMA is designed to handle such situations gracefully by redistributing tasks among the remaining agents and maintaining overall performance. As shown in Table 4, MaskMA exhibits robustness and adaptability in the face of unexpected teammate malfunction.

**Ad hoc Team Play.** In this task, agents need to quickly form a team with new agents during the execution of the task. The challenge lies in the ability of the model to incorporate new agents into the team and allow them to contribute effectively without disrupting the ongoing strategy. As shown in Table 5, MaskMA demonstrates excellent performance in ad hoc team play scenarios, adjusting its strategy to accommodate new agents and ensuring a cohesive team performance.

Table 7: **Mask type ablation.** We compare various mask types for pretraining, with fixed ratios from 0 to 0.8 and random ratios. Env represents using the local visibility of the agent in the environment.

| Mask Type | 0 | 0.2 | 0.5 | 0.8 | Env | Random (0, 1) |
|-----------|---------|---------|---------|---------|----------|--------------|
| CE | 91.26±4.21 | 89.70±3.81 | 88.21±3.78 | 82.81±4.83 | 55.97±4.67 | 90.91±3.56 |
| DE | 41.55±4.38 | 58.03±5.70 | 71.52±4.23 | 82.03±5.01 | 83.59±8.08 | 89.35±3.92 |

Overall, the results in this section demonstrate the versatility and generalization capabilities of MaskMA across various downstream tasks. These findings highlight the potential of MaskMA to advance the field of multi-agent decision-making and its applicability in real-world scenarios.

### 4.4 Ablation Study

We perform ablation studies to assess the contribution of each individual component: the mask-based collaborative learning strategy and the generalizable action representation. Our results are reported in Table 6, where we compare the performance of removing each component from MaskMA along with our modifications to the architecture. Furthermore, we conduct ablation studies to understand the influence of hyperparameters, including the timestep length and the sight mask ratio.

**Generalizable Action Representation.** We ablate the generalizable action representation by comparing our proposed action space to an alternative action space, which fixes a maximum action length and applies a task-specific action mask for each downstream task. As shown in Table 6, removing the generalizable action space leads to significant performance degradation (rows 4 and 2), emphasizing its importance in improving the model's generalization capabilities.

**Mask-based Collaborative Learning Strategy.** Table 6 (rows 4 and 3) shows that the model without masked training struggles to generalize to new settings, exhibiting significant performance degradation.
The mask-based collaborative learning strategy employed in MaskMA, while posing a challenging pretraining task, helps the model learn robust representations that are useful for generalization. This is evident from the performance improvement in the DE setting, where MaskMA demonstrates a better capacity to adapt to local-observation situations compared to the variant without the mask-based collaborative learning strategy. Intuitively, the random mask ratio is consistent with the inference process, where the number of enemies and allies in an agent's local observation gradually increases due to cooperative micro-operations such as positioning, kiting, and focusing fire. It is important to note that the "Transformer" row in Table 6 essentially represents behavior cloning, and our method outperforms behavior cloning by a significant margin. Furthermore, we provide a mask ratio analysis, as shown in Table 7. The results show that as the masking ratio increases, the performance of the model improves for decentralized execution (DE) while decreasing for centralized execution (CE). This suggests that an appropriate masking ratio helps strike a balance between learning useful representations and maintaining adaptability to the dynamic scenarios in an agent's local observation. In conclusion, a random-ratio mask is a simple yet effective way, considering both CE and DE, to absorb the advantages of the various fixed-ratio masks and the env mask. This approach allows MaskMA to demonstrate strong performance in both centralized and decentralized settings while maintaining the adaptability and generalization necessary for complex multi-agent tasks.

**Timestep Length.** To assess the importance of access to previous states, we ablate the timestep length K. As shown in Figure 3b, MaskMA performs better when using a longer timestep length. One hypothesis is that the POMDP property of the SMAC environment necessitates that policies take sufficient historical information into account in order to make informed decisions. Considering the balance between performance and efficiency, we use K=10 in the other experiments. This choice allows MaskMA to leverage enough historical information to make well-informed decisions while maintaining a reasonable level of computational complexity.

**Zero-Shot Capability with Pretraining Map Numbers.** Figure 3c demonstrates the relationship between zero-shot capability and the number of pretraining maps in MaskMA. As the number of training maps increases, the win rate also improves, indicating that the model is better equipped to tackle new situations. A marked uptick in win rate is observed when the map count rises from 5 to 8, underlining the value of training the model across varied settings. This trend in MaskMA offers exciting prospects for multi-agent decision-making. It implies that by augmenting the number of training maps or integrating richer, more intricate training scenarios, the model can bolster its adaptability and generalization skills.

Figure 3: (a) Learning curve. MaskMA consistently outperforms MADT on the average win rate over the 11 training maps. (b) Ablation on timestep length. MaskMA performs better when using a longer timestep length. (c) Ablation on pretraining map numbers. As the number of training maps increases, especially from 5 to 8, the model's performance on various unseen maps also improves, indicating better generalization to new tasks.
**Training Cost and Parameter Numbers.** MaskMA processes the inputs of all agents concurrently, achieving a notable degree of parallelism superior to MADT, which transforms multi-agent pretraining data into single-agent data. Consequently, MaskMA is considerably more time-efficient than MADT when trained over identical epochs. Specifically, MaskMA completes pretraining on 11 maps in 31 hours, whereas MADT requires 70 hours. For an equitable comparison, both MaskMA and MADT employ transformers of the same architecture. The sole distinction is in the final fully connected (FC) layer responsible for action output, making the parameter count for both models nearly identical.

5 LIMITATIONS AND FUTURE WORK

**Comparison to More Specialized Models.** In our study, we focused on utilizing sequence modeling and masking strategies for multi-agent decision-making. Although we achieved promising results, comparing MaskMA with specialized models designed for specific tasks or environments could offer deeper insights. In the future, we aim to conduct a comprehensive evaluation of MaskMA against these specialized models to better understand the strengths and weaknesses of MaskMA.

**More Data with Different Quality.** Our current evaluation was based on a limited dataset, which may not fully represent the diverse range of possible agent interactions and environmental conditions. We plan to explore the impact of different data qualities on the performance of our method. By including datasets with varying levels of noise, complexity, and agent behavior, we aim to gain a better understanding of our model's robustness and sensitivity to data quality. This will help us further refine MaskMA and enhance its performance in real-world scenarios with diverse data sources.

6 CONCLUSION

In this paper, we have addressed the challenges of zero-shot generalization and adaptability in multi-agent decision-making. To tackle these challenges, we introduced MaskMA, a masked pretraining framework for multi-agent decision-making that employs a transformer architecture, a mask-based collaborative learning strategy, and a generalizable action representation. Our proposed framework enables the model to learn effective representations and strategies by capturing the underlying correlations among agents and their actions while maintaining adaptability to dynamic scenarios. Extensive experiments on SMAC demonstrate the effectiveness of MaskMA in terms of zero-shot performance, generalization, and adaptability to various downstream tasks, such as varied policies collaboration, teammate malfunction, and ad hoc team play. Our findings encourage further exploration of more sophisticated masking strategies and efficient pretraining techniques for multi-agent decision-making.

REFERENCES

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.

Micah Carroll, Orr Paradise, Jessy Lin, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, et al. Uni[mask]: Unified inference in sequential decision problems. In *Advances in Neural Information Processing Systems*.

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling.
*Advances in neural information processing systems*, 34:15084–15097, 2021. Siyi Hu, Fengda Zhu, Xiaojun Chang, and Xiaodan Liang. Updet: Universal multi-agent reinforcement learning via policy decoupling with transformers. *arXiv preprint arXiv:2101.08001*, 2021. Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. *Advances in neural information processing systems*, 34:1273–1286, 2021. Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In *International Conference on Machine Learning*, pp. 9902–9915. PMLR, 2022. Chuming Li, Jie Liu, Yinmin Zhang, Yuhong Wei, Yazhe Niu, Yaodong Yang, Yu Liu, and Wanli Ouyang. Ace: Cooperative multi-agent q-learning with bidirectional action-dependency. In *Proceedings of the AAAI conference on artificial intelligence*, 2022. Fangchen Liu, Hao Liu, Aditya Grover, and Pieter Abbeel. Masked autoencoding for scalable and generalizable decision making. In *Advances in Neural Information Processing Systems*. Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, and Bo Xu. Offline pre-trained multi-agent decision transformer: One big sequence model tackles all smac tasks. *arXiv e-prints*, pp. arXiv–2112, 2021. Frans A Oliehoek and Christopher Amato. A concise introduction to decentralized pomdps, 2015. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *Advances in Neural Information Processing Systems*, 35:27730–27744, 2022. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 2022. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10684–10695, 2022. Mikayel Samvelyan, Tabish Rashid, Christian Schroeder de Witt, Gregory Farquhar, Nantas Nardelli, Tim GJ Rudner, Chia-Man Hung, Philip HS Torr, Jakob Foerster, and Shimon Whiteson. The starcraft multi-agent challenge. In *Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems*, pp. 2186–2188, 2019. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023.
4uaogMQgNL
Quantitatively, the UpFusion 3D model has much better numbers than the 2D model, but visually it loses a lot of geometric details compared to the 2D results. Is it limited by the representation power of the 3D NeRF? Or is it because the learned features are not very view-consistent?
UpFusion: Novel View Diffusion from Unposed Sparse View Observations Anonymous authors Paper under double-blind review

Figure 1: 3D Inference from Unposed Sparse Views. Given a sparse set of input images without associated camera poses, our proposed system UpFusion allows recovering a 3D representation and synthesizing novel views. Top: 1, 3, or 6 input images of an object. Bottom: Synthesized novel views using our approach.

Abstract We propose UpFusion, a system that can perform novel view synthesis and infer 3D representations for an object given a sparse set of reference images without corresponding pose information. Current sparse-view 3D inference methods typically rely on camera poses to geometrically aggregate information from input views, but are not robust in-the-wild when such information is unavailable or inaccurate. In contrast, UpFusion sidesteps this requirement by learning to implicitly leverage the available images as context in a conditional generative model for synthesizing novel views. We incorporate two complementary forms of conditioning into diffusion models for leveraging the input views: a) inferring query-view-aligned features using a scene-level transformer, and b) intermediate attentional layers that can directly observe the input image tokens. We show that this mechanism allows generating high-fidelity novel views while improving the synthesis quality given additional (unposed) images. We evaluate our approach on the Co3Dv2 dataset and demonstrate the benefits of our method over pose-reliant alternates. Finally, we also show that our learned model can generalize beyond the training categories, and hope that this provides a stepping stone to reconstructing generic objects from in-the-wild image collections.

1 INTRODUCTION

The long-standing problem of recovering 3D objects from 2D images has witnessed remarkable recent progress. In particular, neural field-based methods (Mildenhall et al., 2020) excel at recovering highly detailed 3D models of objects or scenes given densely sampled multi-view observations. However, in real-world scenarios such as casual capture settings and online marketplaces, obtaining dense multi-view images is often impractical. Instead, only a limited set of observed views may be available, often leaving some aspects of the object unobserved. With the goal of reconstructing similarly high-fidelity 3D objects in these settings, several learning-based methods (Yao et al., 2018; Yu & Gao, 2020; Zou et al., 2023) have pursued the task of sparse-view 3D inference. While these methods can yield impressive results, they crucially rely on known, accurate camera poses for the input images, which are often only available in synthetic settings or via privileged information from additional views, and are thus not currently applicable to in-the-wild sparse-view reconstruction where camera poses are not available. In this work, we seek to overcome the limitation of requiring known camera poses and address the task of 3D inference given unposed sparse views. Unlike pose-aware sparse-view 3D inference methods, which use geometry-based techniques to leverage the available input, we introduce an approach that can implicitly use the available views for novel-view generation. Specifically, we designate one of the input images as an anchor to define a coordinate frame, and adopt a scene-level transformer (Sajjadi et al., 2022) that implicitly incorporates all available input images as context to compute per-ray features for a desired query viewpoint.
Utilizing these query-aligned features, we can train a conditional denoising diffusion model to generate novel-view images. However, we observe that relying solely on query-aligned features learned from unposed input views does not fully utilize the available context. To further enhance the instance-specificity of the generations, we propose to also add 'shortcuts' via an attention mechanism in the diffusion process, allowing the model to directly attend to the input view features during generation. Furthermore, to enable generalization to categories unseen during training, we adopt a pretrained 2D foundation diffusion model (Rombach et al., 2022; Zhang & Agrawala, 2023) as initialization and adapt it to leverage the two forms of context-based conditioning. Finally, the novel-view images synthesized by the learned diffusion model, despite their high fidelity, are not guaranteed to be 3D-consistent. Therefore, we additionally extract 3D-consistent models via score-based distillation (Poole et al., 2022; Zhou & Tulsiani, 2023). We present results using the challenging real-world dataset Co3Dv2 (Reizenstein et al., 2021), which comprises multi-view sequences from 51 categories with 6-DoF pose variations. Given our unposed inference setup, we also introduce 'alignment-invariant' versions of common evaluation metrics to account for the possible coordinate mismatch between the predicted and ground-truth 3D representations. We find that our approach allows extracting signal from the available unposed views, that the performance improves with additional images, and that our system significantly improves over recent pose-aware methods relying on predicted camera poses. Finally, we also demonstrate the ability of our method to generalize beyond the training categories by showcasing its performance on unseen object classes.

2 RELATED WORK

**3D from Dense Multi-view Captures.** Multi-view observations of a scene naturally provide geometric cues for understanding its 3D structure, and this principle has been leveraged across decades to infer 3D from dense multi-view captures. Classical Multi-View Stereo (MVS) methods (Furukawa et al., 2015) leverage techniques such as structure from motion (SfM) (Schönberger & Frahm, 2016) to estimate camera poses for dense matching to 3D points. Recent neural incarnations (Mildenhall et al., 2020; Wang et al., 2021a) of these methods have further enabled breakthroughs in the quality of the obtained dense 3D reconstructions. While these methods rely on classical techniques for camera estimation, subsequent approaches (Lin et al., 2021; Bian et al., 2023; Tian et al., 2023) have relaxed this requirement and can jointly estimate geometry and recover cameras. However, these methods are unable to predict unseen regions and crucially rely on densely-sampled images as input – a requirement our work seeks to overcome.

Figure 2: **UpSRT** performs novel view synthesis from a set of unposed images. UpSRT consists of an encoder, a decoder, and an MLP. The encoder takes encoded image features as inputs and outputs a set-latent representation $c_s$. The decoder takes query rays as inputs and attends to the set-latent representation to get features $c_d$, which are then fed into an MLP to obtain the final novel-view RGB images. We make use of both $c_s$ and $c_d$ to provide conditional context to our model.

**Single-view to 3D.** On the other extreme from dense multi-view methods are approaches that aim to reconstruct a 3D representation from just a single view.
While easily usable, developing such systems is highly challenging as it requires strong priors to recover unknown information. A common paradigm used to address this problem is training models conditioned on encoded image features to directly predict 3D geometry (e.g., voxels (Girdhar et al., 2016), meshes (Wang et al., 2018; Gkioxari et al., 2019; Ye et al., 2021), point clouds (Fan et al., 2017), or implicit functions (Mescheder et al., 2019; Xu et al., 2019; Cheng et al., 2023)). However, given the uncertain nature of the task, these methods have regression-based objectives, which limit their generation quality. More recently, there has been growing interest in distilling large text-to-image diffusion models (Song et al., 2020; Saharia et al., 2022; Rombach et al., 2022) to generate 3D representations (Poole et al., 2022; Wang et al., 2023a,b; Chen et al., 2023). Building upon these advances, several distillation-based (Liu et al., 2023b; Qian et al., 2023; Deng et al., 2023; Melas-Kyriazi et al., 2023; Tang et al., 2023; Xu et al., 2022) and distillation-free (Liu et al., 2023a,c) single-image-to-3D methods were proposed. While these methods can infer detailed 3D, they cannot benefit from the additional information provided by extra posed or unposed views. Moreover, as they hallucinate details in unobserved regions, the reconstructed object may significantly differ from the one being imaged. If a user aims to faithfully capture a specific object of interest in detail, single-view methods are fundamentally ill-suited for this task.

**Sparse-view to 3D.** With the goal of reducing the burden of the multi-view capture process while still enabling detailed capture of specific objects of interest, there has been growing interest in sparse-view 3D inference methods. By leveraging the benefits of both multi-view geometry and learning, regression-based methods achieve 3D consistency by using re-projected features obtained from the input views (Reizenstein et al., 2021; Wang et al., 2021b; Yu et al., 2021). However, the results tend to be blurry due to the mean-seeking nature of regression methods under uncertainty. To improve the quality of generations, another stream of work (Chan et al., 2023; Rombach et al., 2021; Kulhánek et al., 2022; Zhou & Tulsiani, 2023) formulates the problem as a probabilistic generation task. These methods achieve better perceptual quality, yet usually require precise pose information, which is often not practically available. To overcome this issue, one may either leverage recent sparse-view pose estimation methods (Sinha et al., 2023; Zhang et al., 2022) in conjunction with state-of-the-art novel-view synthesis methods, or consider methods that optimize poses jointly with the novel-view synthesis objective (Smith et al., 2023; Jiang et al., 2022). However, the computation of explicit poses may not always be robust, and we empirically show that this leads to poor performance. Closer to our approach, SRT (Sajjadi et al., 2022) and RUST (Sajjadi et al., 2023) allow novel view synthesis without explicit pose estimation (i.e., directly from unposed sparse views). However, their regression-based pipelines limit the quality of the synthesized outputs.

### 3 APPROACH

Our goal is to infer a 3D representation of an object given a sparse set of images.
While prior works (Yu et al., 2021; Zhou & Tulsiani, 2023; Chan et al., 2023) typically aggregate information from the input views using geometric projection and unprojection, they crucially rely on the availability of accurate camera poses, which are not readily available in-the-wild. We instead aim to tackle the task of 3D inference given *unposed* sparse views.

Figure 3: **UpFusion 2D** is the proposed conditional diffusion model performing novel view synthesis conditioned on information extracted from a set of unposed images. To reason about the query view, UpFusion takes as additional inputs the view-aligned decoder features $c_d$ obtained from the UpSRT decoder. To further allow the model to attend to details from the input views, UpFusion conditions on the set-latent representation $c_s$ via attentional layers.

Towards building a system capable of 3D inference in this unposed setting, we propose a mechanism for implicitly leveraging the available images as context when generating novel views. Specifically, we adapt the Unposed Scene Representation Transformer (UpSRT) (Sajjadi et al., 2022), a prior work that leverages transformers as a mechanism for implicitly aggregating information from input views and computes query-view-aligned features for view synthesis. However, instead of its mean-seeking regression objective, which results in blurry renderings, we enable probabilistic sparse-view synthesis by using the internal representations of UpSRT to condition a diffusion model that performs novel view synthesis. While our diffusion model can yield high-fidelity generations, its outputs are not 3D-consistent. To obtain a consistent 3D representation, we then train instance-specific neural representations (Müller et al., 2022; Tang, 2022) that maximize the likelihood of their renderings under the learned generative model. We detail our approach below, but first briefly review UpSRT and the denoising diffusion models (Ho et al., 2020) that our work builds on.

### 3.1 Preliminaries

#### 3.1.1 Unposed Scene Representation Transformer

Given a set of $N$ images $\mathcal{I} = \{I_1, I_2, ..., I_N\}$, UpSRT (Sajjadi et al., 2022) seeks to generate novel-view images by predicting the RGB color $C(r)$ for any query ray $r$. As illustrated in Figure 2, it first extracts patch-wise features for each image $I_i$ with an image encoder $U_I$. Then, it uses an encoder transformer $U_E$ to obtain a set-latent representation $c_s$. Finally, it uses a decoder transformer $U_D$, which attends to $c_s$, followed by an MLP, to predict the RGB color. In summary, the UpSRT workflow can be represented by the following equations:

$$c_s = U_E(\{U_I(I_i)\}), \quad C(r) = \text{MLP}(U_D(r \mid c_s))$$

We pre-train an UpSRT model using a pixel-level regression loss and leverage it for subsequent generative modeling. While we follow a similar design, we make several low-level modifications to the originally proposed UpSRT architecture (*e.g.*, an improved backbone, differences in positional encoding, etc.), and we expand on these in the appendix.
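To ground this notation, a minimal PyTorch-style sketch of the UpSRT forward pass is shown below. The patch embedder, the ray parameterization, and all layer sizes are illustrative placeholders rather than our actual architecture (which, as noted above, uses an improved backbone and modified positional encodings).

```python
import torch
import torch.nn as nn

class UpSRTSketch(nn.Module):
    """Minimal sketch of c_s = U_E({U_I(I)}) and C(r) = MLP(U_D(r | c_s))."""

    def __init__(self, d=768, n_heads=8, n_layers=4, patch_dim=3 * 16 * 16):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, d)   # stand-in for U_I
        self.encoder = nn.TransformerEncoder(        # U_E: set-latent encoder
            nn.TransformerEncoderLayer(d, n_heads, batch_first=True), n_layers)
        self.ray_embed = nn.Linear(6, d)             # embeds a ray (origin, direction)
        self.decoder = nn.TransformerDecoder(        # U_D: rays attend to c_s
            nn.TransformerDecoderLayer(d, n_heads, batch_first=True), n_layers)
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 3))

    def forward(self, patches, rays):
        # patches: (B, n_tokens, patch_dim) flattened patches from all N images.
        # rays:    (B, n_rays, 6) query rays in the first camera's coordinate frame.
        c_s = self.encoder(self.patch_embed(patches))  # set-latent representation
        c_d = self.decoder(self.ray_embed(rays), c_s)  # view-aligned decoder features
        return self.mlp(c_d), c_s, c_d                 # per-ray colors C(r)
```

Both intermediate representations are returned because UpFusion reuses them: the set-latent tokens $c_s$ and the view-aligned decoder features $c_d$ serve as the two conditioning signals described in Sec. 3.2.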
#### 3.1.2 Denoising Diffusion

Denoising diffusion models (Ho et al., 2020) seek to learn a generative model over data samples $x$ by learning to reverse a forward process where noise is gradually added to the original samples. The learning objective can be reduced to a denoising error, where a diffusion model $\epsilon_\phi$ is trained to estimate the noise added to a current sample $x_t$:

$$L_{DM} = \mathbb{E}_{x_0, t, \epsilon \sim \mathcal{N}(0, 1)}[\|\epsilon - \epsilon_\phi(x_t, t)\|_2^2]$$

While the above objective summarizes an unconditional diffusion model, it can be easily adapted to learn conditional generative models $p(x|y)$ by adding a condition $y$ (such as a set of unposed images) to the input of the denoising model $\epsilon_\phi(x_t, t, y)$.

### 3.2 Probabilistic View Synthesis using Sparse Unposed Views

We aim to learn a generative model over novel views of an object given a sparse set of unposed images. We note that there is an inherent ambiguity in defining the coordinate frame in which this query view is specified, and (partially) resolve this by using the first input image as an anchor to define the coordinate system. Given this, our goal is to learn the distribution $p(I|\mathcal{I}, \pi)$, where $\pi$ denotes a query pose, $\mathcal{I}$ denotes the set of unposed images, and $I$ denotes the query-view image. Instead of learning the distribution directly in pixel space, we follow the common practice of learning this distribution in a latent space, $p(x|\mathcal{I}, \pi)$, using the pre-trained encoder and decoder corresponding to this latent space (Rombach et al., 2022): $x = E(I); \; I = D(x)$. We model this probability distribution by training a conditional diffusion model which leverages the available unposed images as context, and seek an architecture that embraces several desirable design principles. First, we note that such a diffusion model must be able to (implicitly) reason about the query view it is tasked with generating in the context of the available input, and we leverage the UpSRT encoder-decoder framework to enable this. While the decoder features from UpSRT can ground the query-view generation, we note that these may abstract away salient details in the input, and we propose to complement them by allowing the generative model to directly leverage the patch-wise latent features and more easily 'copy' content from the input views. Lastly, to enable efficient training and generalization beyond the training data, we propose to adapt off-the-shelf diffusion models for view-conditioned generation.

**View-aligned Features for Image Generation.** Given a target view $\pi$, we construct a set of rays $R$ corresponding to a grid of 2D pixel locations in this view. We query the UpSRT decoder with this set of rays to obtain view-aligned decoder features $c_d$ of the same resolution as the image latent $x$. As illustrated in Figure 3, these query-aligned features are concatenated with the (noisy) image latents to serve as inputs to the denoising diffusion model.

**Incorporating Direct Attention to Input Patches.** To allow the generation model to directly incorporate details visible in the input views, we also leverage the set-latent feature representation $c_s$ extracted by the UpSRT encoder. Importantly, this representation comprises per-patch features aligned with the input images and allows efficiently 'borrowing' details visible in these images. Unlike the view-aligned decoder features, which can be spatially concatenated with the noisy diffusion input, we condition on these set-latent features via attentional layers in the generation model.
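As a simplified illustration of how these two signals enter training, consider the sketch below. The `unet` call signature is an assumption, and channel-concatenating $c_d$ collapses the ControlNet branch described next into a single call; the condition dropout shown anticipates the classifier-free guidance training in Sec. 3.4.

```python
import torch
import torch.nn.functional as F

def diffusion_step(unet, x0, c_d, c_s, null_c_d, null_c_s,
                   alphas_cumprod, drop_prob=0.1):
    """Sketch of one conditional denoising training step.

    x0:  (B, C, H, W) clean image latents x = E(I).
    c_d: (B, C_d, H, W) view-aligned UpSRT decoder features, spatially
         concatenated with the noisy latent (Fig. 3).
    c_s: (B, T, d) UpSRT set-latent tokens, consumed by the attentional
         layers in place of a text encoding.
    """
    B = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x0.device)
    eps = torch.randn_like(x0)
    a = alphas_cumprod[t].view(B, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps        # forward (noising) process

    # Condition dropout (Sec. 3.4): occasionally replace both conditionings
    # with null tokens so the unconditional score is also learned (for CFG).
    drop = torch.rand(B, device=x0.device) < drop_prob
    c_d = torch.where(drop.view(B, 1, 1, 1), null_c_d.expand_as(c_d), c_d)
    c_s = torch.where(drop.view(B, 1, 1), null_c_s.expand_as(c_s), c_s)

    eps_pred = unet(torch.cat([x_t, c_d], dim=1), t, context=c_s)
    return F.mse_loss(eps_pred, eps)                  # conditional form of L_DM
```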
**Adapting Large-scale Diffusion Models for Novel-view Synthesis.** Instead of training our generative model from scratch, we aim to take advantage of the strong priors learned by large diffusion models such as Stable Diffusion (Rombach et al., 2022). To this end, we use a modified version of the ControlNet architecture (Zhang & Agrawala, 2023) to adapt a pre-trained Stable Diffusion model to incorporate the additional conditioning $c_d, c_s$ for view generation.

**Putting it Together.** In summary, we reduce the task of modeling $p(x|\mathcal{I}, \pi)$ to learning a denoising diffusion model $p_\phi(x|c_d, c_s)$, and leverage the ControlNet architecture to incorporate the two conditioning features and learn a denoising model $\epsilon_\phi(x_t, t, c_d, c_s)$. More specifically, ControlNet naturally allows adding the spatial feature $c_d$ via residual connections to the spatial layers of the UNet in a pre-trained Stable Diffusion model. To incorporate the set-level features $c_s$, we modify the ControlNet encoder blocks to use $c_s$ in place of a text encoding (see the appendix for details). We can train such a model using any multi-view dataset, where we train the denoising diffusion model to generate the underlying image from a query view given a variable number of observed input views.

### 3.3 Inferring 3D Consistent Representations

While the proposed conditional diffusion model can provide high-fidelity renderings from query views, the generated views are not 3D-consistent. To obtain a 3D representation given the inferred distribution over novel views, we subsequently optimize an instance-specific neural representation. Towards this, we follow SparseFusion (Zhou & Tulsiani, 2023), which seeks neural 3D modes by optimizing the likelihood of their renderings, adapting a Score Distillation Sampling (SDS) (Poole et al., 2022) loss to view-conditioned generative models. Specifically, we optimize a neural 3D representation $g_\theta$ by ensuring its renderings have high likelihood under our learned distribution $p(I|\mathcal{I}, \pi)$. We do so by minimizing the difference between the renderings of the instance-specific neural model and the denoised predictions from the learned diffusion model. Denoting by $g_\theta(\pi)$ the rendering of the neural 3D representation from viewpoint $\pi$, and by $\hat{x}_0$ the denoised prediction inferred from the learned diffusion model $\epsilon_\phi(x_t; t, c_d, c_s)$, the training objective can be specified as:

$$\mathcal{L}_{3D} = \mathbb{E}_{t, \epsilon, \pi}[\|g_\theta(\pi) - D(\hat{x}_0)\|^2]$$

Unlike SparseFusion (Zhou & Tulsiani, 2023), which additionally uses a rendering loss on the available input views using known cameras, we rely only on the above denoising objective for optimizing the underlying 3D representation given unposed input views.
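One step of this optimization can be sketched as follows. Beyond the render-noise-denoise-match loop implied by $\mathcal{L}_{3D}$, the helper names (`add_noise`, `ddim_denoise`, `encode`, `decode`) and the annealing constants are illustrative assumptions rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def distillation_step(nerf, diffusion, vae, pose, c_d, c_s, step, max_steps):
    """Sketch of the L_3D mode-seeking objective for a sampled viewpoint pi.

    nerf(pose) renders g_theta(pi); `diffusion` and `vae` are frozen, and
    c_d, c_s are the UpSRT conditionings for the sampled pose.
    """
    render = nerf(pose)                              # g_theta(pi), (1, 3, H, W)

    # Annealed time schedule (Sec. 3.4): sample smaller noise levels as
    # optimization progresses, so late updates refine rather than redraw.
    t_max = 0.98 - 0.68 * (step / max_steps)
    t = int(torch.empty(()).uniform_(0.02, t_max).item() * diffusion.num_timesteps)

    with torch.no_grad():
        x = vae.encode(render.detach())              # latent of the current rendering
        x_t = diffusion.add_noise(x, t)              # forward process to level t
        x0_hat = diffusion.ddim_denoise(x_t, t, c_d=c_d, c_s=c_s)
        target = vae.decode(x0_hat)                  # D(x0_hat)

    # || g_theta(pi) - D(x0_hat) ||^2, backpropagated into theta only.
    return F.mse_loss(render, target)
```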
### 3.4 Training Details

We follow a multi-stage training procedure to optimize our models. We first train the UpSRT model separately using a reconstruction loss on the colors predicted for query rays given the set of reference images $\mathcal{I}$. Then, we train the denoising diffusion model using the conditioning information from the pre-trained UpSRT, which is frozen in this stage. To enable the use of classifier-free guidance (Ho & Salimans, 2021) during inference, we train our diffusion model in the unconditional mode for a small fraction of the time. We do this by following the condition dropout procedure used in (Brooks et al., 2023; Liu et al., 2023b), which randomly replaces the conditioning information with null tokens (for more details, see B.2). Once the diffusion model is trained, we can extract a 3D representation for an object by optimizing an Instant-NGP (Müller et al., 2022; Tang, 2022) using the neural mode-seeking objective discussed in Section 3.3. We use DDIM (Song et al., 2020) for fast multi-step denoising. Inspired by Wang et al. (2023b), we follow an annealed time schedule for score distillation. We also use some regularization losses while training the NeRF, as in Zhou & Tulsiani (2023). For more details, please refer to Section B.3.

### 4 Experiments

### 4.1 Experimental Setup

#### 4.1.1 Dataset

We train and evaluate our models on Co3Dv2 (Reizenstein et al., 2021), a large-scale dataset with real multi-view images of objects from 51 categories. Following (Zhang et al., 2022; Lin et al., 2023), we train our model on 41 categories and hold out 10 categories to test the ability of our method to generalize to unseen categories. We use the fewview-train split for training and the fewview-dev split for evaluation. We limit our focus to modeling only objects and not their backgrounds. To this end, we create a white background for our objects by using the masks available in the dataset. As our full method (as well as some baselines) optimizes instance-specific neural representations, which can take about 1 hour per instance, we limit our evaluations to 5 object instances per category. We note that popular state-of-the-art single-view baselines are trained on Objaverse (Deitke et al., 2023). Hence, to allow a fair comparison, we also fine-tune a version of our model (already pre-trained on Co3Dv2) on Objaverse renderings. We denote versions of our model fine-tuned on Objaverse with † as a superscript (for example, UpFusion† (3D)).

Figure 4: **Qualitative comparison with sparse-view baselines.** We compare UpFusion with baseline methods using 3 and 6 unposed images as inputs. SparseFusion fails to capture the correct geometry due to the imperfect camera poses estimated by RelPose++. UpSRT generates blurry results due to the nature of regression-based methods. On the contrary, UpFusion 2D synthesizes sharp outputs with correct object poses. UpFusion 3D further improves the 3D consistency.

| Type | Method | PSNR-A (↑) | SSIM-A (↑) | LPIPS-A (↓) |
|----------|-------------------------|------------|------------|-------------|
| | | 1V | 3V | 6V | 1V | 3V | 6V | 1V | 3V | 6V |
| Posed | SparseFusion (GT) | — | 22.41 | 24.02 | — | 0.79 | 0.81 | — | 0.20 | 0.18 |
| Unposed | SparseFusion (RelPose++)| — | 17.76 | 17.12 | — | 0.67 | 0.64 | — | 0.30 | 0.33 |
| | UpSRT | 16.84 | 17.75 | 18.36 | 0.73 | 0.74 | 0.75 | 0.34 | 0.32 | 0.31 |
| | UpFusion (2D) | 16.54 | 17.12 | 17.41 | 0.71 | 0.72 | 0.73 | 0.23 | 0.22 | 0.22 |
| | UpFusion (3D) | 18.17 | 18.68 | 18.96 | 0.75 | 0.76 | 0.76 | 0.22 | 0.21 | 0.21 |

Table 1: **Sparse-view synthesis evaluation on seen categories (41 categories).** We conduct comparisons using 5 samples per category and report the average across these. UpFusion performs favorably against the baseline methods and demonstrates the capability to improve its results when more views are provided. Moreover, UpFusion 3D consistently improves upon the results of UpFusion 2D.
### 4.1.2 Evaluating View Synthesis in Unposed Settings

We are interested in evaluating our performance using standard view-synthesis metrics such as PSNR, SSIM, and LPIPS (Zhang et al., 2018). However, these pixel-aligned metrics are not well suited for evaluating unposed view synthesis due to the fundamental ambiguities between the coordinate systems of the ground truth and the prediction. In particular, given unposed images, there can be an ambiguity up to a similarity transform between the two coordinate frames. While anchoring the coordinate orientation to the first camera reduces this uncertainty, we still need to account for scaling and shift between predictions and ground truth. We highlight this issue in Figure 12, where we observe that despite generally matching the ground truth, the prediction is misaligned in pixel space. To circumvent this issue, we compute aligned versions of the standard image reconstruction metrics (PSNR-A, SSIM-A, and LPIPS-A) by first optimizing for an affine image warping transform $W_A$ that best matches a predicted image to its corresponding ground truth and then computing the metric. In other words, we evaluate aligned metrics as $\min_{W_A} M(W_A(x), y)$, where $M$ is a metric, $x$ is a predicted image, and $y$ is the ground truth image. In practice, for expediency, we compute the optimal transform by minimizing a pixel-wise L2 error instead of computing a per-metric warp (a minimal sketch follows at the end of Section 4.1).

Figure 5: **Generalization beyond training categories.** We show results for UpFusion (3D) across object categories not seen in training. For each instance, we present the 1, 3, or 6 unposed input views (left), as well as 4 novel view renderings (right). We observe that despite not being trained on these categories, UpFusion is able to accurately infer the underlying 3D structure and generate detailed novel views.

### 4.1.3 Baselines

We highlight the benefits of our approach by comparing it to prior pose-dependent and unposed novel-view generation techniques. Specifically, we compare our 2D diffusion model ('UpFusion 2D') and obtained 3D representations ('UpFusion 3D') against the following baselines:

**SparseFusion** (Zhou & Tulsiani, 2023) is a current state-of-the-art method for pose-dependent sparse-view inference on Co3Dv2. We compare against its performance when using a recent sparse-view pose estimation system, RelPose++ (Lin et al., 2023), and also report its performance using GT camera poses as an upper bound.

**UpSRT.** As a representative approach for view synthesis from unposed images, we compare against the prediction from the UpSRT (Sajjadi et al., 2022) backbone used in our approach.

**FORGE** (Jiang et al., 2022) is a method that jointly optimizes for poses while being trained on a novel-view synthesis objective. As FORGE uses the GSO dataset (Downs et al., 2022) to demonstrate its generalization capability, we compare it against our Objaverse fine-tuned UpFusion† (3D).

**Single-view methods.** To highlight the benefit of using more input views, we compare UpFusion† (3D) to two representative state-of-the-art single-view baselines: Zero-1-to-3 (Liu et al., 2023b) and One-2-3-45 (Liu et al., 2023a). For Zero-1-to-3, we include comparisons with two versions – the original version which uses SJC (Wang et al., 2023a) and the highly optimized threestudio implementation (Guo et al., 2023) (which uses additional tricks to aid 3D distillation). We compare against these baselines on the GSO dataset.
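Returning to the aligned metrics of Section 4.1.2, here is a minimal sketch of fitting the affine warp $W_A$ by gradient descent on the pixel-wise L2 error; the choice of optimizer, iteration count, and learning rate are illustrative assumptions, as the exact solver is not specified above:

```python
import torch
import torch.nn.functional as F

def aligned_mse(pred, gt, iters=200, lr=1e-2):
    """Fit an affine image warp of `pred` to `gt` by pixel-wise L2, then score.

    pred, gt: (1, 3, H, W) tensors in [0, 1]. Returns the aligned MSE, which can
    be plugged into PSNR = -10 * log10(mse); SSIM-A/LPIPS-A reuse the same warp.
    """
    theta = torch.tensor([[[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0]]], requires_grad=True)  # identity init
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(iters):
        grid = F.affine_grid(theta, list(pred.shape), align_corners=False)
        loss = F.mse_loss(F.grid_sample(pred, grid, align_corners=False), gt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        grid = F.affine_grid(theta, list(pred.shape), align_corners=False)
        return F.mse_loss(F.grid_sample(pred, grid, align_corners=False), gt).item()
```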
### 4.2 Results

#### 4.2.1 Novel-view Synthesis on Co3Dv2

**Comparisons against Sparse-view Methods.** We compare UpFusion with baseline methods on the categories seen during training, as shown in Table 1. UpFusion performs favorably against both UpSRT and unposed SparseFusion. Furthermore, UpFusion consistently improves the prediction when more views are provided. However, there is still room for improvement compared to the methods using ground-truth poses. In Figure 4, we qualitatively present the novel view synthesis results. SparseFusion can capture some details visible in the input views but largely suffers due to the error in input poses. UpSRT, on the other hand, can robustly generate coarse renderings, but is unable to synthesize high-fidelity outputs from any viewpoints. Our 2D diffusion model, UpFusion 2D, generates higher-fidelity images that improve over the baselines in the perceptual metrics. Finally, the 3D-consistent inferred representation, UpFusion 3D, yields the best results.

| Method | PSNR-A (↑) 1V / 3V / 6V | SSIM-A (↑) 1V / 3V / 6V | LPIPS-A (↓) 1V / 3V / 6V |
|---|---|---|---|
| UpSRT | 16.75 / 17.57 / 18.06 | 0.73 / 0.74 / 0.74 | 0.35 / 0.33 / 0.32 |
| UpFusion (2D) | 16.33 / 17.04 / 17.38 | 0.70 / 0.71 / 0.72 | 0.25 / 0.23 / 0.23 |
| UpFusion (3D) | 18.27 / 18.83 / 19.11 | 0.75 / 0.76 / 0.76 | 0.23 / 0.22 / 0.22 |

Table 2: **Sparse-view synthesis evaluation on unseen categories (10 categories).** We conduct comparisons using 5 samples per category and report the average across these. We observe a comparable performance to the results on seen categories.

| # Input Views | Method | PSNR (↑) | SSIM (↑) | LPIPS (↓) |
|---|---|---|---|---|
| 1V | Zero-1-to-3 (SJC) | 18.72 | 0.90 | 0.12 |
| 1V | Zero-1-to-3 (TS) | 21.71 | 0.91 | 0.09 |
| 1V | One-2-3-45 | 17.77 | 0.87 | 0.15 |
| 1V | UpFusion† (3D) | 20.52 | 0.89 | 0.12 |
| 6V | FORGE | 17.40 | 0.88 | 0.15 |
| 6V | UpFusion† (3D) | 22.51 | 0.91 | 0.08 |

Table 3: **Novel-view synthesis evaluation on GSO.** We compare UpFusion 3D to single-view baselines as well as a sparse-view pose-optimization baseline on the GSO dataset, which is out of distribution for all methods.

**Characterizing Generalization.** As UpFusion is trained upon a pre-trained large-scale diffusion model providing strong general priors, the learned novel view synthesis capability is expected to generalize to categories beyond training. We evaluate UpFusion on 10 unseen categories, as shown in Table 2. Encouragingly, we find that the performance does not degrade compared to the results on seen categories, and believe this highlights the potential of our approach to perform in-the-wild sparse-view 3D inference. We also depict some qualitative results on unseen objects in Figure 5.

#### 4.2.2 Novel-view Synthesis on GSO

We compare UpFusion† (3D) to two state-of-the-art single-view baselines (Zero-1-to-3 and One-2-3-45) and a sparse-view baseline (FORGE) on 20 randomly sampled instances from the GSO dataset. For Zero-1-to-3, we compare with both the original SJC implementation and the threestudio (TS) implementation. From Table 3, we can observe that UpFusion† (3D), while using 6 input views, is able to outperform all baselines. This demonstrates the ability of our method to effectively incorporate more information when additional views are available, which single-view baselines cannot.
Moreover, we can see that our model significantly outperforms FORGE, which also uses 6 input views, and we believe this is because our approach allows bypassing explicit pose prediction, which can lead to inaccurate predictions. Qualitative comparisons in Figure 6 further demonstrate the effectiveness of our approach in utilizing information from multiple unposed images.

Figure 6: **Qualitative comparison on GSO.** We compare UpFusion$^\dagger$ (3D) to two single-view baselines and one sparse-view baseline (FORGE) on the GSO dataset. For each instance, single-view methods use only the image with the black border as input, whereas sparse-view methods use all input images. We can observe that UpFusion$^\dagger$ (3D), while using 6 input views, is able to better understand the 3D structure of the object than the single-view baselines. Moreover, it is able to incorporate information from the 6 input views much better than the sparse-view baseline.

## 5 Discussion

We presented UpFusion, an approach for novel-view synthesis and 3D inference given unposed sparse views. While our approach provides a mechanism for effectively leveraging unposed images as context, we believe that several challenges still remain towards the goal of sparse-view 3D inference in-the-wild. In particular, although our approach allows high-fidelity 2D generations, these are not always precisely consistent with the details in the (implicitly used) input views. Moreover, while our approach's performance does improve given additional context views, it does not exhibit a strong scaling similar to pose-aware methods that can geometrically identify relevant aspects of input images. Finally, while our work provided a possible path for 3D inference from unposed views by sidestepping the task of pose estimation, it remains an open question whether explicit pose inference for 3D estimation might be helpful in the long term.
9Gvs64deOj
Can the authors explain why it is particularly interesting to train clients in the wireless setting and include the noisy channel in their estimation rather than using wireless communication protocols to encode and decode messages (if needed) and then conducting FedAvg or variants of FedAvg? By construction of the algorithm, the clients send much less but much more frequently. Why and when is this a more interesting approach?
Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method

Anonymous authors
Paper under double-blind review

Abstract

Federated learning (FL) is a novel approach to machine learning that allows multiple edge devices to collaboratively train a model without disclosing their raw data. However, several challenges hinder the practical implementation of this approach, especially when devices and the server communicate over wireless channels, as it suffers from communication and computation bottlenecks in this case. By utilizing a communication-efficient framework, we propose a novel zero-order (ZO) method with two types of gradient estimators, one-point and two-point, that harnesses the nature of the wireless communication channel without requiring knowledge of the channel state coefficient. It is the first method that includes the wireless channel in the learning algorithm itself instead of wasting resources to analyze it and remove its impact. The two main difficulties of this work are that in FL, the objective function is usually not convex, which makes the extension of FL to ZO methods challenging, and that including the impact of wireless channels requires extra attention. However, we overcome these difficulties and comprehensively analyze the proposed zero-order federated learning (ZOFL) framework. We establish its convergence theoretically, and we prove a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ with the one-point estimate and $O(\frac{1}{\sqrt{K}})$ with the two-point one in the nonconvex setting. We further demonstrate the potential of our algorithms with experimental results, taking into account independent and identically distributed (IID) and non-IID device data distributions.

1 INTRODUCTION

Zero-order (ZO) methods are a subfield of optimization that assume that first-order (FO) information, i.e., access to function gradients, is unavailable. ZO optimization is based on estimating the gradient using function values queried at a certain number of points. The number of function queries depends on the assumptions of the problem. For example, in multi-point gradient estimates (Duchi et al., 2015; Agarwal et al., 2010), the gradient is constructed from differences of function values obtained at several random or predefined points. However, these estimates assume that the stochastic setting stays the same during all of the queries. For example, for functions $\theta \mapsto f(\theta, S)$ subject to a stochastic variable $S$, two-point gradient estimates have the form

$$g = \frac{d}{2\gamma} \left( f(\theta + \gamma \Phi, S) - f(\theta - \gamma \Phi, S) \right) \Phi,$$

with $\theta \in \mathbb{R}^d$ the optimization variable, $\gamma > 0$ a small value, and $\Phi$ a random vector with a symmetric distribution. By contrast, one-point estimates use only one function value (Flaxman et al., 2004; Li & Assaad, 2021; Mhanna & Assaad, 2022), principally obtained at a random point,

$$g = \frac{d}{\gamma} f(\theta + \gamma \Phi, S)\Phi,$$

and are suited to settings that are continuously changing during optimization. This is an important property as it resonates with many realistic applications, like when the optimization is performed in wireless environments or is based on previous simulation results. Recently, an appeal to ZO optimization is emerging in the machine-learning community, where optimizers are based on gradient methods (a minimal sketch of both estimators is given below).
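To fix ideas, the two estimators can be sketched as follows. The quadratic loss and the Rademacher perturbation directions are illustrative assumptions; any loss and any $\Phi$ with a symmetric distribution would do:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta, S):
    # Toy stochastic loss for illustration: S plays the role of the random data.
    return float(np.sum((theta - S) ** 2))

def two_point_estimate(theta, gamma, S):
    # Requires two queries under the SAME stochastic realization S.
    d = theta.size
    Phi = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)
    return d / (2 * gamma) * (f(theta + gamma * Phi, S) - f(theta - gamma * Phi, S)) * Phi

def one_point_estimate(theta, gamma, S):
    # A single query suffices, so S may change between iterations.
    d = theta.size
    Phi = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)
    return d / gamma * f(theta + gamma * Phi, S) * Phi
```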
Examples of this trend include reinforcement learning (Vemula et al., 2019; Malik et al., 2019), generating contrastive explanations for black-box classification models (Dhurandhar et al., 2019), and effecting adversarial perturbations on such models (Ilyas et al., 2018; Chen et al., 2019). On the other hand, with the massive amounts of data generated or accessed by mobile devices, a growing research interest in both academia and industry (Bonawitz et al., 2019) is focused on federated learning (FL) (McMahan et al., 2017), as it is a practical solution for training models on such data without the need to log them to a server. A lot of effort has been invested in developing first-order (McMahan et al., 2017; Zhang et al., 2021; Wang et al., 2021) and second-order (Elgabli et al., 2022; Li et al., 2019) methods to improve the efficacy of FL. These methods typically require access to the gradient or the Hessian of the local objective functions in their implementation to solve the optimization problem. However, using and exchanging such information raises many challenges, such as expensive communication and computation and privacy concerns (Li et al., 2020).

Interest in learning over wireless environments (Yang et al., 2020; Amiri & Gündüz, 2020; Sery & Cohen, 2020; Guo et al., 2021; Sery et al., 2021) has grown recently, with the increasing number of devices connected to servers through cellular networks. In this paper, we are interested in this scenario, illustrated in Figure 1. Similarly to the aforementioned work, we examine the case of analog communications between the devices and the server. However, it is a challenging problem: when information is sent over the wireless channel, it becomes subject to a perturbation induced by the channel. This perturbation is not limited to additive noise, as noise is, in fact, due to thermal changes at the receiver. The channel acts as a filter for the transmitted signal (Tse & Viswanath, 2005; Björnson & Sanguinetti, 2020),

$$\hat{x} = Hx + n, \quad (1)$$

where $x$ and $\hat{x} \in \mathbb{R}^d$ are the sent and received signals, respectively, $H \in \mathbb{R}^{d \times d}$ is the channel matrix, and $n \in \mathbb{R}^d$ is the additive noise, both of which are stochastic, constantly changing, and unknown. We elaborate further on the channel modeling, and on why we can consider it real-valued, in Appendix A for the interested reader. In federated learning, $x$ may denote the model or its gradients sent over the channel. To remove this impact, every channel element must be analyzed and removed to retrieve the sent information. This analysis is costly in computation and time resources. Thus, our objective here is to study federated learning in wireless environments without wasting such resources. Further, we are interested in exploring the potential of ZO optimization to deal with some of the difficulties demonstrated by FL.

We then consider a federated learning setting where a central server coordinates with $N$ edge devices to solve an optimization problem collaboratively. The data is private to every device, and the exchanges between the server and the devices are restricted to the optimization parameters. To that end, let $\mathcal{N} = \{1, ..., N\}$ be the set of devices and $\theta \in \mathbb{R}^d$ denote the global model. We define $F_i : \mathbb{R}^d \rightarrow \mathbb{R}$ as the loss function associated with the local data stored on device $i$, $\forall i \in \mathcal{N}$.
The objective is to minimize the function $F : \mathbb{R}^d \rightarrow \mathbb{R}$ that is composed of the said devices' loss functions, such that

$$\min_{\theta \in \mathbb{R}^d} F(\theta) := \sum_{i=1}^{N} F_i(\theta) \quad \text{with} \quad F_i(\theta) = \mathbb{E}_{S_i \sim D_i} f_i(\theta, S_i).$$

$S_i$ is an i.i.d. ergodic stochastic process following a local distribution $D_i$. $S_i$ is used to model various stochastic perturbations, e.g., the local data distribution, among others. We further consider the case where the devices do not have access to their gradients, due to computational and communication constraints, and must estimate this gradient by querying their model only once per update. They obtain a scalar value from this query, which they must send back to the server.

1.1 MOTIVATION FOR OUR WORK

In this subsection, we describe the various challenges in FL and how our method differs from previous work in dealing with these challenges.

Communication bottleneck. In general, the main idea of federated learning is that the devices receive the model from the server, make use of their data to compute the gradient, and then send back their gradients without ever disclosing their data. The server then updates the model using the collected and averaged gradients, and the process repeats. Since the gradients have the same dimension as the model, in every uplink step there are $Nd$ values that need to be uploaded, which forms a fundamental communication bottleneck in FL. To deal with this issue, some propose multiple local gradient descent steps to be done by the devices before sending their gradients back to the server to save communication resources (Khaled et al., 2020), or allow partial device participation at every iteration (Chen et al., 2018), or both (McMahan et al., 2017). Others propose lossy compression of the gradient before uploading it to the server. For example, Konečný et al. (2016), Khirirat et al. (2018), and Elgabli et al. (2020) all suggest stochastic unbiased quantization approaches, where gradients are approximated with a finite set of discrete values for efficiency. Mishchenko et al. (2019) propose the quantization of gradient differences between the current and previous iterations, allowing the update to incorporate new information, while Chen et al. (2022) propose the sparsification of this difference. Sparsification means that if a vector component is not large enough, it will not be transmitted.

Channel impact. In federated learning over wireless channels, there is a problem with channel knowledge. When the devices upload their gradient $q \in \mathbb{R}^d$ to the server, the server receives $Hq + n$, as shown in equation (1). Yang et al. (2020), Fang et al. (2022), and all references within assume that the impact of the channel can be removed. However, as the channel matrix $H$ coefficients follow a stochastic process and there are two unknown received entities, the channel $H$ and the gradient, recovering the gradient requires estimating the channel coefficients at each iteration of the FL procedure. This requires computation resources, and more importantly, it requires resources to exchange control/reference signals between the devices and the server at each time/iteration to estimate the channel coefficients $H$. Alternatively, our work offers a much simpler approach. We do not waste resources trying to analyze the channel. We use the channel in the implementation itself. It is part of the learning.
We harness it to construct our gradient estimate without the need to remove its impact, saving both computation and communication resources.

Computation demands. Unlike standard methods that rely on the computational capabilities of participating devices, our approach is less demanding. Devices simply receive the global model, query it with their data, and send back the scalar loss, eliminating the need for "backward pass" computation. Only the "forward pass" is performed.

Black-box optimization in FL. One motivation for employing ZO methods is black-box problems (Fang et al., 2022), where gradient information cannot be acquired or is complicated to compute. For example, in hyperparameter tuning, gradients cannot be calculated, as there is no analytic relationship between the loss function and the hyperparameters (Dai et al., 2020).

1.2 CHALLENGES AND CONTRIBUTION

Addressing nonconvexity in FL is challenging. Our ZO method must handle nonconvexity, noise, and stochasticity efficiently, all of which can slow down convergence in gradient techniques. Additionally, the channel's introduction adds uncertainty and constraints on the number of communication exchanges. We need to ensure consistent and reliable performance, considering unknown probability distributions and fewer function evaluations. Unlike convex cases, nonconvex optimization does not allow easy quantification of optimization progress. Verifying gradient convergence becomes intricate due to biased gradient estimates. Moreover, unbounded variance in one-point estimates can lead to significant gradient deviations. These are the technical and intuitive complexities we navigate.

In this work, we overcome these difficulties and propose a new communication-efficient algorithm in the nonconvex setting. This algorithm differs from the standard gradient method as it entails two reception-update steps instead of one, and it is not a simple extension of FO to ZO where the devices still have to upload their full model/gradient, as is the case in Fang et al. (2022). By limiting the exchange to scalar-valued updates, we counter the communication bottleneck, and we save up to a factor of $O(d)$, in comparison to standard methods, in terms of total exchanges of variables between the devices and the server, saving a lot of execution time and allowing the convergence rate to compete with the standard FO method. We harness the nature of autocorrelated channels for truly "blind" reception of the data. We prove the convergence theoretically with one-point and two-point estimates and provide experimental evidence. An important distinction worth noting is that standard ZO methods establish convergence by focusing on the expected convergence of the exact gradient. In contrast to prior research, our approach goes further in the proof: we demonstrate the convergence of the exact gradient itself almost surely, not solely of its expected value. The key element in this proof is employing Doob's martingale inequality to constrain the stochastic error resulting from estimated gradients. We finally extend the analysis to non-symmetrical channel models, i.e., channels whose coefficients are not zero-mean, and thus provide a practical algorithm for general settings.

2 ALGORITHMS

This section illustrates our proposed zero-order stochastic federated learning algorithms with different gradient estimators (ZOFL).
2.1 THE 1P-ZOFL ALGORITHM

**Algorithm 1** The 1P-ZOFL algorithm

**Input:** Initial model $\theta_0 \in \mathbb{R}^d$, the initial step-sizes $\alpha_0$ and $\gamma_0$, and the channels' standard deviation $\sigma_h$

1: **for** $k = 0, 2, 4, ...$ **do**
2: The server receives $\sum_{j=1}^{N} \left( \frac{h_{j,k}}{\sigma_h} + n_{j,k} \right)$
3: The server broadcasts $\theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} \left( \frac{h_{j,k}}{\sigma_h} + n_{j,k} \right)$ to all devices
4: The server receives $\sum_{i=1}^{N} h_{i,k+1} \tilde{f}_i \left( \theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} \left( \frac{h_{j,k}}{\sigma_h} + n_{j,k} \right), S_{i,k+1} \right) + n_{i,k+1}$
5: The server multiplies the received scalar sum by $\Phi_k$ to assemble $g_k^{(1P)}$ given in (3)
6: The server updates $\theta_{k+1} = \theta_k - \alpha_k g_k^{(1P)}$
7: **end for**

We consider an intermediary wireless environment between the server and each device $i$ for $i \in \mathcal{N}$, as shown in Figure 1. Wireless channels introduce a stochastic scaling on the sent signal, as elaborated in equation (1). As we only send a scalar value over the channel at a time, our channel has only one scalar coefficient in addition to a scalar noise. Channel coefficients are usually autocorrelated from one timeslot to the next. Let $h_{i,k}$ denote the channel scaling affecting the sent signal from device $i$ to the server at timeslot $k$, independent from all other devices' channels. We assume $h_{i,k}$ to be a zero-mean random variable with standard deviation $\sigma_h$, $\forall i \in \mathcal{N}, \forall k \in \mathbb{N}^+$, and $n_{i,k}$ an additive noise on the transmitted signal. Assuming that the channel is time-correlated for two consecutive iterations $k$ and $k + 1$, such that the autocovariance is $\mathbb{E}[h_{i,k}h_{i,k+1}] = K_{hh}$, $\forall i \in \mathcal{N}, \forall k \in \mathbb{N}^+$, we present our first learning method in Algorithm 1.

The devices must carry out two communication steps. In the first, every device sends the value $\frac{1}{\sigma_h}$ to the server. According to equation (1), the server receives $\frac{h_{j,k}}{\sigma_h} + n_{j,k}$ from every device $j$. Hence, it receives the sum in step 2. Afterward, the server uses the values received to adjust the model and broadcasts it to the devices. When device $i$ receives the model, it receives $h_{i,k+1}^{DL} [\theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} (\frac{h_{j,k}}{\sigma_h} + n_{j,k})] + n_{i,k+1}^{DL}$, and to simplify notation, we let the stochastic vector $[h_{i,k+1}^{DL}, n_{i,k+1}^{DL}]$ be included within the big vector $S_{i,k+1}$ of stochastic perturbations. Device $i$ then queries this received model to obtain the stochastic loss $\tilde{f}_i$. The devices then send $\tilde{f}_i$ to the server in the second communication step, and according to equation (1), the server receives the quantity indicated in step 4. Finally, the server assembles the gradient estimate and is able to update $\theta$ according to step 6. All transmissions are subject to channel scaling and additive noise; we designate them by $h$ and $n$ in the device-to-server direction, while in the server-to-device direction they are folded into $S$. We let $\tilde{f}_i = \frac{f_i}{\sigma_h}$ be the normalized loss function and define $\alpha_k$ and $\gamma_k$ as two step-sizes and $\Phi_k \in \mathbb{R}^d$ as a perturbation vector generated by the server that has the same dimension as the model.
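The following toy, single-process simulation sketches Algorithm 1. The quadratic local losses, the AR(1) real-valued channel standing in for autocorrelated fading, the noise levels, and the step-size constants are all illustrative assumptions; downlink perturbations are also omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, sigma_h, rho = 10, 5, 1.0, 0.9  # devices, dimension, channel std, autocorrelation
targets = rng.normal(size=(N, d))     # toy local data: f_i(theta) = ||theta - t_i||^2
theta = rng.normal(size=d)

def f_tilde(th, i):
    return float(np.sum((th - targets[i]) ** 2)) / sigma_h  # normalized loss f_i / sigma_h

h = rng.normal(0.0, sigma_h, size=N)  # channel gains at slot k
for k in range(5000):
    alpha = 0.01 / (1 + k) ** 0.51    # Example 2: v1 = 1/2 + eps
    gamma = 0.50 / (1 + k) ** 0.17    # Example 2: v2 = 1/6 + eps
    Phi = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)      # Example 1 perturbation
    # Step 2: server receives sum_j (h_{j,k}/sigma_h + n_{j,k})
    s = float(np.sum(h / sigma_h + 0.01 * rng.normal(size=N)))
    # Advance the channel with positive autocovariance: E[h_k h_{k+1}] = rho * sigma_h^2
    h = rho * h + np.sqrt(1 - rho ** 2) * rng.normal(0.0, sigma_h, size=N)
    # Steps 3-4: broadcast the perturbed model; receive the channel-scaled scalar losses
    received = sum(h[i] * f_tilde(theta + gamma * Phi * s, i) for i in range(N))
    # Steps 5-6: assemble the one-point estimate and update
    theta -= alpha * (Phi * received)
# One-point estimates are noisy, so convergence is slow -- but each device only
# ever uploads scalar values, never a d-dimensional gradient.
```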
We emphasize here that $g^{(1P)}_k$ (in step 5) is the gradient estimate in this case, and one can see that the impact of the channel is included in the gradient estimate and hence in the learning. The major advantage of this algorithm is that each device sends only two scalar values. This is a stark improvement in communication efficiency over standard federated learning algorithms, which require each device to send back the whole model or local gradient of dimension $d$ — a resource-draining requirement that can be unrealistic to assume possible. We show in the numerical results that there is a considerable delay difference in favor of our method.

2.2 THE 2P-ZOFL ALGORITHM

**Algorithm 2** The 2P-ZOFL algorithm

**Input:** Initial model $\theta_0 \in \mathbb{R}^d$, the initial step-sizes $\alpha_0$ and $\gamma_0$, and the channels' standard deviation $\sigma_h$

1: **for** $k = 0, 2, 4, ...$ **do**
2: The server receives $\sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}$
3: The server broadcasts $\theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}$ and $\theta_k - \gamma_k \Phi_k \sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}$ to all devices under the same stochastic wireless environment
4: The server receives $\sum_{i=1}^{N} h_{i,k+1} \left[ \tilde{f}_i \left( \theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}, S_{i,k+1} \right) - \tilde{f}_i \left( \theta_k - \gamma_k \Phi_k \sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}, S_{i,k+1} \right) \right]$
5: The server multiplies the received scalar sum by $\Phi_k$ to assemble $g^{(2P)}_k$ given in (4)
6: The server updates $\theta_{k+1} = \theta_k - \alpha_k g^{(2P)}_k$
7: **end for**

For our second method, we aim to assemble and optimize with a two-point gradient estimate. Similarly to 1P-ZOFL, there are two communication steps. The only difference is that the server has to adjust the model twice based on the devices' feedback and broadcast the model with both adjustments. We consider that the additive noise is negligible and that the wireless environment is slowly changing. The upload communication efficiency is unaffected by the change of estimate, as the functional difference is still a scalar value.

2.3 THE ESTIMATED GRADIENTS

We provide here an analysis of our ZO gradient estimates. We propose the one-point estimate

$$g^{(1P)}_k = \Phi_k \sum_{i=1}^{N} \left[ h_{i,k+1} \tilde{f}_i \left( \theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} \left( \frac{h_{j,k}}{\sigma_h} + n_{j,k} \right), S_{i,k+1} \right) + n_{i,k+1} \right], \quad (3)$$

where $h_{i,k}$, $h_{i,k+1}$, and the noise remain unknown. This saves computation complexity and is very communication-efficient, as it removes the need to continuously send pilot signals to estimate the channel. In fact, it is unrealistic to assume that the instantaneous channel can be evaluated, as wireless environments typically change every $1-2$ ms. In certain scenarios where the stochastic environment changes more slowly, so that the devices can query two consecutive loss functions under the same circumstances, we can use two-point estimates instead of one-point ones. In other words, whenever the server can broadcast two successive model versions under the same conditions, i.e.,
with the same $S_{i,k+1}$, our estimate can take the following form:

$$g^{(2P)}_k = \Phi_k \sum_{i=1}^{N} h_{i,k+1} \left[ \tilde{f}_i \left( \theta_k + \gamma_k \Phi_k \sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}, S_{i,k+1} \right) - \tilde{f}_i \left( \theta_k - \gamma_k \Phi_k \sum_{j=1}^{N} \frac{h_{j,k}}{\sigma_h}, S_{i,k+1} \right) \right]. \quad (4)$$

The added advantage of two-point estimates is that they increase the convergence rate, as their variance w.r.t. the exact gradient is generally bounded. However, this advantage is only possible if we recognize the added noise at reception as negligible. We next consider the following assumptions on the additive noise, the perturbation vector, and the local loss functions.

**Assumption 1** \( n_{i,k} \) is assumed to be a zero-mean uncorrelated noise with bounded variance, meaning \( E(n_{i,k}) = 0 \) and \( E(n_{i,k}^2) = \sigma_n^2 < \infty, \forall i \in N, \forall k \in \mathbb{N}^+ \). For any timeslot \( k \), \( E(n_{i,k}n_{j,k}) = 0 \) if \( i \neq j \). For any device \( i \), \( E(n_{i,k}n_{i,k'}) = 0 \) if \( k \neq k' \).

**Assumption 2** Let \( \Phi_k = (\phi_{k,1}, \phi_{k,2}, \ldots, \phi_{k,d})^T \). At each iteration \( k \), the server generates its \( \Phi_k \) vector independently from other iterations. In addition, the elements of \( \Phi_k \) are assumed i.i.d. with \( E(\phi_{k,d_1}\phi_{k,d_2}) = 0 \) for \( d_1 \neq d_2 \), and there exists \( \alpha_2 > 0 \) such that \( E(\phi_{k,d_j}^2) = \alpha_2, \forall d_j, \forall k \). We further assume there exists a constant \( \alpha_3 > 0 \) such that \( \| \Phi_k \| \leq \alpha_3, \forall k \).

**Example 1** An example of a perturbation vector satisfying Assumption 2 is picking every dimension of \( \Phi_k \) from \( \left\{ -\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}} \right\} \) with equal probability. Then, \( \alpha_2 = \frac{1}{d} \) and \( \alpha_3 = 1 \).

**Assumption 3** All loss functions \( \theta \mapsto f_i(\theta, S_i) \) are Lipschitz continuous with Lipschitz constant \( L_{S_i} \), i.e., \( |f_i(\theta, S_i) - f_i(\theta', S_i)| \leq L_{S_i} \|\theta - \theta'\|, \forall i \in N \). In addition, \( E_{S_i}[f_i(\theta, S_i)] < \infty, \forall i \in N \).

Let \( H_k = \{\theta_0, S_0, \theta_1, S_1, \ldots, \theta_k, S_k\} \) denote the history sequence; the following two lemmas then characterize our gradient estimates.

**Lemma 1** Let Assumptions 1 and 2 be satisfied and define the scalar values \( c_1 = \alpha_2 \frac{K_{hh}}{\sigma_h} \) and \( c'_1 = 2c_1 \). Then both gradient estimators are biased w.r.t. the objective function's exact gradient \( \nabla F(\theta) \). Concretely, \( E[g^{(1P)}_k | H_k] = c_1 \gamma_k (\nabla F(\theta_k) + b_k) \) and \( E[g^{(2P)}_k | H_k] = c'_1 \gamma_k (\nabla F(\theta_k) + b'_k) \), \( \forall k \in \mathbb{N}^+ \), where \( b_k \) and \( b'_k \) are the bias terms.

**Proof:** Refer to Appendix B.1.

**Lemma 2** Let Assumptions 1 and 3 hold and assume \( \|\theta_k\| < \infty \) almost surely. There exist two bounded constants \( c_2, c'_2 > 0 \) such that \( E[\|g^{(1P)}_k\|^2 | H_k] \leq c_2 \) and \( E[\|g^{(2P)}_k\|^2 | H_k] \leq c'_2 \gamma_k^2 \) almost surely.

**Proof:** Refer to Appendix B.2.

3 CONVERGENCE ANALYSIS

This section analyzes the behavior of our algorithms in the nonconvex setting.
Assuming that a global minimizer \( \theta^* \in \mathbb{R}^d \) exists such that \( \min_{\theta \in \mathbb{R}^d} F(\theta) = F(\theta^*) > -\infty \) and \( \nabla F(\theta^*) = 0 \), we start by introducing a general necessary assumption and two estimate-specific assumptions in the subsections.

**Assumption 4** We assume the existence and the continuity of \( \nabla F_i(\theta) \) and \( \nabla^2 F_i(\theta) \), and that there exists a constant \( \alpha_1 > 0 \) such that \( \|\nabla^2 F_i(\theta)\|_2 \leq \alpha_1, \forall i \in N \).

**Lemma 3** By Assumption 4, we know that the objective function \( \theta \mapsto F(\theta) \) is \( L \)-smooth for some positive constant \( L \), i.e., \( \|\nabla F(\theta) - \nabla F(\theta')\| \leq L \|\theta - \theta'\|, \forall \theta, \theta' \in \mathbb{R}^d \), or equivalently, \( F(\theta) \leq F(\theta') + \langle \nabla F(\theta'), \theta - \theta' \rangle + \frac{L}{2} \|\theta - \theta'\|^2 \).

**Lemma 4** By Assumptions 1, 2 and 4, we can find two scalar values \( c_3, c'_3 > 0 \) such that \( \|b_k\| \leq c_3 \gamma_k \) and \( \|b'_k\| \leq c'_3 \gamma_k \).

**Proof:** Refer to Appendix B.3.

3.1 1P-ZOFL CONVERGENCE

As we deal with stochastic environments, we inevitably analyze the expectation over all possible variable outcomes. From Lemma 1, we see that, in expectation, our estimator deviates from the gradient direction by the bias term. To ensure that these terms do not grow larger, and preferably grow smaller as the algorithms evolve, we impose that \( \gamma_k \) vanishes. Additionally, to ensure that the expected norm squared of the estimator, as bounded in Lemma 2, does not accumulate residual constant terms, we impose that the step size \( \alpha_k \) vanishes. The series properties in the following assumption come from the recursive analysis of the algorithm.

**Assumption 5** Both step sizes \( \alpha_k \) and \( \gamma_k \) vanish to zero as \( k \to \infty \), and the following series composed of them satisfy the convergence assumptions \( \sum_{k=0}^{\infty} \alpha_k \gamma_k = \infty \), \( \sum_{k=0}^{\infty} \alpha_k \gamma_k^3 < \infty \), and \( \sum_{k=0}^{\infty} \alpha_k^2 < \infty \).

**Example 2** To satisfy Assumption 5, we consider the following form of the step sizes: \( \alpha_k = \alpha_0 (1 + k)^{-v_1} \) and \( \gamma_k = \gamma_0 (1 + k)^{-v_2} \) with \( v_1, v_2 > 0 \). Then, it is sufficient to find \( v_1 \) and \( v_2 \) such that \( 0 < v_1 + v_2 \leq 1 \), \( v_1 + 3v_2 > 1 \), and \( v_1 > 0.5 \).

We next define the stochastic error \( e_k^{(1P)} \) as the difference between the value of a single realization of \( g_k^{(1P)} \) and its conditional expectation given the history sequence, i.e., \( e_k^{(1P)} = g_k^{(1P)} - \mathbb{E}[g_k^{(1P)} | H_k] \). The study of this noise and how it evolves is essential for the analysis of the algorithm, as it gives access to the exact gradient when examining the algorithm's convergence behavior and permits us to prove that, in fact, the exact gradient converges to zero, and not just the expectation of the exact gradient. This is a stronger convergence property, and to the best of our knowledge it has not been established before in ZO nonconvex optimization. The trick is to show that \( e_k^{(1P)} \) is a martingale difference sequence and to apply Doob's martingale inequality to derive the following lemma.
**Lemma 5** If all Assumptions 1-5 hold and \( \| \theta_k \| < \infty \) almost surely, then for any constant \( \nu > 0 \), we have \( \lim_{K \to \infty} \mathbb{P}(\sup_{K' \geq K} \| \sum_{k=K}^{K'} \alpha_k e_k^{(1P)} \| \geq \nu ) = 0 \).

*Proof:* Refer to Appendix C.1.

The smoothness inequality allows for the first main result, leading to the second in the following theorem.

**Theorem 1** When Assumptions 1-5 hold and given \( H_k \), we have \( \sum_k \alpha_k \gamma_k \| \nabla F(\theta_k) \|^2 < +\infty \) and \( \lim_{k \to \infty} \| \nabla F(\theta_k) \| = 0 \) almost surely, meaning that the algorithm converges.

*Proof:* Refer to Appendix C.2.

Proof sketch: We substitute the algorithm's updates in the second inequality of Lemma 3 and replace the estimate by its expectation and stochastic error. We then perform a recursive addition over the iterations \( k > 0 \). With Lemma 5, the conditions on the step sizes, and the upper bound on the estimate's squared norm, we are able to find an upper bound on \( \sum_k \alpha_k \gamma_k \| \nabla F(\theta_k) \|^2 \) as \( k \) grows to \( \infty \). The next step is to consider the hypothesis \( \limsup_{k \to \infty} \| \nabla F(\theta_k) \| \geq \rho \), for \( \rho > 0 \), and prove that it contradicts the first result.

Define \( \delta_k = F(\theta_k) - F(\theta^*) \). We next find an upper bound on the convergence rate of Algorithm 1.

**Theorem 2** Consider, in addition to the assumptions in Theorem 1, that the step sizes are those of Example 2, with \( v_3 = v_1 + v_2 < 1 \). Then, we can write

\[
\frac{\sum_k \alpha_k \gamma_k \mathbb{E}[\| \nabla F(\theta_k) \|^2]}{\sum_k \alpha_k \gamma_k} \leq \frac{(1 - v_3)}{(K + 2)^{1-v_3} - 1} \left( \frac{2\delta_0}{c_1 \alpha_0 \gamma_0} + \frac{c_2^2 \gamma_0^2 (v_1 + 3v_2)}{v_1 + 3v_2 - 1} + \frac{2c_2 L \alpha_0 v_1}{c_1 \gamma_0 (2v_1 - 1)} \right). \quad (5)
\]

*Proof:* Refer to Appendix C.3.

In Theorem 2, we see that the optimal choice of the exponents for the time-varying component \( O\left(\frac{1}{K^{1-v_1-v_2}}\right) \) is \( v_1 = \frac{1}{2} \) and \( v_2 = \frac{1}{6} \), for a rate of \( O\left(\frac{1}{\sqrt[3]{K}}\right) \). However, to prevent the constant component from growing too large, it is recommended to choose slightly larger exponents of \( v_1 = \frac{1}{2} + \epsilon \) and \( v_2 = \frac{1}{6} + \epsilon \), where \( \epsilon \) is a small strictly positive value. This results in a rate of \( O\left(\frac{1}{K^{1/3 - 2\epsilon}}\right) \).

3.2 2P-ZOFL CONVERGENCE

Similarly to the previous subsection, we introduce an assumption regarding step sizes. The only difference comes from the fact that the upper bound on the expected norm squared of the estimate scales as $\gamma_k^2$ in Lemma 2. While $\alpha_k$ no longer needs to vanish on its own, this does not affect the convergence rate later, so we keep the same formulation.

**Assumption 6** Both $\alpha_k \to 0$ and $\gamma_k \to 0$ as $k \to \infty$. Besides, $\sum_{k=0}^{\infty} \alpha_k \gamma_k = \infty$, $\sum_{k=0}^{\infty} \alpha_k \gamma_k^3 < \infty$, and $\sum_{k=0}^{\infty} \alpha_k^2 \gamma_k^2 < \infty$.

**Example 3** Consider the same form as in Example 2, $\alpha_k = \alpha_0 (1+k)^{-v_1}$ and $\gamma_k = \gamma_0 (1+k)^{-v_2}$, with $v_1, v_2 > 0$. To satisfy Assumption 6, find $v_1$ and $v_2$ such that $0 < v_1 + v_2 \leq 1$, $v_1 + 3v_2 > 1$, and $v_1 + v_2 > 0.5$.

**Lemma 6** Similarly, let $e_k^{(2P)} = g_k^{(2P)} - E[g_k^{(2P)} | H_k]$.
If Assumptions 1, 4, and 6 hold and $\|\theta_k\| < \infty$ almost surely, then for $\nu > 0$, we have $\lim_{K \to \infty} P(\sup_{K' \geq K} \| \sum_{k=K}^{K'} \alpha_k e_k^{(2P)} \| \geq \nu) = 0$.

*Proof:* Refer to Appendix D.1.

**Theorem 3** When Assumptions 1, 4, and 6 hold, we have $\sum_k \alpha_k \gamma_k \| \nabla F(\theta_k) \|^2 < +\infty$ and $\lim_{k \to \infty} \| \nabla F(\theta_k) \| = 0$ given $H_k$, almost surely, meaning that the algorithm converges.

*Proof:* Refer to Appendix D.2.

**Theorem 4** In addition to the assumptions of Theorem 3, let the step sizes have the form of Example 3 with $v_3 = v_1 + v_2 < 1$. Then,

$$
\frac{\sum_k \alpha_k \gamma_k E[\|\nabla F(\theta_k)\|^2]}{\sum_k \alpha_k \gamma_k} \leq \frac{(1 - v_3)}{(K + 2)^{1-v_3} - 1} \left( \frac{2\delta_0}{c_1 \alpha_0 \gamma_0} + \frac{c_2^2 \gamma_0^2 (v_1 + 3v_2)}{v_1 + 3v_2 - 1} + \frac{2c_2 L \alpha_0 \gamma_0 v_3}{c_1 (2v_3 - 1)} \right). \quad (6)
$$

*Proof:* Refer to Appendix D.3.

In Theorem 4, the best choice of exponents is $v_1 = v_2 = \frac{1}{4}$, which allows a rate of $O\left(\frac{1}{\sqrt{K}}\right)$. To avoid the constant part growing too large, we pick an arbitrarily small $\epsilon > 0$ and set $v_1 = v_2 = \frac{1}{4} + \frac{\epsilon}{2}$, for a rate of $O\left(\frac{1}{K^{\frac{1}{2}-\epsilon}}\right)$.

3.3 NON-SYMMETRICAL CHANNELS CASE

Assuming a non-symmetrical channel model with $E[h_{i,k}] = \mu_h$ and $\sigma_h^2 = E[h_{i,k}^2] - \mu_h^2$, $\forall i, \forall k$, we show in Appendix E how our gradient estimates and algorithms can be adjusted to account for this case. In fact, non-symmetrical channel models (e.g., Rician) offer a simplification of both analysis and implementation in comparison to symmetrical models (e.g., Rayleigh), as the non-zero mean no longer cancels out the gradient, and the design is further independent of the autocorrelation of the channels. However, with this study, we provide a generalized solution that encompasses any channel model.

4 EXPERIMENTAL RESULTS

For our experimental results, we ran our simulations on servers offered by our university with a Slurm workload manager. Our resources include 32 CPUs and 80 GB of memory over a cpu_long partition. All our code is run in a Conda (Anaconda, https://anaconda.org) virtual environment using PyTorch (version 2.0.0) (Paszke et al., 2022) as the main library, and all datasets are accessed via Torchvision (Marcel et al., 2022). We test our algorithms on nonconvex binary image classification problems, and we compare them against the original federated learning algorithm FedAvg (McMahan et al., 2017) with exact gradients and one local update per round. However, we do not consider the effect of the channel or any noise/stochasticity for the FedAvg algorithm. All experiments are done with 100 devices and data batches of 10 images per user per round. Every communication round in the graphs includes all of steps 2 through 6 for both Algorithms 1 and 2 (a sketch of the step-size schedules suggested by the theory is given below).

For the first example, we classify photos of the two digits "0" and "1" from the MNIST dataset (LeCun & Cortes, 2005) using a nonconvex logistic regression model with a regularization parameter of 0.001. All images are divided equally among the devices and are considered to be preprocessed by being compressed to dimension $d = 10$ using a lossy autoencoder. We run our code over 50 simulations with different random model initializations, testing the accuracy at every iteration against an independent test set.
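For reference, the step-size schedules suggested by Theorems 2 and 4 can be generated as follows; the constants `alpha0`, `gamma0`, and `eps` are illustrative assumptions, and the values actually used in our experiments are reported in Appendix E:

```python
def zofl_step_sizes(k, alpha0=0.1, gamma0=0.5, eps=0.01, two_point=False):
    """Step-size schedules satisfying Assumptions 5/6 via Examples 2 and 3.

    One-point (Theorem 2): v1 = 1/2 + eps, v2 = 1/6 + eps.
    Two-point (Theorem 4): v1 = v2 = 1/4 + eps/2.
    """
    if two_point:
        v1 = v2 = 0.25 + eps / 2
    else:
        v1, v2 = 0.5 + eps, 1.0 / 6.0 + eps
    return alpha0 * (1 + k) ** (-v1), gamma0 * (1 + k) ** (-v2)
```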
The graphs in Figure 2 are averaged over all these simulations. For the non-IID data distribution, we first sort the images according to their labels and then divide them among the devices. While the effect of the theoretical convergence rate is clearly visible, both of our algorithms perform consistently well under all the different random variations influencing every simulation. Considering a non-IID data distribution seems to slow down both our algorithms slightly, without a major effect on the final result.

For the second example, we classify photos of "shirts" and "sneakers" from the FashionMNIST dataset (Xiao et al., 2017) using a multilayer perceptron with an input layer of 784 units and 2 hidden layers with 200 units each, using ReLU activations and a final sigmoid activation (197602 parameters). We run our code over 30 simulations with different random model initializations and average the resulting accuracy against an independent test set. The non-IID distribution is generated as in the previous example. Similarly to McMahan et al. (2017), we plot each curve by taking the best value of test-set accuracy achieved over all prior rounds. The results are shown in Figure 3. While 1P-ZOFL takes longer to converge, 2P-ZOFL performs fairly well. The main point is that to converge, FedAvg requires 300 communication rounds while 2P-ZOFL requires 2000. However, by 300 rounds, each device will have uploaded $197602 \times 300 = 59280600$ scalar values to the server versus $2000 \times 2 = 4000$, i.e., FedAvg uploads roughly $14820\times$ more data per user. As wireless capacity is limited and there are other users using the medium, we can only send a certain amount of information per second. In a worst-case scenario where a scalar value needs one second to be uploaded, FedAvg would thus require roughly four orders of magnitude more upload time than 2P-ZOFL. It is true that 2P-ZOFL's convergence rate is smaller, but given the limited capacity of the wireless link explained above, that does not mean it is slower in wall-clock time. We provide a quantitative comparison with another algorithm encompassing communication-efficient strategies (local SGD and partial device participation) in Appendix E.3. We provide all experimental details and parameter choices, alongside an extra analysis of our algorithm's performance relating to its independence of the noise variance, in Appendix E.

Figure 2: Accuracy evolution of 1P-ZOFL, 2P-ZOFL, and FedAvg for IID and non-IID data distributions in the logistic regression model.

Figure 3: Accuracy evolution of 1P-ZOFL, 2P-ZOFL, and FedAvg for IID and non-IID data distributions in the second training example.

5 CONCLUSION

This work considers a learning problem over wireless channels and proposes a new zero-order federated learning method with one-point and two-point gradient estimators. We limit the communication to scalar-valued feedback from the devices and incorporate the wireless channel into the learning algorithm. We provide theoretical and experimental evidence for convergence and find an upper bound on the convergence rate.

REPRODUCIBILITY STATEMENT

We have made diligent efforts to enhance the reproducibility of our research findings. In the main text and the accompanying appendix, we have provided comprehensive details of our experimental procedures, data preprocessing steps, and mathematical proofs to facilitate the replication of our work.
All the datasets utilized in our experiments have been cited with references to their sources, and a complete description of the data processing steps is provided in the appendix. We are committed to transparency and encourage readers to refer to the relevant sections of this paper and the appendix for a detailed account of our methodology and data to facilitate reproducibility.

REFERENCES

Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, 2010.

Mohammad Mohammadi Amiri and Deniz Gündüz. Federated learning over wireless fading channels. IEEE Transactions on Wireless Communications, 19(5):3546–3557, 2020. doi: 10.1109/TWC.2020.2974748.

Anaconda, Inc. Anaconda software distribution. https://www.anaconda.com/, 2022.

Emil Björnson and Luca Sanguinetti. Making cell-free massive MIMO competitive with MMSE processing and centralized implementation. IEEE Transactions on Wireless Communications, 19(1):77–90, 2020. doi: 10.1109/TWC.2019.2941478.

Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloé Kiddon, Jakub Konečný, Stefano Mazzocchi, Brendan McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. Towards federated learning at scale: System design. In A. Talwalkar, V. Smith, and M. Zaharia (eds.), Proceedings of Machine Learning and Systems, volume 1, pp. 374–388, 2019. URL https://proceedings.mlsys.org/paper_files/paper/2019/file/bd686fd640be98efaae0091fa301e613-Paper.pdf

Tianyi Chen, Georgios Giannakis, Tao Sun, and Wotao Yin. LAG: Lazily aggregated gradient for communication-efficient distributed learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/feecee9f1643651799ede2740927317a-Paper.pdf

Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, and David Cox. ZO-AdaMM: Zeroth-order adaptive momentum method for black-box optimization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/576d026223582a390cd323bef4bad026-Paper.pdf

Yicheng Chen, Rick S. Blum, Martin Takáč, and Brian M. Sadler. Distributed learning with sparsified gradient differences. IEEE Journal of Selected Topics in Signal Processing, 16(3):585–600, 2022. doi: 10.1109/JSTSP.2022.3162989.

Zhongxiang Dai, Bryan Kian Hsiang Low, and Patrick Jaillet. Federated Bayesian optimization via Thompson sampling. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 9687–9699. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/6dfe08eda761bd321f8a9b239f6f4ec3-Paper.pdf

Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, and Ruchir Puri. Model agnostic contrastive explanations for structured data, 2019.

Joseph L. Doob. Stochastic Processes. Wiley, 1953.
qcigbR1UYA
Did the authors consider extending the work to more general tests where a test consists of passing $Y$ through a noisy channel of input alphabet $\mathcal{Y}$ (the set of possible values of $Y$) and of output alphabet \{0,1\}?
Performance Bounds for Active Binary Testing with Information Maximization

Anonymous authors
Paper under double-blind review

Abstract

In many applications like experimental design, group testing, medical diagnosis, and active testing, the state of a random variable $Y$ is revealed by successively observing the outcomes of binary tests about $Y$, where new tests are selected adaptively based on the history of outcomes observed so far. If the number of states of $Y$ is finite, the process ends when $Y$ can be predicted with a desired level of confidence or all available tests have been used. Finding the strategy that minimizes the expected number of tests needed to predict $Y$ is virtually impossible in most real applications due to high dimensions. Therefore, the commonly used strategy is the greedy heuristic of information maximization that selects tests sequentially in order of information gain. However, this can be far from optimal for certain families of tests. In this paper, we argue that in most practical settings, for a given set of tests, there exists a $0 \ll \delta \ll \frac{1}{2}$ such that in every iteration of the greedy strategy, the selected binary test will have conditional probability of being 'true', given the history, within $\delta$ units of one-half. Under this assumption, we first study the performance of the greedy strategy for the simpler case of oracle tests, that is, when all tests are functions of $Y$, and obtain tighter bounds than previously reported in the literature. Subsequently, under the same assumption, we extend our analysis to incorporate noise in the test outcomes. In particular, we assume the outcomes are corrupted through a binary symmetric channel and obtain bounds on the expected number of tests needed to make accurate predictions.

1 INTRODUCTION

Many applications of machine learning in science and engineering can be posed as an active testing problem of sequentially carrying out tests to predict a target variable $Y$ such that the expected number of tests needed is minimized. Perhaps the simplest example is the classical parlor game "twenty questions", where the objective might be to identify a famous person one player thinks of (the $Y$ in this case) by asking the minimum number of questions about $Y$ on average, where each of these questions can be viewed as a test about $Y$.¹ Other examples include Bayesian optimal experimental design (Lindley, 1956), sensor fault detection (Zheng et al., 2012) and medical diagnosis (Peng et al., 2018). Since computing the optimal sequence of tests for such scenarios is NP-complete in general (Hyafil & Rivest, 1976), the "greedy" heuristic of choosing tests in each iteration that reduce the uncertainty about $Y$ the most, given the outcomes observed so far, is commonly employed in practice. More precisely, this is mathematically equivalent to choosing the test whose outcome has maximum mutual information with $Y$ given the sequence of test outcomes observed so far, and is popularly known as the Information Maximization (InfoMax) algorithm, which has found numerous uses in recent applications (Geman & Jedynak, 1996; Sznitman & Jedynak, 2010; Branson et al., 2014; Geman et al., 2015; Ma et al., 2018; Foster et al., 2019; Cuturi et al., 2020; He et al., 2022; Chattopadhyay et al., 2022). Given the natural intuition behind InfoMax, one might ask how efficient this greedy heuristic is in practice.

¹For example, one possible question could be "Is $Y$ still alive?"
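As a toy illustration of the greedy selection rule just described, one InfoMax iteration for a finite $Y$ and a fixed set of oracle tests might look as follows; the array representation of the tests is an assumption made for the sketch:

```python
import numpy as np

def infomax_step(posterior, tests):
    """Greedy InfoMax selection for oracle (noise-free) binary tests.

    posterior: (K,) array, current P(Y = y | history).
    tests: (M, K) boolean array; tests[m, y] is the outcome of test m when Y = y.
    For oracle tests H(T_m | Y) = 0, so the mutual information I(T_m; Y) reduces
    to the outcome entropy H(T_m); the most informative test is the one whose
    probability of being 'true' is closest to one-half.
    """
    p = np.clip(tests @ posterior, 1e-12, 1 - 1e-12)      # P(T_m = 1 | history)
    gains = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # H(T_m) = I(T_m; Y)
    return int(np.argmax(gains))

def update_posterior(posterior, test_row, outcome):
    """Condition the posterior over Y on the observed oracle test outcome."""
    new = posterior * (test_row == outcome)
    return new / new.sum()
```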
However, despite its popularity, theoretical guarantees about the performance of the InfoMax algorithm are scarce (Chen et al., 2015). In this paper, we analyze the InfoMax algorithm for binary tests and derive bounds on its performance. Throughout this paper, by performance we mean the expected number of tests needed to make accurate predictions. If one has access to all possible binary functions of $Y$ as tests, then it is known that the performance of the greedy strategy is upper bounded by $H(Y) + 1$ (Garey & Graham [1974]), where $H(Y)$ denotes the entropy of $Y$. This is nearly optimal, since $H(Y)$ is a lower bound on the best possible performance (Shannon [1948]). Unfortunately, for scenarios where one has access to only a restricted set of functions of $Y$, Loveland [1985] illustrated that it is possible to construct binary active testing problems for which, given a set of tests $\mathcal{T}$, the greedy strategy requires at least $\frac{|Y|}{16} \times \text{opt}(\mathcal{T}, Y)$ tests to identify $Y$, where $|Y|$ is the number of values $Y$ can take and $\text{opt}(\mathcal{T}, Y)$ is the performance of the optimal (not necessarily greedy) strategy for identifying $Y$ given $\mathcal{T}$. Thus, as $|Y|$ gets large, the greedy strategy can obtain dismal results when compared with the optimal strategy. In light of this result, how is it that the greedy strategy is one of the most popular heuristics used for the sequential selection of tests in practical applications?

In this paper, we argue that the competitive performance of the greedy strategy that is often observed in practice can be attributed to a property of the set of available tests $\mathcal{T}$ that we call $\delta$-unpredictability. A set of tests is $\delta$-unpredictable if, in every iteration of the greedy strategy, the selected test has conditional probability of being 'true', given the history of test outcomes observed so far, within $\frac{1}{2} \pm \delta$, unless the posterior over $Y$ given the history of outcomes observed so far is sufficiently peaked, in which case the algorithm terminates. While taking $\delta = \frac{1}{2}$ makes any given $\mathcal{T}$ trivially $\delta$-unpredictable, we observe that in many practical applications the given set of tests is $\delta$-unpredictable for modest values of $0 < \delta < \frac{1}{2}$. For example, Figure 1 shows results on two machine learning datasets, namely CUB-200 (Wah et al. [2011]) and AwA2 (Xian et al. [2018]), on which carrying out information maximization always finds a test within $\delta$ units of one-half in every iteration for every datapoint, with $\delta = 0.22$ and $\delta = 0.17$, respectively. More details are given in Appendix §A.3. Similarly, Geman et al. [2015] employed a $\delta$-unpredictable $\mathcal{T}$ for visual scene annotation (in terms of objects in the scene, their attributes and relationships) and showed that $\delta = 0.15$ works. Inspired by these observations, we study the performance of the greedy strategy when $\mathcal{T}$ is $\delta$-unpredictable for some $\delta \in [0, \frac{1}{2}]$. In the extreme case where $\delta = 0$, we have bisecting tests at each iteration. If we further assume the tests are functions of $Y$, then the set of possible values $Y$ can take, referred to as the active set, is effectively halved at each iteration depending on the test outcome.
This is akin to binary search, which is known to converge in $H(Y)$ iterations (Flores & Madpis, 1971). On the other extreme, when $\delta = \frac{1}{2}$, the greedy strategy is allowed to pick a test that is deterministic given the history; in other words, tests with conditional probability of being true equal to 0 or 1 may be selected, which would lead to no reduction of the current active set. Our contribution is to study what happens in the middle, say when $\delta \approx 0.25$. We first study the simpler case of oracle tests, that is, when all tests in $\mathcal{T}$ are functions of $Y$, and bound the performance of the greedy strategy by $\frac{H(Y)}{-\log_2(\frac{1}{2} + \delta)}$, which immediately improves upon bounds previously reported in the literature (Garey & Graham, 1974; Loveland, 1985; Dasgupta, 2004; Kosaraju et al., 1999). Building on this, we extend our analysis and present our main result on the algorithm's performance under noisy tests. In particular, we assume the test outcomes are corrupted by a binary symmetric channel and obtain bounds on the expected number of tests needed to make accurate predictions. The analysis in the noisy case is more involved since the test outcomes, by virtue of noise, no longer constrain the set of possible values $Y$ can take. In summary, our main contributions are the following.

• We first study the oracle case where tests are functions of $Y$. Assuming the given set of tests, $\mathcal{T}$, is $\delta$-unpredictable for some $\delta \in [0, \frac{1}{2}]$, we prove that the greedy strategy needs at most $\frac{H(Y)}{-\log_2(\frac{1}{2} + \delta)}$ tests on average to identify (predict) $Y$. To the best of our knowledge, this is the first bound on the performance of the greedy strategy that explicitly depends on the entropy of $Y$. This is desirable since a lower bound on the average number of tests needed for any given $\mathcal{T}$ is given by the entropy of $Y$ (Shannon, 1948). Moreover, we show that our bound is tighter than previously known bounds for oracle tests in practically relevant settings.

• We then extend our analysis to the noisy case where we assume that test outcomes are corrupted via a binary symmetric channel. We obtain an upper bound on the performance of the greedy strategy that explicitly depends on $\delta$ and the noise level. Specifically, our bound in this case is again within a constant factor of the entropy of $Y$ modulo an additional term, where the constant factor and the additional term depend on $\delta$ and the noise level. To the best of our knowledge, this is the first such result for the greedy strategy given noisy tests.

2 RELATED WORK

Information Maximization (InfoMax) is a popular heuristic for sequentially selecting tests to make accurate predictions, which has been widely adopted across various fields under different names. One of the first proposals of this algorithm was in the context of optimal experimental design by Lindley (1956), where tests correspond to experiments one can carry out to gather information about $Y$. Consequently, this algorithm has been proposed under various names such as the Probabilistic Bisection Method (Horstein, 1963), Splitting Algorithm (Garey & Graham, 1974), Entropy Testing (Geman & Jedynak, 1996), Information Gain (for decision tree induction) (Breiman et al., 1984), Generalized Binary Search (Dasgupta, 2004), and Information Pursuit (Jahangiri et al., 2017).
Inspired by its empirical success, there is a fifty-year lineage of scattered work on the performance of this "greedy" strategy. We begin by reviewing works studying the oracle case, where tests are functions of $Y$, and conclude by mentioning recent efforts towards analyzing the more general case where test outcomes are corrupted by noise.

Oracle tests. Shannon (1948) showed that when $\mathcal{T}$ is complete (that is, we have a test for every function of $Y$), the greedy strategy requires at most one test more than the optimal strategy on average. This result was extended by Sandelius (1961), who showed that greedy is in fact optimal when $Y$ is uniformly distributed. Usually, for practical applications, $\mathcal{T}$ will almost always be incomplete. For example, in the popular "twenty questions" parlor game involving famous people, we cannot test whether $Y$ is in every possible subset of famous people using questions about the presence or absence of single human attributes like "writer", "female", "living", "French", etc. Subsequently, Kosaraju et al. (1999) and Dasgupta (2004) proved that in the case of incomplete tests, the greedy strategy requires at most $O\left(\log \frac{1}{\min_{y \in \mathcal{Y}} P(Y = y)} \times \text{opt}(\mathcal{T}, Y)\right)$ queries on average. Here $\text{opt}(\mathcal{T}, Y)$ is, as defined in the Introduction, the performance of the optimal strategy for identifying $Y$. This generic bound is often vacuous (too loose) in practice, as we also show empirically in the appendix (see §A.5). The idea of assuming the existence of $\delta$-unpredictable tests in each iteration of the greedy strategy was considered in earlier work (Garey & Graham, 1974; Loveland, 1985). However, their analysis technique is significantly different from ours and results in an upper bound of $\frac{\log_2 |\mathcal{Y}|}{-(\frac{1}{2} - \delta)\log_2(\frac{1}{2} - \delta)} + \frac{1+2\delta}{1-2\delta}$, which is typically larger (i.e., looser) than ours. See §4.2 for an extended discussion comparing these bounds with our proposed bound.

Noisy tests. This refers to the situation where the tests $\mathcal{T}$ are not determined by $Y$, that is, the entropy $H(T \mid Y)$ is positive. Unlike the oracle case, the performance of the greedy strategy in this case is sparsely explored. It is known that InfoMax is optimal in the restricted case where $Y \in \mathbb{R}$ and $\mathcal{T}$ is a set of noisy indicator functions for all possible finite unions of intervals along the real line (Jedynak et al., 2012). More general results are obtained by reducing the noisy case to the oracle case. For instance, Nowak (2008) assumed that the tests are "repeatable", that is, any given test can be independently replicated any number of times to obtain the true outcome (de-noise) with high probability. Thus, by repeating the same test multiple times, its outcome can be made deterministic given $Y$ (with high confidence), and the results discussed for the oracle case apply with an additional cost for repeating the test. However, this is not very realistic since in practice we rarely have access to "repeatable" tests. Golovin et al. (2010) analyzed greedy active learning algorithms in the presence of noise by considering the tests to be functions of $Y$ and some noise variable $\eta$ with known joint distribution $P(Y, \eta)$, and thereafter applied the bounds known from the oracle case. Finally, Chen et al. (2015) explored the near-optimality of information maximization for the more practical scenario where noise is persistent, that is, tests are not "repeatable".
Compared to our work, Chen et al. (2015) study the setting "What is the maximum amount of mutual information one can obtain about $Y$ by carrying out $k$ tests following the greedy strategy?", whereas we are interested in bounding the mean number of tests required to achieve a desired level of accuracy.

3 Problem Setting and Preliminaries

As is common convention, we will use capital letters for random variables and lowercase letters for their realizations. We will use the symbol $\mathbb{P}(\mathcal{E})$ to denote the probability of event $\mathcal{E}$. Moreover, we will often refer to the Information Maximization (InfoMax) algorithm simply as the greedy strategy.

Information maximization. InfoMax (Geman & Jedynak, 1996) is a greedy strategy for selecting tests sequentially in order of information gain. More formally, let $Y$ be a discrete random variable taking values in $\mathcal{Y}$ and let $\mathcal{T}$ be a given finite set of available tests, whose outcomes are informative about the value of $Y$. All random variables (the tests in $\mathcal{T}$ and $Y$) are defined on a common sample space $\Omega$. Given this setup, for any collection of tests (binary, noisy or otherwise), the InfoMax algorithm proceeds iteratively as follows:

$$T_1 = \arg\max_{T \in \mathcal{T}} I(T; Y); \quad T_{k+1} = \arg\max_{T \in \mathcal{T}} I(T; Y \mid A(t_{1:k})). \quad (1)$$

Here $T_{k+1} \in \mathcal{T}$ refers to the new test selected by InfoMax at step $k + 1$, based on the history of outcomes of previously asked tests (denoted as $t_{1:k}$), and $t_{k+1} \in \{0, 1\}$ indicates the corresponding outcome of the test asked in iteration $k + 1$. The conditioning event $A(t_{1:k})$ is defined as the event $\{\omega \in \Omega : T_i(\omega) = t_i, \; i \in \{1, 2, \ldots, k\}\}$, where $t_i$ is the observed outcome of carrying out test $T_i$. We refer to these events as active sets. We will use the concept of active sets in our analysis of InfoMax. The algorithm terminates after $L$ iterations if either $\max_{y \in \mathcal{Y}} \mathbb{P}(Y = y \mid A(t_{1:L})) > \gamma$ (a hyperparameter that can be interpreted as the desired accuracy level) or after all tests have been carried out. Refer to Figure 5 in the appendix for a flowchart diagram illustrating the InfoMax algorithm.
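To make the iteration above concrete, the following is a minimal sketch (not the paper's implementation) of the greedy loop for the oracle case analyzed in §4, where carrying out a test simply discards the states of the active set inconsistent with the observed outcome. The array layout and the simulated hidden state `y_true` are illustrative assumptions.

```python
import numpy as np

def infomax_oracle(prior, tests, y_true, gamma=1.0):
    """Greedy InfoMax for oracle (noiseless, binary) tests.

    prior  : (|Y|,) array, the distribution P(Y = y).
    tests  : (num_tests, |Y|) boolean array, tests[j, y] = T_j(y).
    y_true : index of the hidden state whose test outcomes we simulate.
    gamma  : terminate once max_y P(Y = y | A(t_{1:k})) exceeds gamma.
    """
    active = np.ones(len(prior), dtype=bool)      # active set A(t_{1:k})
    remaining = list(range(len(tests)))
    num_asked = 0
    while remaining:
        post = prior * active
        post = post / post.sum()                  # posterior P(Y | A(t_{1:k}))
        if post.max() >= gamma:
            break                                 # termination criterion
        # For oracle tests, maximizing I(T; Y | A) = H(T | A) amounts to
        # picking the test whose success probability is closest to 1/2 (see §4).
        j = min(remaining, key=lambda i: abs(post[tests[i]].sum() - 0.5))
        t = tests[j][y_true]                      # observed outcome T_j(Y)
        active &= (tests[j] == t)                 # discard inconsistent states
        remaining.remove(j)
        num_asked += 1
    return num_asked, int(np.argmax(prior * active))
```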
Having described the InfoMax algorithm, we next define the $(\delta, \gamma)$-unpredictable set of tests, which encapsulates our assumption of the existence of approximately bisecting tests, as discussed in the Introduction.

Unpredictable set of tests. As motivated in the Introduction, there exist scenarios where the greedy strategy can perform poorly compared to the optimal strategy. This calls for some assumptions on $\mathcal{T}$ to ensure the good performance of the greedy strategy that is often observed in practical sequential testing problems. In this work, we assume that at each iteration of the greedy algorithm there exists a test that $\delta$-approximately bisects the current active set. Formally,

Definition 1. [$(\delta, \gamma)$-unpredictable set of tests] A set of tests $\mathcal{T}$ is said to be $(\delta, \gamma)$-unpredictable if at any iteration $k + 1$ of InfoMax (assuming there remain tests in $\mathcal{T}$ that have not yet been carried out), either

• The probability of the mode of the posterior is greater than or equal to $\gamma$, i.e., $\max_y \mathbb{P}(Y = y \mid A(t_{1:k})) \geq \gamma$; or

• There exists a test $T_{k+1} \in \mathcal{T}$ such that

$$\left|\mathbb{P}(T_{k+1} = 1 \mid A(t_{1:k})) - \frac{1}{2}\right| \leq \delta, \quad (2)$$

where $t_{1:k}$ denotes the history of test outcomes after $k$ iterations.

The $\gamma$ parameter is user-defined and controls the termination criterion for the greedy strategy. In the extreme case where we require $Y$ to be identifiable, $\gamma = 1$. For simplicity, in such scenarios, we will drop $\gamma$ from the notation and refer to such sets as $\delta$-unpredictable sets of tests, implicitly meaning that the algorithm terminates only when $Y$ is identified or all tests in $\mathcal{T}$ have been carried out. We will further discuss the motivation for this definition and how it helps us bound the performance of the greedy strategy in the subsequent sections.

²The word unpredictable comes from the fact that if a test $T' \in \mathcal{T}$ at iteration $k$ exactly bisects the current active set, then one cannot predict the outcome of $T'$ based on the history of test outcomes observed up till the first $k - 1$ iterations better than a random (unbiased) coin flip.
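A practical consequence of Definition 1 is that $\delta$ can be estimated empirically, as was done for CUB-200 and AwA2 in Figure 1: run InfoMax and record, at every iteration where the stopping criterion has not fired, the gap between the selected test's conditional success probability and one-half; the largest recorded gap estimates the smallest $\delta$ for which the given $\mathcal{T}$ is $(\delta, \gamma)$-unpredictable. A minimal sketch in the notation of the previous snippet (illustrative, not the paper's code):

```python
def selected_test_gap(post, tests, remaining):
    """|P(T = 1 | A(t_{1:k})) - 1/2| for the test InfoMax would select,
    given the current posterior `post` restricted to the active set."""
    return min(abs(post[tests[j]].sum() - 0.5) for j in remaining)

# Tracking max(selected_test_gap) over all iterations and datapoints yields
# the empirical delta (e.g., 0.22 on CUB-200 and 0.17 on AwA2 in Figure 1).
```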
4 PERFORMANCE BOUNDS FOR ORACLE TESTS

In this section, we analyze the performance of InfoMax when all tests in $\mathcal{T}$ are functions of $Y$, hence the name oracle tests. Throughout this section, we will denote the outcome of test $T$ as $T(Y)$ to explicitly remind the reader that $T$ is a function of $Y$. Effectively, the sample space $\Omega$ (as defined in §3) can be taken to be $\mathcal{Y}$. Since the tests are not noisy, it is reasonable to expect that they collectively determine $Y$, that is, the value of $Y$ is uniquely determined if we observe $\{T(Y), \forall T \in \mathcal{T}\}$. As a result, we will drop $\gamma$ from the notation and only refer to $\mathcal{T}$ as being a $\delta$-unpredictable set of tests.

4.1 A NEW BOUND ON THE PERFORMANCE OF THE GREEDY INFORMATION MAXIMIZATION ALGORITHM

Relationship with entropy maximization. In the oracle case, where $Y$ determines the test outcomes (i.e., the outcome of any test is a function of $Y$, $t = T(Y), \forall T \in \mathcal{T}$), the InfoMax algorithm as described in equation (1) is equivalent to sequentially finding the test $T$ that achieves the maximum conditional entropy given the history. Equivalently,

$$T_1 = \arg\max_{T \in \mathcal{T}} H(T); \quad T_{k+1} = \arg\max_{T \in \mathcal{T}} H(T \mid A(t_{1:k})). \quad (3)$$

The equivalence of equation (3) and equation (1) can be seen by noticing that $H(T \mid Y, A(t_{1:k})) = 0$ when all tests are functions of $Y$ (Cover, 1999). Note that the active set in this case is now simply a subset of $\mathcal{Y}$, that is, $A(t_{1:k}) = \{y \in \mathcal{Y} : T_i(y) = t_i, \; i \in \{1, 2, \ldots, k\}\}$.

Motivation for assuming $\mathcal{T}$ is $\delta$-unpredictable. The motivation for assuming a given $\mathcal{T}$ is $\delta$-unpredictable is as follows. The entropy of a binary random variable is maximized when its success probability is $p = \frac{1}{2}$. Equation (3) can thus be reinterpreted as sequentially selecting tests from $\mathcal{T}$ that have success probability closest to $\frac{1}{2}$ given the history of test outcomes observed so far. Specifically,

$$T_1 = \arg\min_{T \in \mathcal{T}} \left|\mathbb{P}(T(Y) = 1) - \frac{1}{2}\right|; \quad T_{k+1} = \arg\min_{T \in \mathcal{T}} \left|\mathbb{P}(T(Y) = 1 \mid A(t_{1:k})) - \frac{1}{2}\right|. \quad (4)$$

While it will generally not be possible to find a perfectly bisecting test, it is reasonable to assume that there exists some $\delta$ such that, at any iteration, a test can be found in $\mathcal{T}$ whose success probability, conditioned on the history of test outcomes observed so far, is within $\frac{1}{2} \pm \delta$, as motivated in §1.

Bounding the performance of the greedy strategy. If $\mathcal{T}$ is $\delta$-unpredictable for very small $\delta$, we can intuitively expect the number of queries needed on average to identify $Y$ to be roughly of the order of $H(Y)$ (since we have almost bisecting tests). On the other hand, for large $\delta$ (close to $\frac{1}{2}$), any given set of tests would be $\delta$-unpredictable (according to Definition 1) and we would expect the number of queries needed on average to blow up. The following theorem captures this intuition and provides a bound on the expected number of tests needed by the greedy strategy as a function of both $\delta$ and the entropy of $Y$.

**Theorem 1.** Fix any $\delta \in [0, \frac{1}{2}]$. Given a $\delta$-unpredictable $\mathcal{T}$, the average number of tests needed by the information maximization algorithm to identify $Y$ is at most

$$B_{Ours} := \frac{H(Y)}{-\log_2(\frac{1}{2} + \delta)}. \quad (5)$$

**Proof.** (Sketch only; see Appendix §A.1.1 for a complete proof) Our result is based on the insight that if at any iteration $k$ the greedy strategy picks a test $T_k$ that satisfies equation (2), then at least $\frac{1}{2} - \delta$ of the probability mass of the active set $A(t_{1:k-1})$ is discarded, depending on the outcome $T_k(Y)$. Applying this argument recursively, the probability mass of the active set after $k$ iterations, $\mathbb{P}(A(t_{1:k}))$, is at most $(\frac{1}{2} + \delta)^k$. As a result, we can conclude that if $Y = y$ is still in the active set after iteration $k$, then it must be that $\mathbb{P}(Y = y) \leq (\frac{1}{2} + \delta)^k$. This gives a bound on the number of tests needed to identify state $y$, which is then used to bound the average number of tests. □

Figure 2: Comparing our bound $B_{\text{Ours}}$ with $B_{\text{Lov}}$ for different values of $|\mathcal{Y}|$ and $\delta$. When $|\mathcal{Y}|$ (the number of discrete values $Y$ can take) is small ($|\mathcal{Y}| = 4$ here), our bound is uniformly a tighter upper bound than the Loveland bound. As $|\mathcal{Y}|$ increases, $B_{\text{Lov}}$ gets tighter for larger $\delta$ values. Asymptotically ($|\mathcal{Y}| \approx 10^6$ here), we see that our bound is tighter for small values of $\delta < 0.2$.

To highlight the importance of this result, recall from coding theory that, given any set of tests, the optimal strategy cannot be better than $H(Y)$, which thus serves as a lower bound for the greedy strategy given any $\mathcal{T}$. To the best of our knowledge, our result is the first one to upper bound the performance of the greedy strategy by a multiplicative factor of the entropy of $Y$. This multiplicative factor, $\frac{1}{-\log_2(\frac{1}{2} + \delta)}$, grows as $\delta$ increases, but only gradually: even for the modest value of $\delta \approx 0.2$, which is far from a bisecting split, our result guarantees that the average number of tests under the greedy strategy is at most roughly twice the entropy of $Y$.
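To get a feel for the bound in equation (5), the following short computation (with an illustrative entropy value) shows how $B_{Ours}$ scales with $\delta$; at $\delta = 0$ it recovers the entropy lower bound, and at $\delta = 0.2$ it is roughly $1.94 \times H(Y)$:

```python
import numpy as np

def b_ours(H, delta):
    """Theorem 1 upper bound (equation 5) on the mean number of tests."""
    return H / -np.log2(0.5 + delta)

H = 10.0  # entropy of Y in bits (illustrative)
for delta in [0.0, 0.1, 0.2, 0.3]:
    print(f"delta={delta:.1f}  bound={b_ours(H, delta):.1f}")
# delta=0.0 -> 10.0 (the Shannon lower bound H(Y))
# delta=0.2 -> 19.4 (roughly twice the entropy)
```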
4.2 Comparison with previous bounds

Having described our bound, we compare it with bounds previously reported in the literature. The assumption of a $\delta$-unpredictable $\mathcal{T}$ was previously considered by Garey & Graham (1974) for the case where $Y$ is uniformly distributed, and subsequently by Loveland (1985) for any distribution on $Y$. Both papers obtain the same bound, and so we compare with the bound in Loveland (1985), which we refer to as $B_{\text{Lov}}$. Their analysis technique is significantly different from ours and, as a result, they obtain a very different upper bound on the average number of queries needed to identify $Y$:

$$B_{\text{Lov}} := \frac{\log_2 |\mathcal{Y}|}{-(\frac{1}{2} - \delta) \log_2 (\frac{1}{2} - \delta)} + \frac{1 + 2\delta}{1 - 2\delta}, \quad (6)$$

where $|\mathcal{Y}|$ is the number of discrete values $Y$ can take. Comparing the bound in equation (6) with our bound in equation (5), we make the following observations.

• When $Y$ is uniform, we can compare the two bounds more easily since $H(Y) = \log_2 |\mathcal{Y}|$. As illustrated in Figure 2, when $|\mathcal{Y}|$ is small our bound is uniformly tighter than $B_{\text{Lov}}$ for all values of $\delta \in [0, \frac{1}{2}]$. As $|\mathcal{Y}|$ increases, $B_{\text{Lov}}$ gets tighter for larger values of $\delta$. Asymptotically, as $|\mathcal{Y}| \to \infty$, our bound is tighter whenever $\delta \leq 0.1963 \approx 0.2$. This is a favorable result since we are most interested in the regime where $\delta$ is moderate (0.15-0.2), because otherwise the greedy strategy can degrade significantly compared to the optimal strategy. Indeed, note that both bounds diverge to $\infty$ for larger values of $\delta$.

• When $Y$ is uniform and $\delta \to 0$, that is, when we have access to exactly bisecting splits of the current active set, our bound recovers the entropy bound (Shannon, 1948) of $\log_2 |\mathcal{Y}|$, whereas $B_{\text{Lov}}$ converges to $2 \log_2 |\mathcal{Y}| + 1$, indicating a clear gap.

• When the distribution of $Y$ is not uniform, we expect our bound to be tighter since it depends on $H(Y)$ instead of $\log_2 |\mathcal{Y}|$. More specifically, for large $|\mathcal{Y}|$, the value of $\delta$ at which the two bounds are equal increases from $\approx 0.2$ (which was the point at which both bounds were equal in the uniform case; refer to Figure 2 for the case $|\mathcal{Y}| = 10^6$) as the level of non-uniformity of $Y$ increases. This is illustrated in Figure 3.

Next, we compare $B_{\text{Ours}}$ with the bounds derived by Dasgupta (2004) and Kosaraju et al. (1999), which make no assumption on $\mathcal{T}$. Since both papers have the same bound in big-$O$ notation, we compare with Dasgupta's bound because it specifies the constants explicitly. Let $\text{opt}(\mathcal{T}, Y)$ be the expected number of queries needed by the optimal strategy to identify $Y$ given a set of tests $\mathcal{T}$. Dasgupta's bound is given by:

$$B_{\text{Das}} := 4 \ln \left( \frac{1}{\min_y \mathbb{P}(Y = y)} \right) \times \text{opt}(\mathcal{T}, Y). \quad (7)$$

Figure 3: Comparing our bound $B_{\text{Ours}}$ with $B_{\text{Lov}}$ for different values of $H(Y)$ and $\delta$, with $|\mathcal{Y}| = 10^6$. If $Y$ were uniform, $H(Y)$ in this case would be $\log_2(10^6) = 19.93$ bits. We see that as the entropy decreases, the value of $\delta$ at which the two bounds are equal increases from $\approx 0.2$ for the uniform distribution (Figure 2, col 3) to about 0.4 for when the distribution over $Y$ has $H(Y) = 9.96$ bits.
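The asymptotic crossover quoted in the first observation can be reproduced numerically. The snippet below (a quick check, not from the paper) compares the coefficients of $\log_2 |\mathcal{Y}|$ in $B_{\text{Ours}}$ and $B_{\text{Lov}}$ for uniform $Y$ in the large-$|\mathcal{Y}|$ limit, where the additive term of equation (6) is negligible:

```python
import numpy as np

deltas = np.linspace(0.001, 0.45, 100000)
# Coefficients of log2|Y| in B_Ours and B_Lov for uniform Y; as |Y| grows,
# the additive term (1 + 2d)/(1 - 2d) in B_Lov becomes negligible.
coef_ours = 1.0 / -np.log2(0.5 + deltas)
coef_lov = 1.0 / (-(0.5 - deltas) * np.log2(0.5 - deltas))
crossing = deltas[np.argmax(coef_ours > coef_lov)]
print(round(float(crossing), 4))  # ~0.1963: B_Ours is tighter below this
```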
Notice that, unlike the previous bounds, $B_{\text{Das}}$ depends on $\text{opt}(\mathcal{T}, Y)$. This is because, in the absence of any assumption on $\mathcal{T}$ (like the $\delta$-unpredictability assumption we make), it only makes sense to analyze the performance of the greedy strategy relative to that of the optimal strategy. Otherwise, one can always choose some inefficient $\mathcal{T}$ to make the greedy strategy perform arbitrarily badly; for example, take $\mathcal{T}$ to contain only singleton tests of the form "Is $Y = y$?". Using the fact that $\text{opt}(\mathcal{T}, Y) \geq H(Y)$ (Shannon, 1948) and $\ln \left(\frac{1}{\min_y \mathbb{P}(Y = y)}\right) \geq \ln |\mathcal{Y}|$, we can show that for $\delta \leq 2^{-\frac{1}{4 \ln |\mathcal{Y}|}} - \frac{1}{2}$, our bound is guaranteed to be tighter than $B_{\text{Das}}$. For details see Appendix §A.4. This is expected since $B_{\text{Das}}$ makes no distributional assumptions about $\mathcal{T}$. We explicitly evaluate $2^{-\frac{1}{4 \ln |\mathcal{Y}|}} - \frac{1}{2}$ for values of $|\mathcal{Y}| \in [10, 100]$ and show in Figure 4 that our bound is tighter for all but extreme values of $\delta$, namely whenever $\delta \leq 0.43$ (recall $\delta \in [0, 0.5]$).

We demonstrate on two machine learning datasets (CUB-200 (Wah et al., 2011) and AwA2 (Xian et al., 2018)) that the given set of tests $\mathcal{T}$ is $\delta$-unpredictable for modest values of $\delta$ (0.22 and 0.17, respectively) and subsequently show that our bound is closer to the true mean number of tests the greedy strategy requires on these datasets to identify $Y$ than the other discussed bounds. These results can be found in Appendix §A.5.

5 PERFORMANCE BOUNDS FOR NOISY TESTS

Here, we analyze the performance of the greedy strategy when all tests in $\mathcal{T}$ are noisy, that is, $\forall T \in \mathcal{T}$, the conditional entropy $H(T \mid Y) > 0$. As discussed in §2, the performance of the greedy strategy under noise is poorly understood. Unlike prior work (Nowak, 2008), our analysis does not assume that tests can be repeated any number of times to average the noise out. This is because in many applications the same test cannot be repeated again or will give the same outcome (Chen et al., 2015). Instead, we consider an explicit noise model for the tests and analyze the performance of the greedy strategy for that model.³

³Note, while we do not assume the same test can be repeated, there can be multiple tests in $\mathcal{T}$ that are (conditionally) statistically identical. For example, in the famous 20Q game, let $y_1$ = "Queen Victoria" and $y_2$ = "Charles Darwin" be the only two states with non-trivial mass. Then both tests "Is $Y$ female?" and "Is $Y$ a queen?" have statistically identical outcomes but are different tests.

**Binary Symmetric Channel (BSC) Noise Model.** We first study the case where test outcomes are corrupted by a BSC, which is perhaps the most well-studied and simplest model for understanding the effects of noise in communication channels (Shannon, 1948). We make the following assumptions.

- For every $T \in \mathcal{T}$ there exist random variables $D_T(Y)$, which is a function of $Y$, and $N_T$ such that $T = D_T(Y) \oplus N_T$. The symbol $\oplus$ denotes the Exclusive OR (XOR) operation. $D_T(Y)$ can be understood as the true outcome of test $T$ if there were no noise. $N_T$ is the noise variable that corrupts the test outcome.
- For every $T \in \mathcal{T}$, we assume $N_T$ is independent of $Y$ with prior probability $P(N_T = 1) = \alpha$ for some $\alpha \in [0, \frac{1}{2}]$. Moreover, we assume all the noise variables, $\{N_T : T \in \mathcal{T}\}$, are independent, and hence the noise variables are i.i.d.

We now describe our analysis of how the greedy strategy performs under this noise model.

### 5.1 A bound on the performance of the greedy strategy for noisy tests

In general, when noise is present in the test outcomes, InfoMax (equation 1) is not equivalent to entropy maximization (equation 3). As a result, we cannot interpret the greedy strategy as selecting the test at each iteration whose success probability given the history of test outcomes observed so far is close to $\frac{1}{2}$. However, as we show in Lemma 2, under our noise model we can interpret the greedy strategy as choosing the test $\hat{T}$ in each iteration whose true outcome ($D_{\hat{T}}$) has success probability (given history) closest to one-half. We now state our lemma, which is inspired by Jedynak et al. (2012), where a similar result was derived for the case where $\mathcal{Y} = \mathbb{R}$ and the tests are unions of intervals along $\mathbb{R}$.

**Lemma 2.** Under the BSC noise model, at any iteration $k + 1$, the InfoMax algorithm will pick test

$$T_{k+1} = \arg\min_{T \in \mathcal{T}} \left|P(D_T = 1 \mid A(t_{1:k})) - \frac{1}{2}\right|,$$

where $A(t_{1:k})$ is the active set after $k$ iterations.

The lemma is proved using standard information-theoretic identities coupled with the properties of our noise model. Refer to Appendix §A.1.2 for a detailed proof. This result is in line with intuition, since the noisy component of every test ($N_T$) is independent of $Y$ and hence uninformative for prediction. Thus, the selection of the most informative next test is governed solely by how well its true outcome approximately bisects the current active set $A(t_{1:k})$. A natural question to ask next is: *if a given set of tests $\mathcal{T}$ is $(\delta, \gamma)$-unpredictable, then what can we conclude about the chosen test's $P(D_{T_{k+1}} = 1 \mid A(t_{1:k}))$?* The following lemma answers this.

**Lemma 3.** Under the BSC model with noise parameter $\alpha \in [0, \frac{1}{2}]$, if $\mathcal{T}$ is $(\delta, \gamma)$-unpredictable according to Definition 1, then in any iteration $k + 1$ the greedy strategy will either choose a test $T_{k+1} \in \mathcal{T}$ such that

$$\left|P(D_{T_{k+1}} = 1 \mid A(t_{1:k})) - \frac{1}{2}\right| \leq \frac{\delta}{1 - 2\alpha}, \quad (8)$$

or terminate according to the $\gamma$ stopping criterion. Moreover, given $\alpha$, it is not possible to have a $(\delta, \gamma)$-unpredictable $\mathcal{T}$ for $\delta > \frac{1}{2} - \alpha$.

Refer to Appendix §A.2 for a proof. The above result has two consequences.

1. It shows that for a fixed $\delta$, as the noise level $\alpha$ increases from 0 to $\frac{1}{2}$ (its maximum possible value), the ability of the true outcome $D_T = D_T(Y)$ of any given test $T \in \mathcal{T}$ to approximately bisect the current active set deteriorates by a factor of $\frac{1}{1 - 2\alpha}$ compared to the observed test outcome $T = t$. Based on this, one can conjecture that as the noise level increases, more and more tests would be needed to identify $Y$, because the ability of the true outcomes to approximately bisect the current active set degrades.
2. It shows that the maximum possible value of $\delta$ is bounded by the noise level $\alpha$. In particular, by inverting the result in equation (8) (see Appendix §A.2), we see that if $\left|P(D_{T_{k+1}} = 1 \mid A(t_{1:k})) - \frac{1}{2}\right| \leq \delta'$ for some constant $\delta' \in [0, \frac{1}{2}]$, then this implies $\delta = \delta'(1 - 2\alpha) \in [0, \frac{1}{2} - \alpha]$. Thus, unlike the oracle case, it is not possible to have a set of noisy tests which is $(\delta, \gamma)$-unpredictable for $\delta > \frac{1}{2} - \alpha$. In hindsight, this result makes sense since, according to our noise model, every test outcome is corrupted independently of all other tests and hence there will always be some uncertainty in a given test's outcome, regardless of how many tests have been carried out so far.

Having stated all the ingredients, we now present our main result for the greedy strategy under the BSC noise model.

**Theorem 4.** Fix noise level $\alpha \in [0, \frac{1}{2}]$ for the BSC model. Fix $\delta \in [0, \frac{1}{2} - \alpha]$. Given a $(\delta, \gamma)$-unpredictable $\mathcal{T}$, the average number of tests needed by the InfoMax algorithm to predict $Y$ with confidence at least $\gamma$ under the BSC model is at most

$$B_{Ours}^{\text{Noisy}} := \frac{H(Y) - |\log_2 \gamma| + \alpha |\mathcal{T}| \log_2 \frac{1-\alpha}{\alpha}}{\log_2 (1-\alpha) - \log_2 (\frac{1}{2} + \delta)} + 1. \quad (9)$$

A complete proof can be found in Appendix §A.3. The main idea behind the proof of this theorem is similar to the proof sketch for Theorem 1, but requires a more careful tracking of how much probability mass of the active set is discarded in each iteration. As expected, the number of tests needed increases as the desired accuracy level $\gamma$ is increased.⁴ Observe that in the absence of noise, that is, when $\alpha = 0$ and we set our desired accuracy to $\gamma = 1$, we recover our bound for the oracle case (Theorem 1). Our bound guarantees that as long as the noise level $\alpha$ is low relative to the entropy of $Y$, the performance of the greedy strategy is nearly optimal (that is, within a constant factor of $H(Y)$). To the best of our knowledge, this is the first such result for the InfoMax algorithm for noisy tests. Notice that this bound also depends on $|\mathcal{T}|$, which has a detrimental effect on the bound when the noise level $\alpha$ is high. This dependence on the size of the set of available tests is expected under the BSC model, since there will always exist some sample points along which $Y$ cannot be predicted with $\gamma$ level of confidence. As a result, for those sample points, the greedy strategy would end up exhausting all $|\mathcal{T}|$ tests. In the extreme case, when $\alpha = \frac{1}{2}$, none of the tests are informative about $Y$, and hence $Y$ can never be identified in fewer than $|\mathcal{T}|$ tests.

⁴When $\gamma = 1$, one can get rid of the $+1$ term in equation (9). See Appendix A.3.
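As a sanity check on equation (9), the bound can be evaluated directly; the helper below (illustrative, with explicit guards for the degenerate noiseless case covered by Theorem 1) shows, for example, that with $H(Y) = 10$ bits, $\delta = 0.2$, $\alpha = 0.05$, $\gamma = 0.95$ and $|\mathcal{T}| = 100$, roughly 72 tests suffice on average:

```python
import numpy as np

def b_noisy(H, delta, alpha, gamma, num_tests):
    """Theorem 4 upper bound (equation 9) on the mean number of tests."""
    assert 0 < alpha < 0.5, "alpha = 0 reduces to Theorem 1 (oracle case)"
    assert 0 <= delta <= 0.5 - alpha, "Lemma 3: delta cannot exceed 1/2 - alpha"
    num = H - abs(np.log2(gamma)) + alpha * num_tests * np.log2((1 - alpha) / alpha)
    den = np.log2(1 - alpha) - np.log2(0.5 + delta)
    return num / den + 1

print(b_noisy(H=10.0, delta=0.2, alpha=0.05, gamma=0.95, num_tests=100))
# ~71.7; as alpha -> 0 and gamma -> 1 the value approaches H / -log2(1/2 + delta)
```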
### 6 Conclusion & Limitations

We analyzed the Information Maximization (InfoMax) algorithm and derived new upper bounds on the average number of tests needed to predict $Y$. Our results are based on the observation that in most practical applications of InfoMax, one often has access to tests whose outcome partitions the current active set into two sets whose probability masses lie within $\frac{1}{2} \pm \delta$, for $0 < \delta \ll \frac{1}{2}$. Using this assumption, we obtained better bounds for the greedy strategy than previously established in the literature for the case of oracle tests, that is, when the tests are functions of $Y$. Subsequently, we extended our results to the case of noisy tests by assuming that the test outcomes are corrupted by a Binary Symmetric Channel, and obtained bounds on the performance of the InfoMax algorithm. We now describe a few limitations of this work. Our analysis assumes tests are $\delta$-unpredictable for modest values of $\delta$; however, a priori it is not known how to find a $\delta$ such that the given set of tests would be $\delta$-unpredictable. Moreover, the BSC noise model assumes i.i.d. noise; in practice, noise is often dependent on the value of $Y$, and test outcomes are often not independent of each other. We aim to address these limitations in future work by studying more complex noise models and designing testable conditions to verify whether a given $\mathcal{T}$ is $\delta$-unpredictable for a given value of $\delta$.

### References

Steve Branson, Grant Van Horn, Catherine Wah, Pietro Perona, and Serge Belongie. The ignorant led by the blind: A hybrid human–machine vision system for fine-grained categorization. *International Journal of Computer Vision*, 108:3–29, 2014.

Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone. *Classification and Regression Trees*. Wadsworth, 1984.

Aditya Chattopadhyay, Stewart Slocum, Benjamin D Haeffele, René Vidal, and Donald Geman. Interpretable by design: Learning predictors by composing interpretable queries. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.

Yuxin Chen, S Hamed Hassani, Amin Karbasi, and Andreas Krause. Sequential information maximization: When is greedy near-optimal? In *Conference on Learning Theory*, pp. 338–363. PMLR, 2015.

Thomas M Cover. *Elements of Information Theory*. John Wiley & Sons, 1999.

Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, and Jean-Philippe Vert. Noisy adaptive group testing using bayesian sequential experimental design. *arXiv preprint arXiv:2004.12508*, 2020.

Sanjoy Dasgupta. Analysis of a greedy active learning strategy. *Advances in Neural Information Processing Systems*, 17, 2004.

Ivan Flores and George Madpis. Average binary search length for dense ordered lists. *Communications of the ACM*, 14(9):602–603, 1971.

Adam Foster, Martin Jankowiak, Elias Bingham, Paul Horsfall, Yee Whye Teh, Thomas Rainforth, and Noah Goodman. Variational bayesian optimal experimental design. *Advances in Neural Information Processing Systems*, 32, 2019.

Michael R Garey and Ronald L. Graham. Performance bounds on the splitting algorithm for binary testing. *Acta Informatica*, 3(4):347–355, 1974.

Donald Geman and Bruno Jedynak. An active testing model for tracking roads in satellite images. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 18(1):1–14, 1996.

Donald Geman, Stuart Geman, Neil Hallonquist, and Laurent Younes. Visual turing test for computer vision systems. *Proceedings of the National Academy of Sciences*, 112(12):3618–3623, 2015.

Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-optimal bayesian active learning with noisy observations. *Advances in Neural Information Processing Systems*, 23, 2010.

Weijie He, Xiaohao Mao, Chao Ma, Yu Huang, José Miguel Hernández-Lobato, and Ting Chen.
Bsoda: a bipartite scalable framework for online disease diagnosis. In *Proceedings of the ACM Web Conference 2022*, pp. 2511–2521, 2022. Michael Horstein. Sequential transmission using noiseless feedback. *IEEE Transactions on Information Theory*, 9(3):136–143, 1963. Laurent Hyafil and Ronald L Rivest. Constructing optimal binary decision trees is np-complete. *Information processing letters*, 5(1):15–17, 1976. Ehsan Jahangiri, Erdem Yoruk, Rene Vidal, Laurent Younes, and Donald Geman. Information pursuit: A bayesian framework for sequential scene parsing. *arXiv preprint arXiv:1701.02343*, 2017. Bruno Jedynak, Peter I Frazier, and Raphael Sznitman. Twenty questions with noise: Bayes optimal policies for entropy loss. *Journal of Applied Probability*, 49(1):114–136, 2012. Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mußmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In *International Conference on Machine Learning*, pp. 5338–5348. PMLR, 2020. S Rao Kosaraju, Teresa M Przytycka, and Ryan Borgstrom. On an optimal split tree problem. In *Algorithms and Data Structures: 6th International Workshop, WADS’99 Vancouver, Canada, August 11–14, 1999 Proceedings*, pp. 157–168. Springer, 1999.
w8eCnnq57m
**Computational cost of optimization**: Unlike In-Context Learning, `LoraHub` does not need to process additional tokens, hence a reduced inference cost. However, it also adds the cost of optimizing the combination weights $w$ on the input few-shot samples, in particular when many upstream tasks are available. It would be interesting to discuss the trade-off between these two costs; e.g., if we have some few-shot samples but only want to solve the associated task once, it might be more practical to use in-context learning rather than the optimization pipeline of `LoraHub`?
LORAHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition Anonymous authors Paper under double-blind review

Abstract

Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions. Notably, the composition requires neither additional model parameters nor gradients. Empirical results on the Big-Bench Hard benchmark suggest that LoraHub, while not surpassing the performance of in-context learning, offers a notable performance-efficiency trade-off in few-shot scenarios by employing a significantly reduced number of tokens per example during inference. Notably, LoraHub establishes a better upper bound compared to in-context learning when paired with different demonstration examples, demonstrating its potential for future development. Our vision is to establish a platform for LoRA modules, empowering users to share their trained LoRA modules. This collaborative approach facilitates the seamless application of LoRA modules to novel tasks, contributing to an adaptive ecosystem.

1 Introduction

Recent progress in natural language processing (NLP) has been largely fueled by large language models (LLMs) such as OpenAI GPT (Brown et al., 2020), Flan-T5 (Chung et al., 2022), and LLaMA (Touvron et al., 2023). These models demonstrate top-tier performance across different NLP tasks. However, their enormous parameter size presents issues regarding computational efficiency and memory usage during fine-tuning. To mitigate these challenges, Low-Rank Adaptation (LoRA) (Hu et al., 2022) has emerged as a parameter-efficient fine-tuning technique (Lester et al., 2021; He et al., 2022; An et al., 2022). By reducing memory demands and computational costs, it speeds up LLM training. LoRA achieves this by freezing the base model parameters (that is, an LLM) and training a lightweight module, which regularly delivers high performance on target tasks.

While prior research has targeted the efficiency enhancement facilitated by LoRA, there is a dearth of investigation into the inherent modularity and composability of LoRA modules. Typically, previous methods train LoRA modules to specialize in individual tasks. Yet, the intrinsic modularity of LoRA modules presents an intriguing research question: Would it be possible to compose LoRA modules to generalize to novel tasks in an efficient manner? In this paper, we tap into the potential of LoRA modularity for broad task generalization, going beyond single-task training to meticulously compose LoRA modules for malleable performance on unknown tasks. Crucially, our method enables an automatic assembly of LoRA modules, eliminating dependency on manual design or human expertise. With just a handful of examples from new tasks (e.g., 5), our approach can autonomously compose compatible LoRA modules without human intervention. We do not make assumptions about which LoRA modules trained on particular tasks can be combined, allowing for flexibility in amalgamating any modules as long as they conform to the specification (e.g., using the same LLM).
As our approach leverages several available LoRA modules, we refer to it as LoraHub and denote our learning method as LoraHub learning. To validate the efficiency of our proposed methods, we test our approaches using the widely recognized BBH benchmark with Flan-T5 (Chung et al., 2022) serving as the base LLM. The results underline the effectiveness of the LoRA module composition for unfamiliar tasks through a few-shot LoraHub learning process. Notably, our methodology achieves an average performance that closely matches that of few-shot in-context learning, while demonstrating a superior upper bound, particularly when using different demonstration examples. Additionally, our method substantially reduces the inference cost compared to in-context learning, eliminating the requirement of examples as inputs for the LLM. With fewer tokens per example during inference, our method significantly reduces computational overhead and enables faster responses. It aligns with a broader research trend, where recent studies are actively exploring approaches to reduce the number of input tokens (Zhou et al., 2023; Ge et al., 2023; Chevalier et al., 2023; Jiang et al., 2023a; Li et al., 2023; Jiang et al., 2023b). Our learning procedure is also notable for its computational efficiency, using a gradient-free approach to obtain the coefficients of LoRA modules and requiring only a handful of inference steps for unseen tasks. For example, when applied to a new task in BBH, our methodology can deliver superior performance in less than a minute using a single A100 card. Importantly, LoraHub learning can feasibly be accomplished with a CPU-only machine, requiring only the ability to run LLM inference.

In our pursuit to democratize artificial intelligence, we are taking an important step forward by envisioning the establishment of the LoRA platform. The platform would serve as a marketplace where users can seamlessly share and access well-trained LoRA modules for diverse applications. LoRA providers have the flexibility to freely share or sell their modules on the platform without compromising data privacy. Users, equipped with CPU capability, can leverage trained LoRA modules contributed by others through automated distribution and composition algorithms. This platform not only cultivates a repository of reusable LoRA modules with a myriad of capabilities but also sets the stage for cooperative AI development. It empowers the community to collectively enrich the LLM's capabilities through dynamic LoRA composition.

2 Problem Statement

Large Language Models We assume that a large language model $M_\theta$ is based on the Transformer architecture (Vaswani et al., 2017) and has been pre-trained on a large-scale text corpus. The model architecture can be either encoder-decoder (Raffel et al., 2020) or decoder-only (Brown et al., 2020). $M_\theta$ could also have been fine-tuned with a large set of instruction-following datasets such as the Flan Collection (Longpre et al., 2023) and PromptSource (Bach et al., 2022).

Cross-Task Generalization In real-world situations, users often desire an LLM to perform novel tasks that it has not encountered before — an ability widely known as cross-task generalization.
Generally, cross-task generalization falls into two categories: zero-shot learning (Mishra et al., 2022; Sanh et al., 2022; Chung et al., 2022; OpenAI, 2022; Lin et al., 2022), which necessitates no labeled examples of the new task, and few-shot learning (Ye et al., 2021; Min et al., 2022), which demands a handful of labeled examples. Assume we have $N$ distinct upstream tasks that the LLM has been trained on, denoted as $\mathbb{T} = \{T_1, ..., T_N\}$. Our paper primarily focuses on the latter category, where for an unseen target task $T' \notin \mathbb{T}$, users can only provide a limited set of labeled examples, $Q$. Our aim is to modify the model $M_\theta$ to adapt it to task $T'$ using only $Q$. An intuitive method would be to fine-tune the weights of $M_\theta$ based on $Q$, yielding an updated model $M_\phi$ with enhanced performance on $T'$. However, this approach is inefficient, time-consuming, and unstable when $Q$ is small.

LoRA Tuning LoRA (Hu et al., 2022), a parameter-efficient fine-tuning method, facilitates the adaptation of LLMs using lightweight modules, eliminating the need for fine-tuning the entire weights. LoRA tuning involves keeping the original model weights frozen while introducing trainable low-rank decomposition matrices as adapter modules into each layer of the model. Compared to the base LLM, this module possesses significantly fewer trainable parameters, paving the way for rapid adaptation using minimal examples. As such, LoRA tuning presents a resource-efficient technique to quickly adapt LLMs for new tasks with restricted training data. However, traditional LoRA methods primarily concentrate on training and testing within the same tasks (Gema et al., 2023), rather than venturing into few-shot cross-task generalization.

3 METHODOLOGY

In this section, we provide an overview of our proposed method. We then explain the LoRA tuning procedure in detail. Finally, we introduce the procedure of our LoraHub learning, which consists of the COMPOSE stage and the ADAPT stage.

3.1 METHOD OVERVIEW

As depicted in Figure 2, we initially train LoRA modules on a variety of upstream tasks. Specifically, for $N$ distinct upstream tasks, we separately train $N$ LoRA modules, each represented as $m_i$ for task $T_i \in \mathbb{T}$. Subsequently, for a new task $T' \notin \mathbb{T}$, such as Boolean Expressions represented in Figure 2, its examples $Q$ are utilized to steer the LoraHub learning process. The LoraHub learning encapsulates two main phases: the COMPOSE phase and the ADAPT phase. In the COMPOSE phase, all available LoRA modules are combined into a single integrated module $\hat{m}$, using $\{w_1, w_2, \ldots, w_N\}$ as coefficients. Each $w_i$ is a scalar value that can take on positive or negative values, and the combination can be done in different ways. During the ADAPT phase, the combined LoRA module $\hat{m}$ is amalgamated with the LLM $M_\theta$, and its performance on few-shot examples from the new task $T'$ is assessed. A gradient-free algorithm is subsequently deployed to update $w$, improving $\hat{m}$'s performance (i.e., reducing its loss) on the few-shot examples $Q$. Finally, after iterating through $K$ steps, the best-performing combined LoRA module is applied to the LLM, $M_\phi = \text{LoRA}(M_\theta, \hat{m})$. This serves as an effectively adjusted model for the unseen task $T'$, which is then deployed and no longer updated.
3.2 LoRA TUNING ON UPSTREAM TASKS

LoRA effectively minimizes the number of trainable parameters through the process of decomposing the attention weight matrix update of the LLM, denoted as $W_0 \in \mathbb{R}^{d \times k}$, into low-rank matrices. In more specific terms, LoRA exhibits the updated weight matrix in the form $W_0 + \Delta W = W_0 + AB$, where $A \in \mathbb{R}^{d \times r}$ and $B \in \mathbb{R}^{r \times k}$ are trainable low-rank matrices with rank $r$, a dimension significantly smaller than those of $d$ and $k$. In this context, the product $AB$ defines the LoRA module $m$, as previously elaborated. By leveraging the low-rank decomposition, LoRA substantially reduces the number of trainable parameters needed to adapt the weights of LLMs during fine-tuning.

3.3 COMPOSE: ELEMENT-WISE COMPOSITION OF LoRA MODULES

Within the COMPOSE stage, we implement an element-wise method to combine LoRA modules. This process integrates the corresponding parameters of the LoRA modules, requiring the modules being combined to have the same rank $r$ to properly align the structures. Given that $m_i = A_i B_i$, the combined LoRA module $\hat{m}$ can be obtained by:

$$\hat{m} = (w_1 A_1 + w_2 A_2 + \cdots + w_N A_N)(w_1 B_1 + w_2 B_2 + \cdots + w_N B_N). \quad (1)$$

Notably, as we show in Sec. 5, combining too many LoRA modules at once can expand the search space exponentially, which may destabilize the LoraHub learning process and prevent optimal performance. To mitigate this, we employ random selection to prune the candidate space, and more advanced pre-filtering algorithms could be explored in the future.

3.4 ADAPT: WEIGHT OPTIMIZATION VIA GRADIENT-FREE METHODS

During the ADAPT stage, our goal is to modify the coefficients $w$ to boost the model's performance on the examples from an unseen task. One might think of using gradient descent to optimize $w$, following standard backpropagation methods. However, this approach demands constructing a hypernetwork for all LoRA modules, similar to differentiable architecture search methods (Zhang et al., 2019). Constructing these hypernetworks demands substantial GPU memory and time, posing a challenge. Given that $w$ consists of a relatively small number of parameters, we opted for gradient-free methods for optimization instead of gradient descent. Inspired by previous work (Sun et al., 2022), we utilize a black-box optimization technique to find the optimal $w$. The optimization process is steered by the cross-entropy loss, setting the goal to locate the best set $\{w_1, w_2, \ldots, w_N\}$ that reduces the loss $L$ on the few-shot examples $Q$. Furthermore, we incorporate L1 regularization to penalize the sum of the absolute values of $w$, helping to prevent obtaining extreme values. Consequently, the final objective of LoraHub is to minimize $L + \alpha \cdot \sum_{i=1}^{N} |w_i|$, where $\alpha$ serves as a hyperparameter.

In terms of the gradient-free method, we leverage Shiwa, a combinatorial optimization approach (Liu et al., 2020). Shiwa offers a variety of algorithms and chooses the most suitable optimization algorithm for different circumstances. In most of the forthcoming experimental setups, we primarily employ the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen & Ostermeier, 1996). CMA-ES, as a stochastic and population-based optimization algorithm, offers versatility in addressing a broad spectrum of optimization challenges. It dynamically adjusts a search distribution, which is defined by a covariance matrix. During each iteration, CMA-ES systematically updates both the mean and covariance of this distribution to optimize the target function. In our application, we employ this algorithm to mold the search space for $w$. Ultimately, we use it to identify the optimal $w$ by evaluating the candidates' performance on the few-shot examples from an unseen task.
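To make the two stages concrete, below is a minimal, dependency-light sketch of COMPOSE (equation 1) and ADAPT. The toy objective and the simple (1+λ) evolution strategy are stand-ins of our own, not the paper's implementation: the paper optimizes the few-shot cross-entropy (plus the L1 term) with CMA-ES selected by the Shiwa wizard (e.g., as provided by the nevergrad library). All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose(w, As, Bs):
    """COMPOSE (equation 1): element-wise combination of N rank-r modules,
    m_hat = (sum_i w_i A_i) @ (sum_i w_i B_i), a d x k update to the frozen W_0."""
    A_hat = sum(w_i * A_i for w_i, A_i in zip(w, As))
    B_hat = sum(w_i * B_i for w_i, B_i in zip(w, Bs))
    return A_hat @ B_hat

def objective(w, As, Bs, few_shot_loss, alpha=0.05):
    """ADAPT objective: loss on the few-shot examples Q plus L1 regularization."""
    return few_shot_loss(compose(w, As, Bs)) + alpha * np.abs(w).sum()

def adapt(As, Bs, few_shot_loss, steps=40, pop=16, sigma=0.2):
    """Gradient-free search over the coefficients w (a (1+lambda)-ES stand-in
    for CMA-ES; only forward evaluations of the model are needed)."""
    w_best = np.zeros(len(As))
    f_best = objective(w_best, As, Bs, few_shot_loss)
    for _ in range(steps):
        cands = w_best + sigma * rng.standard_normal((pop, len(As)))
        fs = [objective(c, As, Bs, few_shot_loss) for c in cands]
        j = int(np.argmin(fs))
        if fs[j] < f_best:
            w_best, f_best = cands[j], fs[j]
    return w_best

# Toy usage: three rank-4 modules for a 32 x 16 weight update, and a
# surrogate loss standing in for the few-shot cross-entropy on Q.
d, k, r, N = 32, 16, 4, 3
As = [rng.standard_normal((d, r)) for _ in range(N)]
Bs = [rng.standard_normal((r, k)) for _ in range(N)]
target = As[0] @ Bs[0]            # pretend module 0 matches the unseen task
loss = lambda m_hat: float(np.mean((m_hat - target) ** 2))
w = adapt(As, Bs, loss)           # weights should concentrate on module 0
```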
4 EXPERIMENTAL RESULTS

In this section, we provide details on our main experiments. First, we give an overview of the experimental setup and implementation details. Next, we present our findings along with the results.

4.1 EXPERIMENTAL SETUP

Large Language Model In our main experiments, we employ FLAN-T5 (Chung et al., 2022), particularly FLAN-T5-large, as the base LLM. The model has shown impressive abilities to perform zero-shot and few-shot learning.

Candidate LoRA Modules Our methodology requires a compendium of LoRA modules trained on preceding tasks. For parity with FLAN, we adopt the tasks utilized to instruct FLAN-T5, thereby incorporating nearly 200 distinct tasks and their corresponding instructions.¹ Following this, we trained several LoRA modules as potential candidates. During each experimental sequence, we randomly select 20 LoRA modules from them as the candidate for our LoraHub learning.

¹We accessed these publicly available tasks via huggingface.co/datasets/conceptofmind/FLAN_2022

Table 1: Experimental results of zero-shot learning (Zero), few-shot in-context learning (ICL), IA3 fine-tuning (IA3), LoRA tuning (LoRA), full fine-tuning (FFT) and our proposed few-shot LoraHub learning (LoraHub) on the BBH benchmark with FLAN-T5-large as the base LLM. We denote algorithmic tasks with the superscript § following previous work (Wu et al., 2023). Note that we employ three runs, each leveraging different 5-shot examples per task, as demonstrations for all few-shot methods. The average performance of all methods is reported below, and the best performance of each few-shot method can be found in Appendix A.

| Task | Zero | ICL_avg | IA3_avg | LoRA_avg | FFT_avg | LoraHub_avg |
|---|---|---|---|---|---|---|
| Boolean Expressions | 54.0 | 59.6 | 56.2 | 56.0 | 62.2 | 55.5 |
| Causal Judgement | 57.5 | 59.4 | 60.2 | 55.6 | 57.5 | 54.3 |
| Date Understanding | 15.3 | 20.4 | 20.0 | 35.8 | 59.3 | 32.9 |
| Disambiguation | 0.0 | 69.1 | 0.0 | 68.0 | 68.2 | 45.2 |
| Dyck Languages | 1.3 | 0.9 | 4.2 | 22.2 | 19.5 | 1.0 |
| Formal Fallacies | 51.3 | 55.3 | 51.5 | 53.6 | 54.0 | 52.8 |
| Geometric Shapes | 6.7 | 19.6 | 14.7 | 24.0 | 31.1 | 7.4 |
| Hyperbaton | 6.7 | 71.8 | 49.3 | 55.3 | 77.3 | 62.8 |
| Logical Deduction§ (five objects) | 21.3 | 39.1 | 32.7 | 40.0 | 42.2 | 36.1 |
| Logical Deduction§ (seven objects) | 12.7 | 40.7 | 33.8 | 37.3 | 44.9 | 36.8 |
| Logical Deduction§ (three objects) | 0.0 | 51.6 | 8.5 | 53.6 | 52.9 | 45.7 |
| Movie Recommendation | 62.7 | 55.8 | 61.8 | 51.5 | 66.0 | 55.3 |
| Multistep Arithmetic | 0.7 | 0.7 | 0.7 | 0.2 | 0.0 | 0.4 |
| Navigate | 47.3 | 45.3 | 46.2 | 48.0 | 48.0 | 47.1 |
| Object Counting | 34.7 | 32.4 | 35.1 | 38.7 | 35.6 | 33.7 |
| Penguins in a Table | 43.5 | 41.3 | 45.0 | 36.2 | 31.9 | 35.9 |
| Reasoning about Colored Objects | 32.0 | 40.2 | 40.7 | 39.6 | 37.6 | 40.0 |
| Ruin Names | 23.3 | 19.3 | 24.4 | 37.8 | 61.3 | 24.4 |
| Salient Translation Error Detection | 37.3 | 47.3 | 37.1 | 16.0 | 16.2 | 36.0 |
| Snarks | 50.0 | 54.2 | 53.9 | 55.6 | 66.7 | 56.9 |
| Sports Understanding | 56.0 | 54.7 | 55.1 | 56.5 | 54.0 | 56.7 |
| Temporal Sequences | 16.7 | 25.1 | 18.2 | 25.1 | 37.8 | 18.2 |
| Tracking Shuffled Objects§ (five objects) | 12.0 | 12.0 | 12.0 | 13.8 | 16.9 | 12.3 |
| Tracking Shuffled Objects§ (seven objects) | 6.7 | 6.7 | 6.7 | 10.0 | 9.8 | 7.7 |
| Tracking Shuffled Objects§ (three objects) | 24.7 | 31.1 | 30.7 | 30.9 | 32.0 | 29.2 |
| Web of Lies | 54.0 | 53.8 | 54.2 | 52.7 | 48.2 | 50.1 |
| Word Sorting | 1.3 | 0.5 | 1.3 | 4.9 | 4.9 | 1.1 |
| Avg Performance Per Task | 27.0 | 37.3 | 31.6 | 37.7 | 42.1 | 34.7 |
| Avg Tokens Per Example | 111.6 | 597.8 | 111.6 | 111.6 | 111.6 | 111.6 |
| Gradient-based Training | No | No | Yes | Yes | Yes | No |
Dataset and evaluation Our method is evaluated using the Big-Bench Hard (BBH) benchmark, a well-established standard that consists of multiple-choice questions from a variety of domains. The benchmark consists of 27 different tasks, which are regarded as challenging for language models. For all tasks, we employ exact match (EM) as our evaluation metric.

Baseline Setup To enhance the demonstration of our method's performance, we expanded our comparisons beyond the zero-shot and in-context learning settings. We specifically chose three representative gradient-based methods for comparison: full fine-tuning (FFT), LoRA tuning (LoRA), and IA3 fine-tuning (IA3) (Liu et al., 2022). For all gradient-based methods, for a fair comparison, we train for 40 epochs on the same three runs of 5 examples employed in our methods. In the case of FFT, a learning rate of 3e-5 is employed, whereas for IA3 and LoRA we adopt a learning rate of 2e-4. We report the performance of each method on the test set at the end of training (averaged over three runs) without any model selection to avoid potential selection bias.

### 4.2 Main results

As shown in Table 1, our experimental results demonstrate the superior efficacy of our method in comparison to zero-shot learning, while closely resembling the performance of in-context learning (ICL) in few-shot scenarios. This observation is derived from an average performance of three runs, each leveraging different few-shot examples. Importantly, our model utilizes an equivalent number of tokens as the zero-shot method, notably fewer than the count used by ICL. Despite occasional performance fluctuations, our method consistently outperforms zero-shot learning in most tasks. In the era of LLMs, the input length is directly proportional to the inference cost, and thus LoraHub's ability to economize on input tokens while approaching the peak performance grows increasingly significant. Moreover, as shown in Appendix Table 8, the upper-bound performance of our method across these runs can surpass ICL on 18 tasks, demonstrating its potential for future development.

Even when compared to certain gradient-based optimization methods, our approach consistently demonstrates competitive performance. For example, as depicted in Table 1, our method exhibits a notable improvement of 3.1% on average in contrast to the promising IA3 method. Nevertheless, we acknowledge that our approach still falls behind LoRA tuning and full fine-tuning, especially in tasks that exhibit significant deviation from the upstream tasks.
Taking Dyck Languages as an example, both LoraHub and ICL achieve an average performance of only about 1.0% on this task, while the LoRA and FFT methods showcase impressive results with only 5 examples.

### 4.3 Discussion

LoraHub addresses the challenge of reducing inference costs by eliminating the need for processing additional tokens, resulting in a noticeable reduction in overall inference expenses. However, it introduces an inherent cost during the ADAPT stage, necessitating extra inference steps, such as the 40 steps employed in our experiments. This introduces a trade-off between choosing the ICL approach and LoraHub, with the decision typically hinging on the nature of the situation. For one-time ad-hoc tasks, the ICL approach should be more pragmatic due to LoraHub's additional inference-step costs. In such scenarios, where immediate, single-use solutions are preferred, the simplicity and efficiency of ICL might outweigh the benefits of the potential savings offered by LoraHub. Conversely, for recurring or similar tasks, LoraHub emerges as a compelling option. Despite the added inference-step cost, LoraHub's ability to efficiently handle repetitive tasks, often occurring thousands of times, while concurrently reducing overall expenses, positions it as a viable option in such situations. In summary, our intention is not to replace ICL, but to present LoraHub as a complementary strategy with performance-efficiency trade-offs. Thus, we encourage a careful consideration of specific use cases and requirements when choosing between ICL and LoraHub, recognizing that the optimal solution may vary based on the nature and frequency of the tasks at hand.
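A back-of-the-envelope calculation, using the average token counts from Table 1 and treating inference cost as proportional to the input tokens processed (ignoring output tokens and per-module loading overhead), makes this trade-off concrete: ICL pays roughly 486 extra input tokens on every query, whereas LoraHub pays a one-time ADAPT cost, after which queries run at the zero-shot token count. The step and example counts below are from our setup; the per-step cost model is an assumption.

```python
# Illustrative break-even estimate between ICL and LoraHub (assumptions:
# each ADAPT step evaluates all 5 few-shot examples once, and cost is
# proportional to input tokens; token numbers are the averages in Table 1).
icl_tokens, zs_tokens = 597.8, 111.6         # avg input tokens per example
extra_per_query = icl_tokens - zs_tokens     # ICL overhead on every query
one_time_adapt = 40 * 5 * zs_tokens          # 40 steps x 5 examples
print(one_time_adapt / extra_per_query)      # ~46 queries to break even
```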
However, it is worth noting that none of the modules delivers a consistent improvement across all BBH tasks: their average performance over all BBH tasks shows no significant improvement compared to the original FLAN-T5-large, except for the Rank 2 module. These results underscore the advantage of composing diverse modules in LoraHub.

**How effective is the gradient-free optimization method?** To assess whether our gradient-free optimization method correctly identifies the most suitable LoRA module for a given downstream task, we carried out an empirical study on the WikiTableQuestions (Pasupat & Liang, 2015) (WTQ) dataset. We deliberately added a LoRA module trained specifically on WTQ to our pool of candidate modules, which originally stemmed from tasks exclusive to the Flan Collection. We then designated WTQ as the target downstream task and computed the composition weights following the LoraHub learning procedure (a minimal sketch of this weighted composition is shown below). As a result, the WTQ-specific LoRA module received the highest weight, showing that the algorithm recognizes it as the most relevant. Moreover, the composed LoRA module performed marginally better than the WTQ LoRA module alone. This supports the claim that the gradient-free optimization method can proficiently select the optimal upstream LoRA module for an unseen task.

**Can LoraHub work well on non-instruction-tuned models?** In previous investigations, we primarily focused on models with zero-shot capabilities obtained through instruction tuning. For models like T5 without zero-shot abilities, where downstream training has a larger effect on parameters, it was unclear whether LoraHub could still be effective. Our experiments show that although these models perform worse than FLAN-T5, LoraHub learning still enables them to generalize to unseen tasks. See Appendix B for details.

**Will the rank of LoRA modules impact the performance of LoraHub learning?** The rank plays a crucial role in the LoRA framework, directly determining the number of trainable parameters used during LoRA tuning. This prompts an intriguing question: does the choice of rank influence the outcomes of LoraHub learning? Our analysis indicates that for FLAN-T5 the choice of rank has minimal impact, whereas for T5 it still exerts some influence. Empirically, compared to rank values of 4 or 64, a rank of 16 consistently demonstrates superior performance across different runs, both in terms of average and optimal values. Additional results are available in Appendix B.

**Do more LoRA modules lead to better results?** In our main experiments, we randomly selected 20 LoRA modules for LoraHub learning. We therefore conducted experiments to investigate the effect of using different numbers of LoRA modules. The results demonstrate that as the number of LoRA modules increases, the variance in performance grows; however, the maximum achievable performance also improves. More analysis of the variance and detailed results can be found in Appendix G.
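To make the composition procedure referenced above concrete, here is a minimal, illustrative sketch of LoraHub-style weighted composition with a gradient-free weight search. It is not the authors' implementation: the toy `few_shot_loss` objective, the plain random-search loop standing in for the genetic algorithm, and all shapes and names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 20 candidate LoRA modules, each a low-rank update
# Delta_W_k = B_k @ A_k of a d x d weight matrix.
d, r, K = 16, 4, 20
lora_deltas = [rng.normal(size=(d, r)) @ rng.normal(size=(r, d)) * 0.1
               for _ in range(K)]
X, y = rng.normal(size=(5, d)), rng.normal(size=5)  # 5 few-shot examples

def compose(deltas, w):
    """Element-wise weighted sum of the per-module LoRA updates."""
    return sum(w_k * delta for w_k, delta in zip(w, deltas))

def few_shot_loss(delta, X, y):
    """Toy objective: error of a linear probe using the merged update.
    In LoraHub this would be the LLM's loss on the few-shot examples."""
    return float(np.mean((X @ delta[:, 0] - y) ** 2))

# Gradient-free search over the K composition weights (plain random search
# here; the paper uses a genetic algorithm for the same role).
best_w, best_loss = None, float("inf")
for _ in range(40):  # cf. the 40 adaptation steps mentioned in Section 4.3
    w = rng.uniform(-1.0, 1.0, size=K)  # search range chosen arbitrarily for this toy
    loss = few_shot_loss(compose(lora_deltas, w), X, y)
    if loss < best_loss:
        best_w, best_loss = w, loss

merged = compose(lora_deltas, best_w)  # single merged module used at inference
print(round(best_loss, 4), merged.shape)
```

The essential property mirrored here is that only the $K$ scalar weights are optimized while the LoRA modules themselves stay frozen, which is what keeps the adaptation cheap and gradient-free.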
**Does composing LoRA modules extend beyond a single module's benefits?** We acknowledge the investigation of cross-task performance in prior work (Jiang et al., 2023), which delved into the capabilities of LoRA and proposed a method centered on LoRA module retrieval. To ensure a fair comparison, we designed a LoRA retrieval mechanism based on the loss over few-shot examples: we ranked all candidate LoRA modules according to this loss and evaluated the best candidate on the test set of the unseen task. As depicted in Table 3, the performance of LoRA retrieval is notably impressive, positioning it as a strong baseline; compared to LoraHub, however, its performance remains less favorable.

Table 3: The average performance of various methods across all tasks in the BBH benchmark.

| Method | Performance |
|-----------------|-------------|
| LoRA Retrieval | 31.7 |
| LoraHub_avg | 34.7 |
| LoraHub_best | 41.2 |

### 6 RELATED WORK

**Model merging** Our method draws substantially on the concept of LoRA module composition and thus aligns with the significant thread of research on model merging. This research can be broadly categorized by the ultimate objective of the merging. The first category focuses on merging entire models, with the goal of combining individually trained models to approximate the performance benefits of model ensembling or multi-task learning. Prior works such as Matena & Raffel (2021) and Jin et al. (2023) operated under the assumption of shared model architectures: Matena & Raffel (2021) amalgamates models by approximating Gaussian posteriors obtained from Fisher information, while Jin et al. (2023) merges models guided by weights that minimize prediction differences. Another line merges models with different architectures; for instance, Ainsworth et al. (2023) aligns the weights of different models prior to merging, and, following this objective, Stoica et al. (2023) merges models trained on different tasks by identifying common features, without requiring additional training. Unlike these works, our work focuses on merging models to enable cross-task generalization.

The second category aligns most closely with our research, stemming from a shared motivation of module composition. Various scholars have advanced this line of research: Kingetsu et al. (2021) decomposes and recomposes modules on the basis of their functionality; Ilharco et al. (2022) proposes modulating model behavior using task vectors; Wang et al. (2022) and Lv et al. (2023) amalgamate parameter-efficient modules weighted according to task similarity; Zhang et al. (2023) crafts modules via specific arithmetic operations; Sun et al. (2023) improves few-shot performance on unseen tasks through multi-task pre-training of prompts; Chronopoulou et al. (2023) averages adapter weights intended for transfer; Ponti et al. (2023) jointly learns adapters and a routing function that allocates skills to each task; and Muqeth et al. (2023) concentrates on amalgamating experts in mixture-of-experts models. However, these methods generally necessitate multi-task training or human priors on module selection for the downstream task. In contrast, our method imposes no special training requirements and simply employs vanilla LoRA tuning, and module selection for downstream tasks is entirely data-driven, without human prior knowledge. This design makes it easy to add new LoRA modules for reuse, allowing our method to flexibly scale up the number of LoRA module candidates in the future.
**Mixture of experts** The Mixture of Experts (MoE) is an ensemble method, often visualized as a collection of sub-modules, or "experts", each specializing in processing a different type of input data. Each expert is controlled by a gating network that is activated based on the nature of the input: for every token in the input sequence, the network identifies and engages the most suitable experts. As a result, performance is superior to relying on a single, generic model for all types of input. This technique has proven instrumental in numerous domains, such as natural language processing and computer vision (Jacobs et al., 1991; Shazeer et al., 2017; Du et al., 2022; Zhang et al., 2022; Crumb, 2023). Our methodology resembles MoE in that upstream-trained LoRA modules can be aligned with MoE's expert design. A noteworthy difference is that our approach requires no specialized manipulation of the LoRA modules during training and allows dynamic assembly of LoRA modules at any scale, each pre-tuned on a different task; in contrast, MoE mandates a predetermined number of experts during both training and testing. Recent studies on the interrelation between MoE and instruction tuning have demonstrated that applying both simultaneously enhances the effectiveness of each (Shen et al., 2023).

**Cross-task generalization** Recent advancements like CrossFit (Ye et al., 2021), ExT5 (Aribandi et al., 2022), FLAN (Wei et al., 2022), T0 (Sanh et al., 2022), InstructGPT (Ouyang et al., 2022), and ReCross (Lin et al., 2022) strive to foster generalization of massively multi-task models across different tasks, much in line with the objectives of our research. Among this cohort, the connections of CrossFit and ReCross to LoraHub are particularly noteworthy. The CrossFit framework (Ye et al., 2021) requires a small number of labeled examples of the target task for few-shot fine-tuning, but its use of task names as hard prefixes in templates poses challenges to generalization. ReCross, on the other hand, avoids the need for labels in the few-shot examples used for retrieval, but it requires a fine-tuning process on the retrieved data, which appears time-consuming compared to LoraHub. Through a few labeled examples and a gradient-free optimization process, LoraHub iteratively updates the weights used to compose the LoRA modules, yielding a method that is more efficient and cost-effective than previous work and offering a more practical solution to the optimization process.

### 7 LIMITATIONS & FUTURE WORK

**Pre-filtering of LoRA module candidates** While our method succeeds in identifying and weighting relevant knowledge from seen tasks to enhance unseen-task performance, relying entirely on the optimizer to perform this search can increase computational demands and yield potentially unstable results. Incorporating a pre-filtering step that selects only pertinent LoRA modules could speed up and stabilize performance; identifying an effective selection strategy warrants further study.

**Method applicability to decoder-only models** All experiments in this study were executed with an encoder-decoder architecture.
We plan to extend this method to decoder-only models such as GPT (Brown et al., 2020) to determine its applicability in such contexts.

**Exploring superior optimization methods** The use of a genetic algorithm for optimization in this study raises the question of whether other gradient-free optimization approaches could perform better with limited examples. Although the current method shows adequate performance, there is still room for improvement.

### 8 CONCLUSION

In this work, we have introduced LoraHub, a strategic framework for composing LoRA modules trained on diverse tasks in order to achieve adaptable performance on new tasks. Our approach enables the fluid combination of multiple LoRA modules using just a few examples from a novel task, without requiring additional model parameters or human expertise. The empirical results on the BBH benchmark demonstrate that LoraHub can effectively match the performance of in-context learning in few-shot scenarios while removing the need for in-context examples during inference. Overall, our work shows the promise of strategic LoRA composability for rapidly adapting LLMs to diverse tasks. By fostering the reuse and combination of LoRA modules, we can work toward more general and adaptable LLMs while minimizing training costs.

REPRODUCIBILITY STATEMENT

The authors have made great efforts to ensure the reproducibility of the empirical results reported in this paper. First, the experimental settings, evaluation metrics, and datasets are described in detail in Section 4.1. Second, the source code implementing the proposed method and experiments will be made publicly available upon acceptance of the paper. Third, the pre-trained LoRA modules from this work, along with their configuration files and weights, will be shared; these allow reproduction without retraining the LoRA modules, enabling quick testing and verification.

REFERENCES

Samuel Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. In The Eleventh International Conference on Learning Representations, 2023.

Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, and Jian-Guang Lou. Input-tuning: Adapting unfamiliar inputs to frozen pretrained models. ArXiv preprint, 2022.

Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Prakash Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. ExT5: Towards extreme multi-task scaling for transfer learning. In Proc. of ICLR, 2022.

Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-Jian Jiang, and Alexander Rush. PromptSource: An integrated development environment and repository for natural language prompts. In Proc. of ACL, 2022.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. Adapting language models to compress contexts. CoRR, abs/2305.14788, 2023. doi: 10.48550/ARXIV.2305.14788. URL https://doi.org/10.48550/arXiv.2305.14788.

Alexandra Chronopoulou, Matthew Peters, Alexander Fraser, and Jesse Dodge. AdapterSoup: Weight averaging to improve generalization of pretrained language models. In Findings of the Association for Computational Linguistics: EACL 2023, 2023.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Wei Yu, Vincent Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. ArXiv preprint, 2022.

crumb. Llama-2, mixture of LoRA. https://crumbly.medium.com/llama-2-molora-f5f909434711, 2023.
jU3zRzUBiD
Previous work, such as CryptoNAS and Sphynx, has already explored this concept by maintaining a constant ReLU count per layer, which in turn increases the FLOPs count (however, these studies did not provide detailed FLOPs count information)
COMPENSATING FOR NONLINEAR REDUCTION WITH LINEAR COMPUTATIONS IN PRIVATE INFERENCE

Anonymous authors
Paper under double-blind review

ABSTRACT

Increasingly serious data privacy concerns and strict regulations have recently posed significant challenges to machine learning, a field that hinges on high-performance processing of massive amounts of user data. Consequently, privacy-preserving machine learning (PPML) has emerged to securely execute machine learning tasks without violating privacy. Unfortunately, the computational cost of securely executing nonlinear computations in PPML models remains significant, calling for new neural architecture designs with fewer nonlinear operations. We propose Seesaw, a novel neural architecture search method tailored for PPML. Seesaw exploits a previously unexplored opportunity to leverage more linear computations and nonlinear result reuse, in order to compensate for the accuracy loss due to nonlinear reduction. It also incorporates specifically designed pruning and search strategies to efficiently handle the much larger design space including both nonlinear and linear operators. Compared to the previous state-of-the-art PPML for image classification on ImageNet, Seesaw achieves $1.68\times$ lower latency at 71% iso-accuracy, or 4.59% higher accuracy at an iso-latency of 1000K ReLU operations.

1 INTRODUCTION

Machine learning (ML) has become an indispensable and ubiquitous technology in contemporary data-driven applications, with deep neural networks achieving remarkable success in complex tasks such as image/video classification and natural language processing LeCun et al. (2015). The effectiveness of ML hinges on massive training data and extensive computational resources to efficiently process large network models. Consequently, ML tasks are increasingly outsourced to remote servers and deployed on cloud computing systems Aliyun; Amazon Web Services; Azure; Baidu; Google Cloud; OpenAI (2023). While cloud-based ML services bring a new revolution, this deployment model has also raised serious concerns regarding the privacy of user data, such as health/medical records, financial status, and location information, which must now be sent to public cloud platforms and thus suffer from leakage risks.

In response to the growing privacy concerns associated with ML applications, privacy-preserving machine learning (PPML) solutions have been proposed to securely store and process users' sensitive data without compromising confidentiality and integrity. State-of-the-art PPML frameworks heavily use cryptographic primitives, including homomorphic encryption and multi-party computation, to achieve provable security Badawi et al. (2018); Brutzkus et al. (2019); Chandran et al. (2022); Dowlin et al. (2016); Juvekar et al. (2018); Liu et al. (2017); Mishra et al. (2020); Ng & Chow (2021); Riazi et al. (2018); Zhang et al. (2023). However, despite extensive algorithm and system optimizations, their computational cost is still several orders of magnitude higher than that of the original plaintext models, resulting in unacceptably long execution latency that restricts their practical usage in time-sensitive scenarios like online inference. The high processing overheads are primarily associated with nonlinear operators (e.g., activation functions such as Sigmoid and ReLU), which involve complex secure multi-party computation protocols Yao (1986) with heavy cryptographic computations (e.g., AES encryption) and frequent communication between the user and the cloud.
Great efforts have thus far been made to alleviate the nonlinear computational cost in PPML, such as developing more efficient protocols for nonlinear operators Ghodsi et al. (2021); Lou et al. (2021); Mishra et al. (2020), or reducing the number of such operations through pruning and neural architecture search (NAS) Cho et al. (2022a;b); Ghodsi et al. (2020); Huang et al. (2022); Jha et al. (2021); Kundu et al. (2023a;b). Nevertheless, almost all prior techniques simply start with an existing network architecture and focus only on reducing the number of nonlinear operators while struggling to minimize the corresponding accuracy impact. This approach inevitably causes increasing accuracy degradation as more nonlinear computations are removed, suffering from the fundamental tradeoff between model accuracy and execution latency.

**Our contributions.** In this work, we aim to break this tradeoff by exploiting opportunities to use additional computations and data orchestration to compensate for the accuracy loss due to nonlinear reduction. Specifically, we propose two approaches: (1) adding more linear operations to the model to recover its decreased representation capacity; and (2) reusing the results of the remaining nonlinear operators as much as possible by introducing residual shortcut links into the model topology. Although adding such linear and aggregation computations would increase the execution latency in the insecure case, the overheads in the PPML scenario are negligible compared to the dominant nonlinear cost, presenting a unique opportunity.

We thus design Seesaw, a one-shot NAS method that leverages the above compensation ideas to automatically search for optimized neural network architectures for PPML. Besides the existing problem of determining how to selectively enable nonlinear operations under a given nonlinear budget, Seesaw needs to address several new challenges. First, it must decide the amounts of extra linear computation and data reuse to add, in order to balance sufficient representation capacity against overfitting; we propose novel pruning and NAS techniques to solve this issue. Second, it needs an efficient search and training strategy, because the overall design space is significantly enlarged by the additional computations; we present a novel search strategy with a modified loss function.

When evaluated on the CIFAR100 and ImageNet datasets under a wide range of nonlinear budgets, Seesaw pushes forward the Pareto frontier between model accuracy and execution latency. Compared to the previous state-of-the-art Kundu et al. (2023a), Seesaw achieves a $1.68\times$ latency reduction at iso-accuracy, or 4.59% higher accuracy at iso-latency.

2 BACKGROUND

Privacy-preserving machine learning (PPML) aims to address the challenge of processing private user data on proprietary ML models without revealing any sensitive information to malicious participants during the computation. We focus on PPML inference. More specifically, privacy is protected if (1) the user learns no knowledge of the ML model except for the inference result on her own input data; and (2) the model owner gains no information about the user data. Currently, there are mainly two approaches to realizing PPML. Hardware-based trusted execution environments (TEEs) can protect sensitive data Hunt et al. (2018); Hynes et al. (2018); Kim et al. (2020); Kunkel et al. (2019); Li et al.
(2021); Tramer & Boneh (2019), but TEEs are vulnerable to side channels, weakening their security Chen et al. (2019); Wang et al. (2018). Cryptography-based PPML protects data privacy using modern cryptographic primitives Damgård et al. (2012); Gentry (2009); Yao (1986), which offer theoretically provable, strong security guarantees. Our work optimizes the execution latency of crypto-based PPML solutions while minimizing the accuracy impact.

### 2.1 CRYPTOGRAPHIC PRIMITIVES AND PPML PROTOCOL

Existing PPML algorithms use various cryptographic primitives to best match the different computation patterns in ML applications. Fully Homomorphic Encryption (FHE) Gentry (2009) allows applying arbitrary functions composed of additions and multiplications to encrypted data (e.g., user data or model weights). FHE is useful in PPML because linear operators (matrix multiplications, convolutions, etc.) account for the majority of computations in modern ML models. Previously, CryptoNets Dowlin et al. (2016), HCNN Badawi et al. (2018), TAPAS Sanyal et al. (2018), LoLa Brutzkus et al. (2019), and Faster CryptoNets Chou et al. (2018) explored the application of FHE in PPML. Unfortunately, the computational complexity of FHE is quite high and can result in several orders of magnitude slowdown compared to insecure computing.

Another way to support linear computations is Secret Sharing (SS) Damgård et al. (2012). PPML typically assumes two parties, the user and the model owner. SS transforms the data of each party into randomly split shares. Each share is held by one party, and each party sees only its own share, never the full value, ensuring data privacy. Addition of two shared values, as well as multiplication between a shared value and a plaintext number, can be done locally with only simple operations. Therefore, the linear operators that involve the encrypted user data and the plaintext weights can be computed efficiently. Gazelle Juvekar et al. (2018) and DELPHI Mishra et al. (2020) have used SS to replace FHE for higher online processing speed; nevertheless, FHE is still needed during offline pre-processing to prepare the share values.

The remaining challenge is handling nonlinear operators such as ReLU and MaxPool. Garbled Circuit (GC) Yao (1986) takes the encrypted boolean representations of the two parties' input data and securely computes an arbitrary boolean function composed of AND and XOR gates. Most existing PPML systems use GC to compute nonlinear operators Juvekar et al. (2018); Liu et al. (2017); Mishra et al. (2020); Mohassel & Zhang (2017); Rouhani et al. (2018). GC processing requires heavy cryptographic computations (e.g., AES encryption) and frequent communication between the two parties, and thus incurs significant overheads compared to insecure nonlinear processing.

**PPML protocol.** In this work, we follow the overall execution flow of the state-of-the-art PPML system, DELPHI Mishra et al. (2020). The protocol consists of two phases: an offline pre-processing phase and an online inference phase. During offline pre-processing, we use FHE to generate the secret shares that will be used by the online SS scheme to compute the linear operators. Specifically, for a linear operator $y_i = W_i \cdot x_i$, the user and the model owner each randomly sample a vector, $r_i$ and $s_i$, respectively. The user sends $\text{Enc}(r_i)$ (the encryption of $r_i$) to the model owner, who homomorphically computes $\text{Enc}(W_i \cdot r_i - s_i)$ using FHE.
The user receives and decrypts this result to obtain $W_i \cdot r_i - s_i$. We also generate the GC boolean function for the nonlinear operators. For example, for the ReLU operator $x_{i+1} = \text{ReLU}(y_i)$, the user creates a GC function $f(a) = \text{ReLU}(a + (W_i \cdot r_i - s_i)) - r_{i+1}$ and sends it to the model owner.

In the online inference phase of a linear operator, the two parties start with each holding a share of the input, i.e., $r_i$ held by the user and $x_i - r_i$ held by the model owner. These shares either come from the results of the previous operator, or, for the first layer, the user calculates $x_i - r_i$ and provides it to the model owner. The model owner then evaluates $W_i \cdot (x_i - r_i) + s_i$ on its share. The user already has $W_i \cdot r_i - s_i$ from the pre-processing phase. We can verify that these two values are exactly shares of the output, i.e., they sum to $W_i \cdot x_i = y_i$. Thus the induction invariant is maintained.

For nonlinear operators, online inference uses GC. Take ReLU as an example, $x_{i+1} = \text{ReLU}(y_i)$. The model owner has the GC function $f(a)$ from the offline phase. It sets $a$ to its share of $y_i$, i.e., $a = W_i \cdot (x_i - r_i) + s_i$, and then evaluates $f(a)$ (involving heavy computation and communication) to obtain $\text{ReLU}(y_i) - r_{i+1} = x_{i+1} - r_{i+1}$, which is a valid share of the input to the next operator. The user holds the other share, $r_{i+1}$.
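As an illustration of the share arithmetic above, here is a minimal numeric sketch of the online linear step. This is a toy example: plaintext integers stand in for the offline FHE computation, variable names follow the protocol description, and the modular arithmetic of a real implementation is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
W = rng.integers(-5, 5, size=(d, d))  # model owner's plaintext weights
x = rng.integers(-5, 5, size=d)       # user's private input

# Offline phase (done under FHE in the real protocol): the user samples r,
# the owner samples s, and the user ends up holding W @ r - s without the
# owner ever seeing r in the clear.
r = rng.integers(-5, 5, size=d)
s = rng.integers(-5, 5, size=d)
user_share = W @ r - s

# Online phase: the owner holds the masked input x - r and evaluates locally.
owner_share = W @ (x - r) + s

# The two shares reconstruct exactly the linear output W @ x.
assert np.array_equal(user_share + owner_share, W @ x)
print(user_share + owner_share)
```

The same trick does not extend to ReLU, since comparison is not linear in the shares; this is precisely why the nonlinear operators fall back to garbled circuits and dominate the online latency.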
### 2.2 RELATED WORK

In the above PPML protocol, SS makes the online computation of linear operators almost as cheap as the original insecure processing, and GC offers general compute capability that supports unmodified nonlinear operators to ensure the same accuracy level. However, the use of GC causes severe communication overheads, which become the main performance bottleneck (over 300× slower than linear computations in DELPHI Garimella et al. (2022)). It is therefore necessary to focus on reducing the cost of nonlinear operators to speed up PPML processing.

Recently there have been various proposals to address this issue. Some designs change the nonlinear operators to more crypto-friendly alternatives. DELPHI Mishra et al. (2020) replaced part of the nonlinear operators with linear approximations to exploit the latency-accuracy tradeoff, using neural architecture search (NAS) techniques. SAFENet Lou et al. (2021) also used NAS to apply approximation, but at a finer granularity to reduce the accuracy impact. Circa Ghodsi et al. (2021) reconstructed ReLU as a sign test (by GC) plus a multiplication (by SS) to reduce the processing cost.

Other solutions reduce (i.e., prune) the number of ReLU operators in existing neural network structures. CryptoNAS Ghodsi et al. (2020) rearranged the ReLU operators and used a macro-search algorithm, ENAS, to search for a network with fewer nonlinear operators. Sphynx Cho et al. (2022a) instead used micro-search approaches to design its building blocks more thoroughly and achieve higher accuracy. DeepReDuce Jha et al. (2021) pruned the model in a more fine-grained manner at the channel level, and further improved accuracy through knowledge distillation. SNL Cho et al. (2022b) was inspired by the parameterized ReLU and realized pixel-level ReLU pruning. SENet Kundu et al. (2023a) proposed the concept of ReLU sensitivity, which distinguishes the importance of different nonlinear operators and realizes automated ReLU pruning.

Figure 1: Main building blocks of the Seesaw search space. The nonlinear ReLU operator is always placed after an element-wise ADD to save nonlinear computations.

### 3 DESIGN

Previous PPML designs that aimed to reduce the nonlinear cost (Section 2.2) suffer from a common limitation: they merely reduce the ReLU operators without reconsidering the overall network architecture, which inevitably decreases the representation capacity of the model. Since the representation capacity is jointly determined by both the linear and nonlinear operators, our key idea is to compensate for the accuracy loss caused by the reduced nonlinear operators by (1) adding more linear operators to the model, and (2) reusing the remaining nonlinear outputs as much as possible. We thus propose Seesaw, a one-shot NAS method that automatically searches for crypto-friendly model architectures for PPML with the best accuracy under a given budget for nonlinear operators, i.e., the ReLU budget.

### 3.1 DESIGN SPACE

Seesaw uses two ways to compensate for the loss of nonlinear operators. Accordingly, two building blocks are added to its search space, as illustrated in Figure 1. Figure 1a shows a sampling block, which substitutes a traditional Conv-ReLU block by enabling multiple parallel branches with various linear operators Szegedy et al. (2015; 2016). The branches can be convolutions with different kernel sizes (e.g., $1 \times 1$, $3 \times 3$, $5 \times 5$), depth-wise separable convolutions, dilated convolutions, pooling, or even a direct skip connection. These independent branches enhance the model's representation capacity by extracting multiple features at different scales. While Sphynx Cho et al. (2022a) and CryptoNAS Ghodsi et al. (2020) also used up to four linear operators in a block, our sampling block is designed to contain many more branches to increase expressivity. Note that all branches keep the original data shape and size, so their outputs can be weighted and combined with an element-wise ADD. The final ReLU may be pruned, i.e., replaced with an Identity operator, to meet the overall ReLU budget, as described in Section 3.3.

Figure 1b shows an aggregation block, which aggregates the outputs of previous ReLU operators in the model. The goal of such aggregation is to maximally reuse the limited ReLU outputs remaining in the pruned model, not only in the immediately succeeding block, but potentially in all the following blocks, as shown in the overall supermodel in Figure 2. This helps prevent feature loss and overfitting Szegedy et al. (2015; 2016). Aggregating the ReLU outputs at different positions of the network in this way is another means of introducing extra nonlinearity into the model. Each of these previous ReLU outputs first passes through a convolution kernel to reduce the resolution, and an element-wise ADD operator then aggregates these data before feeding them to the final ReLU activation.

We highlight two key points in both building blocks. First, both blocks place the (possible) nonlinear ReLU after an ADD operator. In contrast to the CONCAT operators used in CryptoNAS Ghodsi et al. (2020) and Sphynx Cho et al. (2022a), ADD results in a smaller data size after aggregation, and thus reduces the amount of nonlinear operations for the following ReLU.
Indeed, because Seesaw intentionally employs a large number of branches, using CONCAT would lead to significantly higher cost for each ReLU (by a factor equal to the branch count), and thus limit the total number of ReLU operators allowed in the model. For example, with $k$ branches each producing an $H \times W \times C$ feature map, a ReLU after ADD processes $HWC$ values, whereas a ReLU after CONCAT would process $kHWC$ values. We present a detailed comparison in Section 4.2 to demonstrate the benefit. Second, in both blocks, the branches are accumulated according to learnable weight parameters $\beta_{i,j}$. We incorporate the training of these weights into the overall training process rather than determining them separately afterwards, as discussed in Section 3.3. The weighted output also helps stabilize the training process by suppressing gradient explosion and vanishing.

Finally, the sampling blocks and the aggregation blocks are used to construct an over-parameterized supermodel in Seesaw (Figure 2). Each aggregation block is preceded by several sampling blocks (i.e., $m_i$ of them). The output of each ReLU is forwarded to all the aggregation blocks after it through residual connections He et al. (2016), ensuring maximal nonlinear reuse. The use of residual connections not only avoids information loss and enables nonlinear operator reuse, but also speeds up training by preventing vanishing gradients. Several prior designs like CryptoNAS Ghodsi et al. (2020) and Sphynx Cho et al. (2022a) also used residual connections, mainly following existing topologies like ResNet He et al. (2016) and NASNet Zoph et al. (2018). We emphasize that Seesaw uses many more residual connections than the original insecure network model, and for a completely different purpose: reusing the ReLU outputs to increase the representation capacity.

### 3.2 Pruning Methods

From the design space in Section 3.1, we see that the supermodel contains the following parameters:

1. The weight $\beta_{i,j}$ of the linear operator on the $j$-th branch of the $i$-th sampling/aggregation block.
2. The weight $\alpha_i$ (binarized to $\{0, 1\}$) that decides the nonlinear operator (ReLU or Identity) of the $i$-th sampling block.

Seesaw applies pruning to the $\beta_{i,j}$ parameters of sampling blocks (but not of aggregation blocks; see Section 4.3) and to the $\alpha_i$ parameters of sampling blocks.

**Pruning linear branches.** Generally, we need to prune the branches in each block to derive the final network architecture from the over-parameterized model. The pruning approach in traditional NAS for the insecure scenario is conservative, usually keeping only one of the multiple branches in each block, mainly to restrict the model size and computation demand Cai et al. (2019); Liu et al. (2018); Wu et al. (2019). In PPML, however, the computational bottleneck does not lie in the linear operators, so Seesaw can retain more branches in each block without worrying about latency, increasing its representation capacity to compensate for accuracy loss. On the other hand, pruning unimportant branches helps prevent overfitting and improves generalization; more linear operators do not guarantee improved accuracy, and this remains an important issue in the PPML scenario. Therefore, Seesaw applies pruning to the branches in each block. Specifically, during training, Seesaw adopts a sparsity constraint that forces the branch weights $\beta_{i,j}$ within the same block to become sparse. However, we cannot directly use the typical L1/L2 regularization, which encourages all weights to be small.
As discussed above, we still want to keep the important branches to improve the representation capacity, while discarding only the unimportant ones. We therefore prefer some weights to be large while the others are small, i.e., a distribution with large variance. We thus propose a new penalty function $L_{\text{lin}}$ that maximizes the variance of the branch weights in each block,

$$L_{\text{lin}} = -\sum_i \sigma^2\big(\{\beta_{i,j}\}_{\forall j}\big), \tag{1}$$

where $\sigma^2(\cdot)$ computes the variance. After training, we prune the branches whose weights are smaller than an empirically determined threshold, i.e., 0.001 in our experiments. Section 4.3 reveals the relationship between the pruning threshold and the model accuracy.

**Pruning nonlinear operators.** We also need to prune the total number of nonlinear ReLU operators in the model, by selectively enabling a subset of the sampling blocks to use ReLU while the others use Identity operators. This is controlled by the weight $\alpha_i$ for the $i$-th sampling block. Similar to ProxylessNAS Cai et al. (2019), these parameters are binarized every epoch to ensure that exactly one of ReLU and Identity is activated while searching. The total nonlinear count of the supermodel is calculated from whether each weight is enabled and the size of the corresponding intermediate data. This count is used to penalize the model for deviating from the given ReLU budget $B_{\text{ref}}$,

$$L_{\text{nonlin}} = \left| \sum_i \alpha_i H_i W_i C_i - B_{\text{ref}} \right|, \tag{2}$$

where $H_i$, $W_i$, and $C_i$ are the height, width, and number of channels of the feature map at the $i$-th layer, respectively. After training, the ReLU operators are kept or pruned according to the binarized weights.

### 3.3 Search Strategy

The loss function of Seesaw incorporates the linear and nonlinear pruning methods in Section 3.2,

$$L = L_{\text{CE}} + \lambda_{\text{lin}} \times L_{\text{lin}} + \lambda_{\text{nonlin}} \times L_{\text{nonlin}}, \tag{3}$$

where $\lambda_{\text{lin}}$ and $\lambda_{\text{nonlin}}$ are weighting hyperparameters, and $L_{\text{lin}}$ and $L_{\text{nonlin}}$ are from Equations (1) and (2). $L_{\text{CE}}$ is the standard cross-entropy loss: given a sampled network $M$ and a data-label pair $(X, Y)$, the cross entropy between the prediction $M(X)$ and the ground-truth label $Y$ is $L_{\text{CE}} = \text{CE}(M(X), Y)$. This loss function allows us to balance the model accuracy against the ReLU budget by simultaneously optimizing the loss value and regularizing the linear and nonlinear costs.

For the architecture search strategy, traditional NAS typically constructs an over-parameterized supermodel encompassing all building blocks and potential branches in the search space. Such a supermodel contains numerous architecture parameters (e.g., one per branch and per block) that must first be sampled to generate a specific network to train Cai et al. (2019); Liu et al. (2018); the resulting search space is so large that training converges slowly. Some approaches instead train directly on the dataset, then optimize via a specific search algorithm, and finally retrain Tan et al. (2019); Zoph et al. (2018). This process can still be computationally intensive and time-consuming.
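As a concrete reading of Equations (1)-(3), the following PyTorch-style sketch assembles the combined objective. This is an illustrative sketch rather than the authors' code: the tensor shapes, the number of blocks, and the toy gate values are placeholders, while the 27 branches per block, the 36,684 ReLU budget, and the initial $\lambda$ values follow the numbers reported in Section 4.

```python
import torch
import torch.nn.functional as F

def seesaw_loss(logits, labels, betas, alphas, feat_sizes, B_ref,
                lam_lin=0.001, lam_nonlin=0.1):
    """Combined Seesaw objective: cross-entropy plus the two pruning penalties.

    betas:      list of 1-D tensors, branch weights beta_{i,j} per sampling block
    alphas:     1-D {0,1} tensor of binarized gates alpha_i (ReLU kept or pruned)
    feat_sizes: 1-D tensor holding H_i * W_i * C_i per sampling block
    """
    ce = F.cross_entropy(logits, labels)                       # L_CE
    l_lin = -sum(b.var() for b in betas)                       # Eq. (1): maximize variance
    l_nonlin = torch.abs((alphas * feat_sizes).sum() - B_ref)  # Eq. (2): budget penalty
    return ce + lam_lin * l_lin + lam_nonlin * l_nonlin        # Eq. (3)

# Toy usage with placeholder shapes (6 sampling blocks, 27 branches each).
logits = torch.randn(8, 100, requires_grad=True)
labels = torch.randint(0, 100, (8,))
betas = [torch.rand(27, requires_grad=True) for _ in range(6)]
alphas = torch.tensor([0., 1., 0., 1., 1., 1.])
feat_sizes = torch.tensor([32*32*64, 16*16*128, 16*16*128,
                           8*8*256, 8*8*256, 4*4*512], dtype=torch.float)
loss = seesaw_loss(logits, labels, betas, alphas, feat_sizes, B_ref=36684.)
loss.backward()
```

In the full procedure, the network weights and the $\beta_{i,j}$ are updated with this loss on the training set, while the binarized $\alpha_i$ gates are updated on the validation set with the network weights frozen, matching the alternating scheme of Algorithm 1 described next.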
Seesaw uses a novel search strategy, which includes only the existence of the nonlinear operators (i.e., $\alpha_i$) in the search space, and treats the branch weights of the large number of linear operators (i.e., $\beta_{i,j}$) like ordinary model weights, updating them during training without extra sampling. This greatly reduces the search space and accelerates convergence when searching for the best network architectures.

Algorithm 1 shows the pseudocode of the Seesaw search algorithm. Seesaw takes as input the training dataset $D_T$, the validation dataset $D_V$, and the nonlinear budget $B_{\text{ref}}$. It trains the supermodel and searches the network architecture iteratively in a continuous loop until convergence. In each iteration, it samples a network architecture from the search space (i.e., samples the $\alpha_i$ values at Line 10) and uses the training dataset to train the network weights as well as the branch weights $\beta_{i,j}$ of the sampled model (Lines 9 to 12). After a certain number of warm-up training epochs, it starts to train the architecture parameters, i.e., the NAS modules (Lines 2 to 8). The NAS modules are sampled to determine $\alpha_i$, i.e., the existence of each ReLU operator (Line 4). We use the overall loss $L$ from Equation (3) to update the NAS modules (Lines 6 and 7), with the network weight parameters frozen; the use of the validation set here enhances the robustness of the searched architecture. Once converged, the optimized network architectures can be derived from the trained supermodel.

### 4 Evaluation

We compare Seesaw with several previous PPML methods, including DELPHI Mishra et al. (2020), CryptoNAS Ghodsi et al. (2020), Sphynx Cho et al. (2022a), SNL Cho et al. (2022b), and SENet Kundu et al. (2023a), as well as unmodified baseline models. The baseline models are ResNet-18 and ResNet-34 He et al. (2016), evaluated on CIFAR100 Krizhevsky et al. (2009) and ImageNet Deng et al. (2009). We complete the search, training, and testing on machines with an Intel Xeon Gold 6145 CPU, 8 NVIDIA PH402 GPUs, and 1 Gbps Ethernet. We leverage the DELPHI framework to perform real performance experiments. We use 100 epochs for searching and 150 epochs for retraining, with a learning rate decreasing from 0.05 to 0. $\lambda_{\text{lin}}$ and $\lambda_{\text{nonlin}}$ are initialized to 0.001 and 0.1, respectively.

### 4.1 Comparison with State-of-the-Art

Figures 3 and 4 compare Seesaw with state-of-the-art PPML methods on CIFAR100 and ImageNet, respectively. Following the common practice in previous work, we represent the runtime latency by the number of ReLU operators. The results clearly demonstrate the efficiency of Seesaw in terms of the Pareto frontier between classification accuracy and runtime latency. The ability to achieve higher accuracy with fewer nonlinear operators makes Seesaw a highly efficient and promising approach for PPML inference.

Specifically, on the CIFAR100 dataset (Figure 3), at the accuracy level of 74%, Seesaw needs only about 36.8K ReLU operators, $1.36\times$ and $2.71\times$ fewer than the next best designs, SENet and SNL. On the other hand, in an iso-latency comparison at 50K ReLUs, Seesaw improves the accuracy to 75.52%, which is 0.79% better than SENet and 1.45% better than SNL. The improvements over SENet are relatively small, and sometimes Seesaw has worse accuracy than SENet at high ReLU budgets.
This is because SENet applies more fine-grained, pixel-level ReLU pruning, which reduces the accuracy loss but requires more complex search and training methods. On ImageNet, Seesaw outperforms the other proposals more significantly. At iso-accuracy of 71%, Seesaw uses $1.68\times$ fewer ReLU operations than SENet. At iso-latency of 1000K ReLU operations, Seesaw achieves 75.75% accuracy, 4.59% higher than SENet.

Figure 7: Comparison between the ADD and CONCAT operators on CIFAR100.

Figure 8: Branch weight variance distribution of sampling blocks at different locations on CIFAR100, under a ReLU budget of 36,684.

Note that when the ReLU budget is abundant, Seesaw can even outperform the accuracy of the original insecure ResNet models. This is expected because Seesaw uses more linear operators. In the insecure scenario, such accuracy gains come at the cost of longer inference latency; in PPML, however, the latency is dominated by ReLU, of which Seesaw has a similar or smaller count.

Figures 5 and 6 repeat the above comparisons using real execution performance, i.e., measured inference latencies. Even with the extra cost of computing more linear operators, Seesaw still achieves better accuracy-latency tradeoffs than the baselines. The general trend in these figures is similar to the previous results based on ReLU counts.

### 4.2 Ablation Study: ADD vs. CONCAT

We compare the performance of the ADD and CONCAT operators in Figure 7. We design another sampling block that is similar to Figure 1a but uses CONCAT instead of ADD, apply the same Seesaw search algorithm to find the best network architecture under different ReLU budgets, and retrain the new models. For a fair comparison, we use the same ReLU budgets for the ADD- and CONCAT-based models. The figure shows that the CONCAT-based models achieve good accuracies, but still not as high as the ADD-based models, with an average gap of 7.0%. The accuracy difference is particularly significant when the ReLU budget is tight. Essentially, using ADD operators allows for more linear operators in the model, and thus higher expressivity, without consuming extra nonlinear operations, which is more efficient.

### 4.3 Ablation Study: Pruning Methods

Section 3.2 introduces how we prune the linear operator branches in each sampling block. In our experiments, we initialize 27 branches of different linear operators in every sampling block. We evaluate three different pruning schemes: keeping all branches (All), keeping a fixed number of branches with the highest weights (Fixed-1, Fixed-4, Fixed-11), and keeping the branches whose weights exceed a threshold (Threshold-0.1, Threshold-0.001, Threshold-0.00001). As shown in Table 1, All does not achieve the highest accuracy, while our Threshold-0.001 method works best; removing branches with low contributions reduces the risk of overfitting. Comparing All with the several Fixed approaches shows the effectiveness of using more linear operators for feature extraction to improve the model representation capacity and thus the accuracy.

Table 1: Model accuracy comparison of pruning methods for sampling blocks under different ReLU budgets on CIFAR100.

| # ReLU | Pruning method | Acc. (%) |
|--------|-------------------|----------|
| 36,684 | All | 72.63 |
| | Fixed-1 | 66.52 |
| | Fixed-4 | 72.02 |
| | Fixed-11 | 72.53 |
| | Threshold-0.1 | 69.47 |
| | Threshold-0.001 | **73.83** |
| | Threshold-0.00001 | 73.06 |
However, **Fixed** cannot adapt itself to sampling blocks at different locations. According to Figure 8, the branch weights of later sampling blocks tend to have higher variances, which means fewer branches should be retained there. Different sampling blocks therefore prefer different numbers of linear operators, which motivates using a threshold to prune the branches.

We further conduct an ablation study on the impact of the nonlinear reuse residual links, i.e., the input paths of the aggregation blocks. The results are listed in Table 2, where three methods are tested: using no nonlinear reuse (None), keeping half (50%) of the residual links with the highest weights (Half), and keeping all links (All). The results indicate that the All scheme achieves the highest accuracy under every ReLU budget, while None exhibits accuracy drops ranging from 3.4% to 6.5%. As a result, unlike sampling blocks, which apply pruning, aggregation blocks in Seesaw keep all reuse links activated. These results underscore the efficacy of nonlinear reuse in Seesaw.

Table 2: Model accuracy comparison of nonlinear reuse methods under different ReLU budgets on CIFAR100.

| # ReLU | Reuse method | Acc. (%) |
|----------|--------------|----------|
| Budget 1 | None | 70.20 |
| | Half | 72.64 |
| | All | **73.83** |
| Budget 2 | None | 72.53 |
| | Half | 75.01 |
| | All | **75.89** |
| Budget 3 | None | 69.98 |
| | Half | 75.41 |
| | All | **76.95** |
| Budget 4 | None | 72.33 |
| | Half | 76.51 |
| | All | **77.25** |

### 4.4 Network Architecture Analysis

Finally, we illustrate the distribution of the ReLU operators in the optimized network architectures discovered by Seesaw. Figure 9 shows the corresponding weight values for ReLU and Identity at different sampling blocks in two networks with different ReLU budgets. The sampling blocks at the later stages of the network tend to have higher ReLU weights and thus keep their ReLU operators. This observation aligns with the ReLU sensitivity observed in SENet Kundu et al. (2023a). For example, Model-1 in Figure 9a, with a small ReLU budget, keeps only the last two nonlinear operators. However, Seesaw can also retain some earlier nonlinear operators if the ReLU budget allows, in order to boost accuracy; for example, Model-2 in Figure 9b preserves the ReLU at location 3. In contrast, Figure 8 shows that the variance of the sampling block branch weights tends to be higher toward the back end of the network, reflecting that more linear operators are pruned under the threshold there.

Combining the above two trends yields an interesting observation: an optimized PPML network architecture needs to preserve sufficient nonlinearity in the later blocks of the model, while at the earlier stages it can instead increase the linear computation to raise the representation capacity. The two patterns complement each other well, once again validating the design principle of Seesaw.

### 5 Conclusions

In this paper, we present Seesaw, a neural architecture search scheme tailored to private machine learning inference. Seesaw compensates for the negative accuracy impact of reducing expensive nonlinear operators by adding more linear computations and reusing existing nonlinear results.
It incorporates novel pruning and search approaches to efficiently determine the optimized amounts of extra computation and data reuse. Our evaluation shows that Seesaw achieves higher accuracy with fewer nonlinear operations compared to previous proposals.

REFERENCES

Aliyun. Alibaba Cloud. https://ai.aliyun.com/. Accessed: August 2021.

Amazon Web Services. Deep Learning on AWS. https://aws.amazon.com/deep-learning/. Accessed: August 2021.

Microsoft Azure. Machine Learning Service, Microsoft Azure. https://azure.microsoft.com/en-us/services/machine-learning/. Accessed: August 2021.

Ahmad Al Badawi, Jin Chao, Jie Lin, Chan Fook Mun, Sim Jun Jie, Benjamin Hong Meng Tan, Xiao Nan, Khin Mi Mi Aung, and Vijay Ramaseshan Chandrasekhar. Towards the AlexNet Moment for Homomorphic Encryption: HCNN, the First Homomorphic CNN on Encrypted Data with GPUs. arXiv preprint arXiv:1811.00778, 2018.

Baidu. Baidu AI Cloud. https://intl.cloud.baidu.com/. Accessed: August 2021.

Alon Brutzkus, Ran Gilad-Bachrach, and Oren Elisha. Low Latency Privacy Preserving Inference. In 36th International Conference on Machine Learning (ICML), pp. 812–821, 2019.

Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.

Nishanth Chandran, Divya Gupta, Sai Lakshmi Bhavana Obbattu, and Akash Shah. SIMC: ML inference secure against malicious clients at semi-honest cost. In 31st USENIX Security Symposium (USENIX Security 22), pp. 1361–1378, 2022.

Guoxing Chen, Sanchuan Chen, Yuan Xiao, Yinqian Zhang, Zhiqiang Lin, and Ten H. Lai. SgxPectre: Stealing Intel Secrets from SGX Enclaves via Speculative Execution. In 2019 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 142–157, 2019.

Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, and Chinmay Hegde. Sphynx: A deep neural network design for private inference. IEEE Security & Privacy, 20(5):22–34, 2022a.

Minsu Cho, Ameya Joshi, Brandon Reagen, Siddharth Garg, and Chinmay Hegde. Selective network linearization for efficient private inference. In International Conference on Machine Learning, pp. 3947–3961. PMLR, 2022b.

Edward Chou, Josh Beal, Daniel Levy, Serena Yeung, Albert Haque, and Li Fei-Fei. Faster CryptoNets: Leveraging sparsity for real-world encrypted inference. arXiv preprint arXiv:1811.09953, 2018.

Google Cloud. Deep Learning VM, Google Cloud. https://cloud.google.com/deep-learning-vm/. Accessed: August 2021.

Ivan Damgård, Valerio Pastro, Nigel Smart, and Sarah Zakarias. Multiparty computation from somewhat homomorphic encryption. In Advances in Cryptology—CRYPTO 2012: 32nd Annual Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2012. Proceedings, pp. 643–662. Springer, 2012.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, and Fei-Fei Li. ImageNet: A Large-Scale Hierarchical Image Database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, 2009.

Nathan Dowlin, Ran Gilad-Bachrach, Kim Laine, Kristin Lauter, Michael Naehrig, and John Wernsing. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy. In 33rd International Conference on Machine Learning (ICML), pp. 201–210, 2016.

Karthik Garimella, Zahra Ghodsi, Nandan Kumar Jha, Siddharth Garg, and Brandon Reagen. Characterizing and optimizing end-to-end systems for private inference. arXiv preprint arXiv:2207.07177, 2022.

Craig Gentry. Fully Homomorphic Encryption Using Ideal Lattices.
In 41st Annual ACM Symposium on Theory of Computing (STOC), pp. 169–178, 2009.
yBZd6mCWXd
However, the authors omit comparing it with the version that removes alpha in the ablation study (refer to Table 7). This omission makes it challenging to discern the contribution of this modification.
WI3D: Weakly Incremental 3D Detection via Visual Prompts

Anonymous authors
Paper under double-blind review

Abstract

Class-incremental 3D object detection demands a 3D detector to locate and recognize novel categories in a streaming fashion, while not forgetting its previously learned knowledge. However, existing methods require delicate 3D annotations for learning novel categories, resulting in significant labeling cost. To this end, we explore a label-efficient approach called Weakly Incremental 3D object Detection (WI3D), which teaches a 3D detector to learn new object classes using cost-effective 2D visual prompts. To do so, we propose a framework that integrates (i) a class-agnostic pseudo-label refinement module for generating high-quality 3D pseudo labels, (ii) cross-modal knowledge transfer for representation learning of novel classes, and (iii) reweighting knowledge distillation for preserving old-class information. Extensive experiments under different incremental settings on both SUN RGB-D and ScanNet show that our approach learns to detect novel classes well while effectively preserving knowledge of base classes, and surpasses baseline approaches in WI3D scenarios.

1 Introduction

Existing 3D detectors (Qi et al., 2019; Misra et al., 2021; Rukhovich et al., 2022; Wang et al., 2022b) have achieved remarkable performance in detecting predefined classes in static 3D environments. However, novel-class objects emerge when existing methods are deployed in in-the-wild, dynamic environments. To generalize a model to novel classes, a straightforward approach would be to combine existing datasets with novel-class objects and train the model from scratch. However, this becomes impractical when frequent updates are necessary, as training on the entire dataset is time-consuming (Cermelli et al., 2022). Meanwhile, fine-tuning the detector on novel-class samples alone typically leads to catastrophic forgetting of base classes, caused by model parameters shifting to accommodate new samples without access to previous ones. Recently, incremental learning, which studies how to incorporate novel classes by training only on novel-class samples while preventing catastrophic forgetting, has become prominent in various 2D and 3D vision tasks (PourKeshavarzi et al., 2021; Wang et al., 2022a; Zhao & Lee, 2022; Yang et al., 2023).

Prior works (Zhao & Lee, 2022; Zhao et al., 2022; Liang et al., 2023) have made initial attempts at class-incremental 3D object detection using delicate 3D annotations for novel-class objects. However, acquiring large amounts of fully labeled point cloud data is prohibitively expensive due to the difficulty of both 3D data collection and annotation (Ren et al., 2021). Inspired by the human visual system, which excels at learning new 3D concepts through 2D images, we propose to incrementally introduce novel concepts to a 3D detector with visual prompts generated by a cost-free 2D teacher, rather than revisiting 3D annotations for both base and novel classes, as shown in Fig. 1. We term this new task Weakly Incremental 3D object Detection (WI3D); it incrementally updates the model without any manual annotation for the novel classes. To the best of our knowledge, ours is the first attempt to address WI3D, an unexplored yet important problem.
WI3D poses two major challenges: 1) how to incrementally introduce novel classes to a 3D detector through 2D visual prompts, and 2) how to retain base-class knowledge without revisiting any 3D annotations. Recent studies (Lu et al., 2023; Peng et al., 2022) have made initial attempts to directly generate 3D pseudo labels from 2D predictions. However, these approaches do not sufficiently address the noise within the pseudo labels.

Figure 1: Illustration of previous class-incremental 3D object detection (left) and WI3D (right). Previous class-incremental 3D object detection methods rely heavily on the continual provision of human annotations on the point cloud for novel classes. In contrast, we explore WI3D, a new task that introduces novel concepts to a 3D detector through 2D images to reduce the heavy cost of annotating the point cloud.

The existence of noisy, inaccurate, and incomplete pseudo labels severely deteriorates detection performance in WI3D. Furthermore, widely adopted knowledge distillation techniques (Zhao & Lee, 2022; Zhao et al., 2022) treat different Regions-of-Interest equally, failing to learn discriminative region features in sparse and cluttered point cloud scenes.

To address the above issues, we propose a novel framework for WI3D with both intra- and inter-modal teachers, where the intra-modal teacher is a base 3D detector and the inter-modal teacher is a 2D foundation model. Our framework is supervised by 1) the pseudo labels generated by both teachers and 2) concept representation learning in feature space. To obtain accurate pseudo labels, we propose a class-agnostic pseudo-label refinement module that learns the general, intrinsic latent relationship between bounding boxes and the corresponding point cloud. In addition to explicitly teaching the current detector to localize novel objects, we also leverage implicit supervision in feature space. We propose an auxiliary cross-modal knowledge transfer for WI3D, which leverages bipartite matching to transfer color- and texture-aware information from the visual prompts to enhance the 3D object representation. Finally, we explore a reweighting knowledge distillation approach that discerns and selects valuable knowledge of existing classes, leading to further performance improvements. To summarize, our contributions are as follows:

• We introduce Weakly Incremental 3D object Detection (WI3D), a novel task that generalizes base 3D detectors to novel classes via cost-effective visual prompts only.
• We analyze the challenges in WI3D and propose a robust and effective framework, which contains a class-agnostic pseudo label refinement module for high-quality pseudo label generation and concept representation learning in feature space for both base and novel classes.
• Extensive experiments on two benchmark datasets, SUN RGB-D and ScanNet, illustrate the effectiveness of our method under the low-cost setting of WI3D scenarios.

### 2 RELATED WORK

We first briefly review existing methods for class-incremental detection in 2D and 3D. Then, we introduce work on weakly-supervised 3D detection and the design of existing 3D object detectors.

**Class-Incremental Detection** explores the task of incrementally learning and detecting new classes over time while preserving the original capabilities of the detector as much as possible. Peng et al. (2020); Yang et al.
(2022); Feng et al. (2022); Liu et al. (2023b) have made great efforts toward class-incremental image object detection. Concurrently, several attempts at class-incremental 3D object detection have been proposed. SDCoT (Zhao & Lee, 2022) proposes a static-dynamic co-teaching method for class-incremental 3D object detection. DA-CIL (Zhao et al., 2022) proposes a 3D domain-adaptive class-incremental object detection framework with a dual-domain copy-paste augmentation method to adapt the domain gradually. The recent work I3DOD (Liang et al., 2023) proposes a task-shared prompts mechanism to learn the matching relationships between object localization information and category semantic information for class-incremental 3D object detection. In this paper, we explore a new paradigm, WI3D, to study how 2D knowledge enables a 3D detector to learn novel objects continually, without the need for labor-consuming 3D annotations for the novel classes.

**Weakly-supervised 3D Detection** studies ways to train a 3D detector without 3D instance annotations, using, e.g., object center annotations (Xu et al., 2022) or scene-level class labels (Ren et al., 2021). The recent work OV-3DET (Lu et al., 2023) introduces open-vocabulary 3D object detection, which directly utilizes a pre-trained 2D model to generate pseudo labels for a 3D detector. However, OV-3DET mainly focuses on the ability to associate each 3D instance with an appropriate text prompt, and cannot handle the problems of incremental localization and incremental semantic recognition of emerging objects in the scene. In addition, how to acquire accurate 3D pseudo labels from 2D predictions remains unexplored in OV-3DET. In this paper, we study the potential of 2D visual prompts in the weakly incremental 3D detection scenario by learning from denoised pseudo labels and regional concept representations.

**3D Object Detectors** localize objects of interest from a 3D scene input. Qi et al. (2019); Zhang et al. (2020); Misra et al. (2021); Liu et al. (2021) operate directly on point clouds for 3D object detection. VoteNet (Qi et al., 2019) and H3DNet (Zhang et al., 2020) achieve end-to-end 3D object detection based on sampling, grouping, and voting operators designed especially for point clouds. 3DETR (Misra et al., 2021) and GroupFree3D (Liu et al., 2021) extend the transformer (Vaswani et al., 2017) architecture to 3D object detection. In this paper, we adopt the modified VoteNet (Zhao & Lee, 2022) as our detection backbone and explore how to extend a base 3D detector with the ability to detect objects of novel classes through 2D visual prompts.

### 3 METHODOLOGY

In this section, we first give the task setting of WI3D and characterize the noise of 3D pseudo labels directly generated from 2D predictions in Sec. 3.1 and Fig. 2. Then, we give an overview of our framework for WI3D in Sec. 3.2, which supervises the 3D detector with both denoised pseudo labels (Sec. 3.3) and representation learning in feature space (Sec. 3.4).

#### 3.1 Problem Definition

**Task Definition.** Given a base 3D detector capable of localizing and recognizing the base category set $C_{base}$ from a point cloud, WI3D extends its capacity to detecting a larger category set $C_{all} = C_{base} \cup C_{novel}$ with only visual prompts for $C_{novel}$ from off-the-shelf 2D models. Here, we assume that each 3D scene is reconstructed from RGB-D images.

**Coarse Pseudo Label Generation.** To generate novel-class pseudo labels for $S^{3D}$ without point-level annotation, we leverage visual prompts generated by a cost-free 2D teacher $T^{2D}$. Despite the one-to-one correspondence between points and pixels in each scan collected by RGB-D cameras, it is hard to directly localize a 3D object and estimate a tight 3D bounding box from a 2D one alone. Thus, we adopt a simple way to generate coarse 3D pseudo labels from 2D predictions, as mentioned in (Peng et al., 2022).
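As a hedged illustration of one such lifting scheme (a simplification, not the exact procedure of Peng et al., 2022), the sketch below back-projects the depth pixels inside a 2D box with the pinhole model and takes the axis-aligned bounds of the resulting points; the helper `lift_2d_box`, the depth map, and the intrinsics matrix `K` are illustrative inputs. Note that background pixels inside the 2D box are back-projected as well, which is exactly the source of the projection-migration noise analyzed next.

```python
# A minimal sketch of lifting a 2D detection to a coarse 3D pseudo box by
# back-projecting depth pixels inside the 2D box; an illustrative
# simplification, not the exact scheme of Peng et al. (2022).
import numpy as np

def lift_2d_box(depth, box2d, K):
    # depth: (H, W) depth map in meters; box2d: (u1, v1, u2, v2) pixel coords
    # K: (3, 3) camera intrinsics
    u1, v1, u2, v2 = box2d
    us, vs = np.meshgrid(np.arange(u1, u2), np.arange(v1, v2))
    zs = depth[vs, us]
    valid = zs > 0                                   # drop missing depth
    us, vs, zs = us[valid], vs[valid], zs[valid]
    xs = (us - K[0, 2]) * zs / K[0, 0]               # pinhole back-projection
    ys = (vs - K[1, 2]) * zs / K[1, 1]
    pts = np.stack([xs, ys, zs], axis=-1)            # camera-frame points
    lo, hi = pts.min(0), pts.max(0)
    return np.concatenate([(lo + hi) / 2, hi - lo])  # center(3) + size(3)

box3d = lift_2d_box(np.full((480, 640), 2.0), (100, 100, 200, 220),
                    np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]]))
```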
Figure 3: **Pipeline of our proposed WI3D.** We train a 3D student detector $S^{3D}$ on (1) 3D pseudo labels and (2) visual concept representations generated by both the inter-modal teacher $T^{2D}$ and the intra-modal teacher $T^{3D}$. To be specific, the 3D pseudo labels serve as direct supervision for $S^{3D}$ and are generated by denoising and mixing the predictions of $T^{2D}$ and $T^{3D}$. Concurrently, the visual concept representation learning includes cross-modal regional feature alignment for novel classes and reweighting knowledge distillation for base classes. (Color is used for visualization only.)

**Noise Analysis.** Projecting 2D bounding boxes into 3D space leads to the following noise:

- **Projection Migration:** As shown in Fig. 2(a), background pixels within 2D bounding boxes displace the 3D bounding box positions.
- **Scale Ambiguity:** As shown in Fig. 2(b), scale ambiguity is caused by sparse points captured from the surface of an object, leading to untight dimension estimates for the pseudo label.
- **Overlapped Boxes:** As shown in Fig. 2(c), duplicated estimations of the same instance occur when fusing multi-frame predictions (e.g., the red and yellow pseudo labels represent the predicted results from two consecutive frames, respectively).

#### 3.2 Pipeline Overview

Our pipeline is initialized with a base 3D detector $T^{3D}$, which is capable of detecting $C_{base}$. As shown in Fig. 3, to train a 3D detector $S^{3D}$ that both incrementally detects novel classes and retains base knowledge without any 3D annotations for either base or novel classes, we seek supervision from both pseudo labels and feature space. More accurate 3D pseudo labels are obtained for $S^{3D}$ by adopting the proposed pseudo label refinement module in Sec. 3.3. Additionally, the cross-modal knowledge transfer and reweighting knowledge distillation in Sec. 3.4 serve as feature-level supervision for class-incremental learning.

#### 3.3 Pseudo Label Refinement

**Class-Agnostic Pseudo Label Refinement.** To generate tight 3D pseudo labels from 2D predictions, we propose the class-agnostic Pseudo-label ReFinement (PRF) module (Fig. 4). To be specific, PRF includes a light-weight PointNet (Qi et al., 2017) encoder $PN$ to encode contextual information from the point cloud $\tilde{p}$, an MLP-based box encoder $MLP_{box}$ to encode positional information of the 3D bounding box $\tilde{b}^{3D}$, and a decoder $D$ to obtain the box offset:

$$\hat{b}^{3D} = \tilde{b}^{3D} + PRF(\tilde{p}, \tilde{b}^{3D}) = \tilde{b}^{3D} + D\left([PN(\tilde{p}); MLP_{box}(\tilde{b}^{3D})]\right). \tag{1}$$

Here, $[\cdot; \cdot]$ is the concatenation operation. Since PRF is class-agnostic, we can train it on base classes and apply it directly to novel classes during class-incremental training.
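The following is a minimal PyTorch sketch of PRF as defined in Eq. (1); the layer widths, the simplified PointNet-style encoder, and the 7-parameter box encoding (center, size, yaw) are illustrative assumptions rather than the exact architecture.

```python
# A minimal PyTorch sketch of the PRF module of Eq. (1): a light-weight
# PointNet-style encoder PN, an MLP box encoder, and an offset decoder D.
# Layer sizes and the 7-parameter box encoding are illustrative assumptions.
import torch
import torch.nn as nn

class PRF(nn.Module):
    def __init__(self, feat_dim=128, box_dim=7):  # box: center(3)+size(3)+yaw(1)
        super().__init__()
        # PN: per-point MLP followed by max pooling over points.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        # MLP_box: encodes the coarse 3D box parameters.
        self.box_mlp = nn.Sequential(
            nn.Linear(box_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        # D: decodes the concatenated features into a residual box offset.
        self.decoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, box_dim))

    def forward(self, points, coarse_box):
        # points: (B, N, 3) normalized point cloud; coarse_box: (B, box_dim)
        ctx = self.point_mlp(points).max(dim=1).values      # PN(p~)
        box_feat = self.box_mlp(coarse_box)                 # MLP_box(b~)
        offset = self.decoder(torch.cat([ctx, box_feat], dim=-1))
        return coarse_box + offset                          # b^ = b~ + offset

prf = PRF()
refined = prf(torch.randn(2, 1024, 3), torch.randn(2, 7))   # (2, 7)
```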
Figure 4: The design of Pseudo Label Refinement (PRF). Our proposed PRF first encodes the coarse 3D box coordinates and the normalized point cloud; these two features are then concatenated and used for box coordinate refinement. (Color is used for visualization only.)

We decouple and refine the parameters of each 3D box, including its location, dimensions, and orientation, thereby addressing the problems caused by projection migration and scale ambiguity. However, this does not address box overlap. To do so, we determine the validity of each 3D pseudo label by incorporating an additional Binary Classification Header (BCH). The inputs to BCH are the contextual feature from the point cloud and the box-aware embedding, and its output is a binary probability. We use the Hungarian algorithm (Kuhn, 1955) to match each base-class annotation with an input pseudo label, and coarse pseudo labels that do not match any annotation are masked by BCH. During inference, a bounding box is considered valid only when the probability of presence exceeds the probability of absence.

#### 3.4 Concept Representation Learning

Beyond the learnable denoising module for generating high-quality 3D pseudo boxes, we introduce auxiliary objectives to implicitly enhance the student's capacity for robust representation learning.

**Cross-modal Knowledge Transfer.** Because of object occlusion in 2D images, background objects are often included when extracting visual features of a specific region, which leads to confusing feature representations when these features serve as the source of supervision. To this end, we propose Cross-modal Knowledge Transfer (CKT) to help the model learn robust feature representations. Inspired by (Lin et al., 2022), we frame the cross-modal alignment assignment as a bipartite matching problem. In practice, we project the box estimates generated by $S^{3D}$ onto the corresponding image and build the matching matrix by computing the IoU between the projected 3D boxes and the 2D predictions generated by $T^{2D}$. The one-to-one matching can be formulated as maximizing the total IoU of matched pairs:

$$\max_{m} \sum_i \sum_j m_{ij} \cdot \text{IoU}\big(\text{Project}(B^{3D}_i), B^{2D}_j\big), \quad \text{s.t.} \;\; \sum_i m_{ij} = 1, \tag{2}$$

where $m_{ij} \in \{0, 1\}$ indicates whether a pair is matched, and IoU denotes the intersection over union. Then a pretrained image region encoder $E^{2D}$ is used to extract the feature of the novel-class instance $R^{2D}_j$ from the image:

$$F^{2D}_j = E^{2D}(R^{2D}_j). \tag{3}$$

For a 3D proposal $B^{3D}_m$ paired with a corresponding $B^{2D}_n$, we feed the proposal features $F^{3D\prime}_m$ of $B^{3D}_m$ into an MLP-based projection head $H^{3D}$ to encode the 3D proposal features into the same feature space as $F^{2D}_n$, denoted as $F^{3D}_m = H^{3D}(F^{3D\prime}_m)$. Finally, we design a dynamic instance-level knowledge transfer loss based on negative cosine similarity (Chen & He, 2020), assigning different weights to different instance samples:

$$L_{ckt} = - \sum_{m \in M, n \in N} \frac{F^{3D}_m}{\|F^{3D}_m\|_2} \cdot \frac{F^{2D}_n}{\|F^{2D}_n\|_2}. \tag{4}$$
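A minimal sketch of the one-to-one assignment of Eq. (2), solved with the Hungarian algorithm, and the negative-cosine transfer loss of Eq. (4). The IoU matrix between projected 3D boxes and 2D predictions is assumed precomputed, and the overlap threshold is an illustrative assumption.

```python
# A minimal sketch of CKT: Hungarian matching on the IoU matrix (Eq. 2)
# followed by the negative cosine similarity loss (Eq. 4). The threshold
# for discarding low-overlap pairs is an illustrative assumption.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def ckt_loss(f3d, f2d, projected_iou, iou_thresh=0.25):
    # f3d: (M, d) projected 3D proposal features H^3D(F'^3D)
    # f2d: (N, d) 2D region features E^2D(R^2D)
    # projected_iou: (M, N) IoU(Project(B3D_i), B2D_j)
    iou = projected_iou.detach().cpu().numpy()
    rows, cols = linear_sum_assignment(-iou)   # maximize total IoU (Eq. 2)
    keep = iou[rows, cols] > iou_thresh        # drop low-overlap pairs
    rows, cols = rows[keep], cols[keep]
    if len(rows) == 0:
        return f3d.sum() * 0.0                 # keep the graph connected
    sim = F.cosine_similarity(f3d[rows], f2d[cols], dim=-1)
    return -sim.mean()                         # negative cosine (Eq. 4)

loss = ckt_loss(torch.randn(8, 128), torch.randn(6, 128), torch.rand(8, 6))
```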
**Intra-modal Base Knowledge Distillation.** To alleviate forgetting, existing works (Zhao & Lee, 2022; Zhao et al., 2022) use knowledge distillation (Hinton et al., 2015) to preserve learned knowledge. However, previous work usually utilizes all predicted responses directly and treats knowledge equally, failing to capture discriminative region features in sparse and cluttered point cloud scenes. Here, we argue that not all features in the old model must be distilled, and we propose a Reweighting Knowledge Distillation scheme (RKD) that focuses on distilling the regions with greater influence. The distillation loss \( L_{rkd} \) is computed as:

\[
L_{rkd} = \frac{1}{K} \sum_{i \in \Phi_B} \alpha_i \left( \|F^S_i - F^T_i\|_2 + \|l^S_i - l^T_i\|_2 \right), \tag{5}
\]

where \( \Phi_B \) is the set of indices of base-class proposals and \( K \) is the total number of proposals; \( F_i \) and \( l_i \) are the features and classification logits of the \( i \)-th proposal, respectively; \( \alpha_i \) is a reweighting modulation factor obtained from the proposal objectness \( o_i \), i.e., \( \alpha_i = \frac{e^{o_i}}{\sum_{j=1}^{K} e^{o_j}} \); and the superscripts \( S \) and \( T \) denote the student and teacher models.

#### 3.5 Training Objectives

**Base Training.** We train the modified VoteNet (Zhao & Lee, 2022) on base-class annotations with the detection loss \( L_{det} \) (Qi et al., 2019), defined as

\[
L_{det} = \alpha_1 L_{vote} + \alpha_2 L_{obj} + \alpha_3 L_{box} + \alpha_4 L_{sem\text{-}cls}, \tag{6}
\]

where \( \alpha_1, \alpha_2, \alpha_3, \alpha_4 \) are set to 1, 0.5, 1, and 0.2, and \( L_{vote}, L_{obj}, L_{box}, L_{sem\text{-}cls} \) stand for vote regression, proposal objectness classification, box regression, and proposal semantic classification, respectively. Note that we also train PRF on \( C_{base} \) in this stage, where \( L_{PRF} = L_{box} \).

**Weakly Incremental Learning.** The supervision in WI3D is twofold: explicit detection training on the pseudo labels generated by \( T^{2D} \) and \( T^{3D} \) with \( L_{det} \), and feature-space learning with the instance-level knowledge transfer loss \( L_{ckt} \) and the base-class knowledge distillation loss \( L_{rkd} \). The overall loss is

\[
L = \beta_1 L_{det} + \beta_2 L_{ckt} + \beta_3 L_{rkd}, \tag{7}
\]

where \( \beta_1, \beta_2, \beta_3 \) are set heuristically to 1, 10, and 5.
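The following is a minimal PyTorch sketch of the reweighting distillation loss of Eq. (5), reading \( \alpha_i \) as a softmax over proposal objectness scores, together with the combined objective of Eq. (7); shapes and names are illustrative assumptions.

```python
# A minimal PyTorch sketch of the RKD loss of Eq. (5) with alpha_i read as
# a softmax over proposal objectness, plus the combined objective of Eq. (7).
# Shapes and names are illustrative assumptions.
import torch

def rkd_loss(feat_s, feat_t, logit_s, logit_t, objectness, base_mask):
    # feat_*: (K, d) proposal features; logit_*: (K, C) classification logits
    # objectness: (K,) teacher objectness scores; base_mask: (K,) bool for Phi_B
    alpha = torch.softmax(objectness, dim=0)              # reweighting factor
    per_prop = ((feat_s - feat_t).norm(dim=-1)
                + (logit_s - logit_t).norm(dim=-1))       # feature + logit gaps
    return (alpha * per_prop)[base_mask].sum() / feat_s.shape[0]

K, d, C = 256, 128, 10
l_rkd = rkd_loss(torch.randn(K, d), torch.randn(K, d),
                 torch.randn(K, C), torch.randn(K, C),
                 torch.randn(K), torch.rand(K) > 0.5)
# Combined objective of Eq. (7) with beta = (1, 10, 5):
# loss = 1.0 * l_det + 10.0 * l_ckt + 5.0 * l_rkd
```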
### 4 Experiments

We first introduce the datasets, metrics, and implementation details for weakly incremental 3D object detection in Sec. 4.1. Then, we compare our method with baseline approaches and prior art in Sec. 4.2. Afterward, we carry out ablation studies on the effectiveness of the proposed components in Sec. 4.3. Finally, we showcase some visualization results in Sec. 4.4.

#### 4.1 Datasets, Metrics, and Implementation Details

**Datasets.** Following previous works on class-incremental 3D detection (Zhao & Lee, 2022; Zhao et al., 2022; Liang et al., 2023), we conduct experiments on two widely used datasets, SUN RGB-D (Song et al., 2015) and ScanNet (Dai et al., 2017). SUN RGB-D consists of 10,335 single-view RGB-D scans, of which 5,285 are used for training and 5,050 for validation. Each scan is annotated with rotated 3D boxes. ScanNet includes 1,201 training samples and 312 validation samples reconstructed from RGB-D sequences. We split the full category set into two non-overlapping subsets, \( C_{base} \) and \( C_{novel} \), following (Zhao & Lee, 2022).

**Metrics.** To compare the performance of different approaches under incremental settings, we adopt \( mAP_{base} \), \( mAP_{novel} \), and \( mAP_{all} \) as abbreviations for the mean Average Precision (mAP) under an IoU threshold of 0.25 on base classes, novel classes, and all classes, respectively.

**Implementation Details.** The input of our model is a point cloud \( P \in \mathbb{R}^{N \times 3} \) representing a 3D scene, where \( N \) is set to 20,000 and 40,000 for SUN RGB-D and ScanNet, respectively. Following (Zhao & Lee, 2022), base training lasts 150 epochs using an Adam optimizer (Kingma & Ba, 2014) with a batch size of 8 and a learning rate of \( 10^{-3} \), decayed to \( 10^{-4} \) and \( 10^{-5} \) at the 80th and 120th epochs, respectively. During weakly incremental learning, we copy weights from \( T^{3D} \) to initialize the student model \( S^{3D} \), and optimize \( S^{3D} \) under the supervision of both refined pseudo labels and the feature space. During both training stages, we evaluate \( S^{3D} \) every 10 epochs. All experiments are conducted on a single RTX 3090 GPU.

| Method | \(|C_{novel}| = 3\) | | | \(|C_{novel}| = 5\) | | | \(|C_{novel}| = 7\) | | |
|---|---|---|---|---|---|---|---|---|---|
| | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ |
| base-training | 53.84 | - | - | 58.54 | - | - | 50.88 | - | - |
| fine-tuning | 1.02 | 35.41 | 11.34 | 1.11 | 32.98 | 17.05 | 0.13 | 27.25 | 19.12 |
| freeze-and-add | 53.05 | 9.99 | 40.13 | 56.29 | 5.99 | 31.21 | 47.11 | 1.95 | 15.50 |
| SDCoT | 39.64 | 45.38 | 41.36 | 49.27 | 34.08 | 41.68 | 49.75 | 25.83 | 33.00 |
| **Ours** | **42.70** | **50.71** | **45.10** | **51.52** | **41.65** | **46.58** | **51.26** | **32.04** | **37.81** |
| Upper bound | 44.82 | 67.69 | 51.68 | 54.27 | 58.89 | 56.58 | 55.10 | 56.73 | 56.24 |

Table 1: Weakly incremental 3D object detection (mAP@0.25) on the SUN RGB-D validation set. All methods are first trained on base classes (\(|C_{base}| = 10 - |C_{novel}|\)) before incrementally learning the \(|C_{novel}|\) novel classes. ↑ means higher is better.

#### 4.2 Comparison with Existing Methods

We construct several baselines to study this task: 1) **Base-training** directly trains the 3D detector on base classes. 2) **Fine-tuning** tunes the whole model (except the base classifier) together with a new classifier for \( C_{novel} \). 3) **Freeze-and-add** freezes the backbone, adds a new classification head, and trains only the new head on novel classes. Additionally, we modify the training of **SDCoT** (Zhao & Lee, 2022) to fit our weakly incremental learning setting. For a fair comparison, all training settings (e.g., learning rate, optimizer, batch size) are identical across experiments. For a thorough evaluation, we compare our method with all the above methods under different class-incremental settings on SUN RGB-D (Tab. 1) and ScanNet (Tab. 2). Note that under different settings, the baseline methods either suffer catastrophic forgetting or fail to learn novel concepts. For instance, when evaluated on SUN RGB-D with \(|C_{novel}| = 5\), fine-tuning achieves only 1.11 mAP\(_{base}\), and freeze-and-add achieves only 5.99 mAP\(_{novel}\). The former suffers severe catastrophic forgetting of base classes, while the latter cannot learn new classes effectively.
Additionally, our method also surpasses SDCoT, which achieves 49.27% mAP\(_{base}\), 34.08% mAP\(_{novel}\), and 41.68% mAP\(_{all}\) when \(|C_{novel}| = 5\), while our framework achieves 51.52% mAP\(_{base}\), 41.65% mAP\(_{novel}\) (+7.57%), and 46.58% mAP\(_{all}\) (+4.9%) under the same task setting on SUN RGB-D. In contrast to SDCoT, which degrades significantly when novel classes are introduced, our method strikes a balance between base and novel classes across different class-incremental settings. These phenomena are consistent across experiments on both datasets.

| Method | \(|C_{novel}| = 6\) | | | \(|C_{novel}| = 9\) | | | \(|C_{novel}| = 12\) | | |
|---|---|---|---|---|---|---|---|---|---|
| | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ |
| base-training | 51.01 | - | - | 58.37 | - | - | 64.70 | - | - |
| fine-tuning | 1.66 | 27.42 | 10.24 | 2.42 | 20.72 | 11.57 | 3.96 | 17.32 | 12.87 |
| freeze-and-add | 50.33 | 1.96 | 34.21 | 58.10 | 2.08 | 30.09 | 63.30 | 1.56 | 22.14 |
| SDCoT | 38.97 | 23.45 | 33.80 | 47.46 | 20.07 | 33.77 | 51.99 | 16.83 | 28.55 |
| **Ours** | **41.26** | **30.17** | **37.56** | **49.61** | **29.82** | **39.72** | **52.94** | **27.37** | **35.89** |
| Upper bound | 52.85 | 61.31 | 55.67 | 59.40 | 51.73 | 55.56 | 63.59 | 51.40 | 55.46 |

Table 2: Weakly incremental 3D object detection (mAP@0.25) on the ScanNet validation set. All methods are first trained on base classes (\(|C_{base}| = 18 - |C_{novel}|\)) before incrementally learning the \(|C_{novel}|\) novel classes. ↑ means higher is better.

#### 4.3 Ablation Study and Analysis

In this section, we conduct ablation studies on the effectiveness of the proposed components. Unless otherwise specified, the following experiments are conducted on SUN RGB-D under the \(|C_{novel}| = 5\) setting.

**Effectiveness of Pseudo Label Refinement (PRF).** For a thorough comparison, we include several baselines in Tab. 3: directly training with the coarse 3D pseudo labels ("-"), Non-Maximum Suppression (Neubeck & Van Gool, 2006) ("NMS"), and PRF without the Binary Classification Header ("PRF w/o BCH"). The full PRF model clearly improves the detection performance on novel classes (+3.25% mAP\(_{novel}\)). Since NMS is designed to drop duplicated box estimations, it cannot handle the noisy 3D pseudo labels generated from 2D box estimations well. Additionally, BCH efficiently selects higher-quality pseudo labels and further improves detection performance (+0.96% mAP\(_{novel}\)).

**The Input of Pseudo Label Refinement (PRF).** In Tab. 4 we investigate the input design of the PRF module. Using only the coarse 3D pseudo boxes' spatial coordinates or only the contextual information from the input point cloud severely degrades detection performance, since either input alone is insufficient to refine the coarse 3D pseudo boxes. The model achieves optimal performance when both are used as input.
| Pseudo label denoising | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ |
|---|---|---|---|
| - | 50.58 | 37.44 | 44.01 |
| NMS | 50.78 | 37.35 | 44.07 |
| PRF w/o BCH | 51.20 | 40.69 | 45.95 |
| PRF | 51.52 | 41.65 | 46.58 |

Table 3: **Effectiveness of pseudo label refinement.** We analyze whether removing the pseudo label refinement module (PRF) affects weakly incremental learning performance on the SUN RGB-D dataset.

| box coord | point cloud | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ |
|---|---|---|---|---|
| | | 50.58 | 37.44 | 44.01 |
| ✓ | | 49.73 | 23.02 | 36.38 |
| | ✓ | 45.43 | 8.64 | 27.04 |
| ✓ | ✓ | 51.52 | 41.65 | 46.58 |

Table 4: **Ablation experiments on the input of PRF.** The model achieves the best results only when both the positional information of the bounding box and the contextual information from the point cloud are taken into account.

| Method | Vanilla | Ours |
|---|---|---|
| Faster R-CNN | 47.72 | 50.63 |
| Grounding DINO | 47.78 | 51.52 |
| 2D Oracle | 50.38 | 52.58 |

Table 5: **Robustness to different 2D teachers.** We run ablation studies to validate the robustness of our method to different 2D teachers. "Vanilla" denotes the baseline without PRF (details in Sec. 3.3), CKT, and RKD (details in Sec. 3.4).

**Robustness to different 2D teachers.** To validate the robustness of our approach across different 2D teachers, we employ three distinct 2D teachers in our framework: "Faster R-CNN" (Girshick, 2015), "Grounding DINO" (Liu et al., 2023a), and 2D human annotations ("2D Oracle"). Specifically, we train Faster R-CNN using 2D box annotations from the SUN RGB-D dataset (Song et al., 2015), and we directly run Grounding DINO on SUN RGB-D images without any fine-tuning. "2D Oracle" denotes boxes annotated by human experts on these images. The results in Tab. 5 demonstrate the improvements achieved by our approach with each of the 2D teachers. Our method not only improves performance with existing detectors, e.g., a +3.74% improvement on base classes and a +9.49% improvement on novel classes with Grounding DINO, but also yields an enhancement of nearly +7% across all categories when applied to manually annotated 2D results, which makes training a 3D network with 2D prompts a viable possibility.

**Effectiveness of Bipartite Cross-modal Knowledge Transfer.** In Tab. 6 we compare our proposed Cross-modal Knowledge Transfer (CKT) strategy with the "one-to-many" assignment strategy built into VoteNet and with the baseline without \( L_{ckt} \). The "one-to-many" strategy performs even worse than the baseline without feature-level supervision (-0.35% mAP\(_{novel}\)), because of the noisy regional representations caused by object occlusion in 2D images. Meanwhile, our proposed CKT helps \( S^{3D} \) learn robust representations of novel knowledge (+1.43% mAP\(_{novel}\)).

**Comparison with Other Knowledge Distillation Strategies.** In Tab. 7 we compare the effectiveness of our proposed Reweighting Knowledge Distillation (RKD) strategy with other commonly used knowledge distillation strategies. Specifically, Hinton et al.
(2015) compute the Kullback-Leibler (KL) divergence, while Zhao & Lee (2022) compute the \( \ell_2 \) distance between the semantic logits of each proposal from the teacher and student models. As shown in Tab. 7, our proposed RKD achieves higher performance on both base (51.52 mAP\(_{base}\)) and novel (41.65 mAP\(_{novel}\)) classes.

| Matching Strategy | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ |
|---|---|---|---|
| - | 51.28 | 40.22 | 45.75 |
| One-to-Many | 50.76 | 39.87 | 45.32 |
| One-to-One (ours) | 51.52 | 41.65 | 46.58 |

Table 6: Performance of cross-modal knowledge transfer (CKT) via bipartite matching. We compare CKT with bipartite matching against the unmatched approach. "-" denotes the absence of CKT.

| Distillation | mAP\(_{base}\)↑ | mAP\(_{novel}\)↑ | mAP\(_{all}\)↑ |
|---|---|---|---|
| Hinton et al. | 51.13 | 39.14 | 45.13 |
| Zhao et al. | 50.41 | 41.11 | 45.76 |
| RKD (ours) | 51.52 | 41.65 | 46.58 |

Table 7: Effectiveness of reweighting knowledge distillation (RKD) for weakly incremental 3D object detection. We compare our proposed RKD with other commonly used knowledge distillation strategies. "-" denotes that no distillation is used.

Figure 5: Visualization of detection results. Our proposed method generates tight bounding boxes for both novel and base classes in these complex and diverse scenes. 3D ground-truth annotations are marked for base and novel classes, respectively.

#### 4.4 Qualitative Results

We showcase qualitative results of our method on SUN RGB-D (Song et al., 2015) and ScanNet (Dai et al., 2017) in Fig. 5. Our method is capable of generating tight bounding boxes for both novel and base classes.

### 5 Conclusions and Limitations

In this paper, we make a first attempt to address Weakly Incremental 3D object Detection (WI3D), a new task that studies how to give a 3D detector the ability to continually localize and recognize novel classes through cost-effective 2D visual prompts. By learning from both inter- and intra-modal teachers, we propose (1) a pseudo label denoising technique to improve the quality of noisy 3D pseudo labels generated from visual prompts, and (2) concept representation learning in feature space for both base and novel classes. Experiments on SUN RGB-D and ScanNet validate that our framework surpasses all baselines, including the previous approach to class-incremental 3D object detection. However, our method has the following limitations: 1) the proposed framework currently cannot handle novel categories that are not included in the vocabulary of 2D models; 2) there is still a gap between our results and those obtained using 3D annotations for novel classes. We leave these for future work and hope that our efforts on label-efficient 3D class-incremental learning will inspire future exploration in this community.

### REFERENCES

Fabio Cermelli, Dario Fontanel, Antonio Tavera, Marco Ciccone, and Barbara Caputo. Incremental learning in semantic segmentation from image labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4371–4381, 2022.

Xinlei Chen and Kaiming He. Exploring simple siamese representation learning, 2020.
Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5828–5839, 2017. Tao Feng, Mang Wang, and Hangjie Yuan. Overcoming catastrophic forgetting in incremental object detection via elastic response distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9427–9436, 2022. Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448, 2015. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83–97, 1955. Wenqi Liang, Gan Sun, Chenxi Liu, Jiahua Dong, and Kangru Wang. I3dod: Towards incremental 3d object detection via prompting. arXiv preprint arXiv:2308.12512, 2023. Chuang Lin, Peize Sun, Yi Jiang, Ping Luo, Lizhen Qu, Gholamreza Haffari, Zehuan Yuan, and Jianfei Cai. Learning object-language alignments for open-vocabulary object detection. In The Eleventh International Conference on Learning Representations, 2022. Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, and Lei Zhang. Grounding dino: Marrying dino with grounded pre-training for open-set object detection, 2023a. Yaoyao Liu, Bernt Schiele, Andrea Vedaldi, and Christian Rupprecht. Continual detection transformer for incremental object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 23799–23808, 2023b. Ze Liu, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. Group-free 3d object detection via transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2949–2958, 2021. Yuheng Lu, Chenfeng Xu, Xiaobao Wei, Xiaodong Xie, Masayoshi Tomizuka, Kurt Keutzer, and Shanghang Zhang. Open-vocabulary point-cloud object detection without 3d annotation. arXiv preprint arXiv:2304.00788, 2023. Ishan Misra, Rohit Girdhar, and Armand Joulin. An end-to-end transformer model for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2906–2917, 2021. Alexander Neubeck and Luc Van Gool. Efficient non-maximum suppression. In 18th international conference on pattern recognition (ICPR’06), volume 3, pp. 850–855. IEEE, 2006. Can Peng, Kun Zhao, and Brian C Lovell. Faster ilod: Incremental learning for object detectors based on faster rcnn. Pattern Recognition Letters, 140:109–115, 2020. Liang Peng, Senbo Yan, Boxi Wu, Zheng Yang, Xiaofei He, and Deng Cai. Weakm3d: Towards weakly supervised monocular 3d object detection. In International Conference on Learning Representations, 2022.
NSVtmmzeRB
Are there any architectural / data preprocessing differences between the diffusion models (e.g. EDM) and the proposed GeoBFN? Are all improvements in performance attributable to the new training / sampling algorithm given by the Bayesian Flow Networks formulation [1]? Though the derivation is different, the training loss (Eq. 19) ultimately looks very similar to a diffusion model loss.
Unified Generative Modeling of 3D Molecules via Bayesian Flow Networks

Yuxuan Song\textsuperscript{1*}, Jingjing Gong\textsuperscript{1*}, Hao Zhou\textsuperscript{1}, Mingyue Zheng\textsuperscript{2}, Jingjing Liu\textsuperscript{1} & Wei-Ying Ma\textsuperscript{1}
\textsuperscript{1} Institute of AI Industry Research (AIR), Tsinghua University
\textsuperscript{2} Shanghai Institute of Materia Medica, Chinese Academy of Sciences
\{songyuxuan, gongjingjing, zhouhao, maweiying\}@air.tsinghua.edu

**Abstract.** Advanced generative models (e.g., diffusion models), derived from simplified continuity assumptions about the data distribution, have shown promising progress but remain difficult to apply directly to geometry generation applications due to the multi-modality and noise-sensitive nature of molecular geometry. This work introduces Geometric Bayesian Flow Networks (GeoBFN), which naturally fit molecular geometry by modeling diverse modalities in the differentiable parameter space of distributions. GeoBFN maintains the SE(3)-invariant density modeling property by incorporating equivariant inter-dependency modeling on the parameters of distributions and unifying the probabilistic modeling of different modalities. Through optimized training and sampling techniques, we demonstrate that GeoBFN achieves state-of-the-art performance on multiple 3D molecule generation benchmarks in terms of generation quality (90.87% molecule stability on QM9 and 85.6% atom stability on GEOM-DRUG). GeoBFN can also sample with any number of steps to reach an optimal trade-off between efficiency and quality (e.g., a 20× speedup without sacrificing performance).

### 1 Introduction

Molecular geometries can be represented as three-dimensional point clouds, characterized by their Cartesian coordinates in space and enriched with descriptive features. For example, proteins can be represented as proximity spatial graphs (Jing et al., 2021) and molecules as atomic graphs in 3D (Schütt et al., 2017). Thus, learning geometric generative models has the potential to benefit scientific discoveries such as material and drug design. Recent progress in deep generative modeling has paved the way for geometric generative modeling. For example, Gebauer et al. (2019); Luo & Ji (2021) and Satorras et al. (2021a) use autoregressive models and flow-based models, respectively, for generating 3D molecules in silico. Most recently, inspired by the huge success of the diffusion model (DM) in image generation (Meng et al., 2022; Ho et al., 2020) and beyond (Li et al., 2022), DMs incorporating geometric symmetries have been widely explored for geometry generation (Hoogeboom et al., 2022; Xu et al., 2023).

Figure 1: The framework of GeoBFN.

*Equal contribution. Correspondence to Hao Zhou (zhouhao@air.tsinghua.edu).
\textsuperscript{1}The scores are reported at 1k sampling steps for a fair comparison; our scores could be further improved with sufficiently more sampling steps.

However, two major challenges remain in directly applying DMs to molecular geometry: multi-modality and noise sensitivity. The multi-modality issue refers to the dependency on diverse data forms to effectively describe the atomic-level geometry of a molecule. For instance, the continuous variable of atom coordinates is essential for describing the spatial arrangement, while either the discretised atom charge or categorical atom types are employed to completely determine the molecule's composition.
Noise sensitivity refers to the fact that perturbing the atom coordinates not only changes the values of the variables but also significantly alters the relationships among atoms, since the Euclidean distances change as well. Therefore, a small amount of noise on the atom coordinates can cause a sudden drop of signal at the molecule level. To alleviate these issues, Xu et al. (2023) introduce a latent space for alleviating the inconsistency of unified Gaussian diffusion over different modalities; Anand & Achim (2022) propose decomposed modeling of different modalities; and Peng et al. (2023) use different noise schedulers for different modalities to accommodate noise sensitivity. However, these methods either depend on sophisticated, artifact-prone designs or lack guarantees or constraints on the designed space.

In this work, we propose Geometric Bayesian Flow Networks (GeoBFN) to model 3D molecular geometry in a fundamentally different way. Bayesian Flow Networks (BFNs) (Graves et al., 2023) are a novel generative model developed quite recently. By incorporating Bayesian inference to modify the parameters of a collection of independent distributions, BFNs bring a fresh perspective to geometric generative modeling. Firstly, GeoBFN uses a unified probabilistic modeling formulation for the different modalities in molecular geometry. Secondly, for the variable of 3D atom coordinates, the input variance of BFNs is considerably lower than that of DMs, leading to better compatibility with the inherent noise sensitivity. Further, we bring geometric symmetries into the Bayesian update procedure through an equivariant inter-dependency modeling module. We also demonstrate that the density function of the implied generative distribution is SE(3) invariant and that the generative process of iterative updating is roto-translationally equivariant. Thirdly, with BFNs' powerful probabilistic modeling capacity, the 3D molecular geometry representation can be further optimized into a representation with only two similar modalities: discretised charge and continuous atom coordinates. The mode-redundancy issue on discretised variables in the original BFNs is fixed by an early mode-seeking sampling strategy in GeoBFN. By operating on a space with less variance, GeoBFN can sample with any number of steps, providing a superior trade-off between efficiency and quality (a 20× speedup with competitive performance). Besides, GeoBFN is a general framework that can easily be extended to other molecular tasks.

We conduct thorough evaluations of GeoBFN on multiple benchmarks, including both unconditional and property-conditioned molecule generation tasks. Results demonstrate that GeoBFN consistently achieves state-of-the-art generation performance on molecule stability and other metrics. Empirical studies also show a significant improvement in controllable generation and demonstrate that GeoBFN enjoys a significantly higher modeling capacity and inference efficiency.

### 2 PRELIMINARIES

#### 2.1 SE(3) Invariant Density Modeling

To distinguish the geometry representation from the atomic property features, we use the tuple $g = \langle x, h \rangle$ to represent a 3D molecule. Here $x = (x^1, \ldots, x^N) \in \mathbb{R}^{N \times 3}$ is the atom coordinate matrix, and $h = (h^1, \ldots, h^N) \in \mathbb{R}^{N \times d}$ is the node feature matrix, e.g., atomic types and charges.

Density estimation on 3D molecules should satisfy specific symmetry conditions of the geometry. In this work, we focus on transformations $T_g$ in the Special Euclidean group SE(3), i.e., the group of rotations and translations in 3D space, where a transformation $T_g$ can be represented by a translation $t$ and an orthogonal rotation matrix $R$. For a generative model on molecular geometry with underlying density function $p_\theta(\langle x, h \rangle)$, the likelihood should not be influenced by rotation or translation of the entire molecule; that is, the likelihood function should be SE(3) invariant in the input coordinates: $p_\theta(\langle x, h \rangle) = p_\theta(\langle Rx + t, h \rangle)$.
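As a small numerical illustration of this invariance requirement (not the paper's method), the sketch below builds a random rotation $R$ via QR decomposition and a translation $t$, and checks that pairwise interatomic distances, the natural SE(3)-invariant features a density model can rely on, are unchanged; the helper names are ours.

```python
# A minimal NumPy check that pairwise interatomic distances are preserved
# under a random SE(3) transform x -> Rx + t; helper names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))   # orthogonal matrix from QR

def pairwise_dist(x):
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

x = rng.normal(size=(5, 3))          # toy atom coordinates
R, t = random_rotation(), rng.normal(size=(1, 3))
x_t = x @ R.T + t                    # apply T_g
assert np.allclose(pairwise_dist(x), pairwise_dist(x_t), atol=1e-6)
```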
#### 2.2 Bayesian Flow Networks

Bayesian Flow Networks (BFNs) are based on the following latent variable model: to learn the probability distribution $p_\theta$ over $g$, a series of noisy versions $(y_1, \cdots, y_n)$ of $g$ are introduced as latent variables, and the variational lower bound of the likelihood is optimized:

\[
\log p_\theta(g) \geq \mathbb{E}_{y_1,\ldots,y_n \sim q} \left[ \log \frac{p_\phi(g, y_1,\ldots,y_n)}{q(y_1,\ldots,y_n \mid g)} \right] = -D_{KL}\big(q \,\|\, p_\phi(y_1,\ldots,y_n)\big) + \mathbb{E}_{y_1,\ldots,y_n \sim q} \log p_\phi(g \mid y_1,\ldots,y_n), \tag{1}
\]

where $q$ is the variational distribution. The prior distribution over the latent variables is organized autoregressively, i.e., $p_\phi(y_1,\ldots,y_n) = p_\phi(y_1)\, p_\phi(y_2 \mid y_1) \cdots p_\phi(y_n \mid y_{n-1},\ldots,y_1)$, which also implies the data generation procedure $y_1 \rightarrow \cdots \rightarrow y_n \rightarrow g$ (note: this procedure only specifies the generation order and does NOT imply the Markov property in the following derivation). One widely adopted intuition for the generation process is that the information about the data sample should progressively increase along the above chain, e.g., from noisier images to cleaner images. The key motivation of BFNs is that the information along the latent variables should change as smoothly as possible for all modalities, including discretised and discrete variables. To this end, BFNs operate on distributions in the parameter space, in contrast to the sample space.

We introduce the components of BFNs one by one (Fig. 2a). Firstly, the variational distribution $q$ is defined as:

\[
q(y_1,\ldots,y_n \mid g) = \prod_{i=1}^{n} p_S(y_i \mid g; \alpha_i), \tag{2}
\]

where $p_S(y_i \mid g; \alpha_i)$ is termed the sender distribution, which can be seen as adding noise to the data according to a predefined accuracy $\alpha_i$. Secondly, to define $p_\phi$, BFNs first transfer the noisy sample $y$ to the parameter space, obtaining $\theta$, then apply a Bayesian update in the parameter space, and finally transfer back to the noisy sample space. To clarify, $\theta$ refers to the parameters of distributions in the sample space, e.g., the mean/variance of a Gaussian or the probabilities of a categorical distribution. Within BFNs, the distributions on the sample space are factorized by default, e.g., $p(g \mid \theta) = \prod_{d=1}^{D} p(g^{(d)} \mid \theta^{(d)})$. Thirdly, a neural network $\Phi$ takes $\theta$ as input and models the dependency among different dimensions so as to recover the distribution of the original sample $g$.
The output of the neural network, $\Phi(\theta)$, still lies in the parameter space, and we term it the parameter of the output distribution $p_O$, where $p_O(y \mid \theta; \phi) = \prod_{d=1}^{D} p_O(y^{(d)} \mid \Phi(\theta)^{(d)})$. To map the noisy sample $y$ to the input space, a Bayesian update is applied to $\theta$:

\[
\theta_i \leftarrow h(\theta_{i-1}, y_i, \alpha_i), \tag{3}
\]

where $h$ is called the Bayesian update function. The distribution over $(\theta_0,\ldots,\theta_{n-1})$ is then defined by the Bayesian update distribution via marginalizing out $y$:

\[
p_\phi(\theta_0,\ldots,\theta_{n-1}) = p(\theta_0) \prod_{i=1}^{n} p_U(\theta_i \mid \theta_{i-1}; \alpha_i), \tag{4}
\]

where $p(\theta_0)$ is a simple prior for ease of generation, e.g., a standard normal, and $p_U$ can be obtained from Eq. (3):

\[
p_U(\theta_i \mid \theta_{i-1}; \alpha_i) = \mathbb{E}_{p_R(y_i \mid \theta_{i-1}; \alpha_i)}\, \delta\big(\theta_i - h(\theta_{i-1}, y_i, \alpha_i)\big), \tag{5}
\]

with $\delta$ the Dirac delta distribution. $p_R(y_i \mid \theta_{i-1}; \alpha_i) = \mathbb{E}_{p_O(x_i \mid \theta_{i-1}; \phi)}\, p_S(y_i \mid x_i; \alpha_i)$ is also called the receiver distribution. Finally, we map $\Phi(\theta)$ back to the noisy sample space by combining the known form and accuracy of $p_S$ and marginalizing out $y$:

\[
p_\phi(y_1,\ldots,y_n) = p_\phi(y_1) \prod_{i=2}^{n} p_\phi(y_i \mid y_{1:i-1}) = \prod_{i=1}^{n} p_\phi(y_i \mid \theta_{i-1}) = \prod_{i=1}^{n} \mathbb{E}_{p_O(x_i \mid \theta_{i-1}; \phi)} \big[p_S(y_i \mid x_i; \alpha_i)\big], \tag{6}
\]

where we use $\theta_{0:n-1}$ to abbreviate $(\theta_0, \ldots, \theta_{n-1})$, and similarly for $y$. Having defined $q$ and $p_\phi(y_1, \ldots, y_n)$, and noting that $p_\phi(g \mid y_1, \ldots, y_n)$ is simply $p_O(g \mid \theta_n)$ on each sample, Eq. (1) can be estimated.

### 3 METHODOLOGY

#### 3.1 SE(3) Invariant Geometry Density Modeling

As discussed in Sec. 2.1, a generative model of 3D molecular geometry must satisfy the SE(3) invariance conditions. Recalling the formulation of the geometry $g = \langle x, h \rangle$, we denote the latent variables (e.g., noisy samples) of $g$ by $y^g$. We are interested in imposing the SE(3) invariance conditions on the probabilistic model $p_\phi$. To this end, we first reformulate the likelihood function:

\[
p_\phi(g) = p_\phi(\langle x, h \rangle) = \int_{y^g_1, \ldots, y^g_n} p_\phi(g \mid y^g_1, \ldots, y^g_n)\, p_\phi(y^g_1, \ldots, y^g_n)\, dy^g_1 \ldots dy^g_n. \tag{7}
\]

With $\theta^x$ and $y^x$ and all the distributions defined as above, we focus on the variables corresponding to $x$ in the geometry $y^g$. We then have the following theorem:

**Theorem 3.1 (SE(3) Invariance Conditions).**

• With $\theta^x$, $y^x$, and $x$ constrained to the zero Center of Mass (CoM) space (Köhler et al., 2020; Xu et al., 2022), the likelihood function $p_\phi$ is translation invariant.
• When the following properties are satisfied, the likelihood function $p_\phi$ is rotation invariant:

$$p_O(x' \mid \theta^x_{i-1}; \phi) = p_O(Rx' \mid R\theta^x_{i-1}; \phi); \quad p_S(y^x \mid x'; \alpha) = p_S(Ry^x \mid Rx'; \alpha);$$
$$h(R\theta^x_{i-1}, Ry^x_i, \alpha_i) = Rh(\theta^x_{i-1}, y^x_i, \alpha_i); \quad p(x' \mid \theta^x_0) = p(Rx' \mid \theta^x_0), \;\; \forall \text{ orthogonal } R.$$

**Proposition 3.2.** With the conditions in Theorem 3.1 satisfied, the evidence lower bound objective of Eq. (7), i.e.,

\[
L_{VLB}(x) = \mathbb{E}_{p_\phi(\theta^x_0, \ldots, \theta^x_n)} \left[ \sum_{i=1}^{n} D_{KL}\big(p_S(\cdot \mid x; \alpha_i) \,\|\, p_R(\cdot \mid \theta^x_{i-1}; \alpha_i)\big) - \log p_\phi(x \mid \theta^x_n) \right], \tag{8}
\]

with the Bayesian update distribution $p_\phi(\theta^x_0, \ldots, \theta^x_n) = \prod_{i=1}^{n} p_U(\theta_i \mid \theta_{i-1}, x; \alpha_i)$ defined analogously to Eq. (4), $p_R(\cdot \mid \theta^x_{i-1}; \alpha_i) = \mathbb{E}_{p_O(x' \mid \theta^x_{i-1}; \phi)} [p_S(y_i \mid x'; \alpha_i)]$, and $p_\phi(x \mid \theta^x_n) = p_O(x \mid \theta^x_n; \phi)$, is also SE(3) invariant if $p_U(\theta_i \mid \theta_{i-1}, x; \alpha_i) = p_U(R\theta_i \mid R\theta_{i-1}, Rx; \alpha_i)$. The derivation from Eq. (7) to Eq. (8) is given in Appendix C.4, and we leave the formal proofs of Theorem 3.1 and Proposition 3.2 to Appendix C.

#### 3.2 Geometric Bayesian Flow Networks

We now introduce the detailed formulation of Geometric Bayesian Flow Networks (GeoBFN) based on the analysis in Sec. 3.1. To describe a 3D molecular geometry $g = \langle x, h \rangle$, various representations can be utilized for the node features. The atom types $h_t$ and atomic charges $h_c$ are commonly employed, the former being discrete (categorical) and the latter discretised (integer). Together with the continuous variable of atom coordinates $x$, the network module in the modeling of the output distribution of GeoBFN can be parameterized with an equivariant graph neural network (EGNN) (Satorras et al., 2021b) $\Phi$:

\[
\Phi(R\theta^x + t, [\theta^{h_t}, \theta^{h_c}]) = [R\theta'^x + t, \theta'^{h_t}, \theta'^{h_c}], \quad \forall R, t \tag{9}
\]
[2023]), the simple form of Bayesian update function could be derived as: \[ h(\{\mu_{i-1}, \rho_{i-1}\}, y, \alpha) = \{\mu_i, \rho_i\}, \quad \text{Here} \quad \rho_i = \rho_{i-1} + \alpha, \quad \mu_i = \frac{\mu_{i-1}\rho_{i-1} + y\alpha}{\rho_i} \] (11) As shown in Eq.11, the randomness only exists in \( \mu \), and the corresponding Bayesian update distribution in Eq.8 is as: \[ p_U(\theta_i | \theta_{i-1}, x; \alpha) = N\left(\mu_i | \frac{\alpha x + \mu_{i-1}\rho_{i-1}}{\rho_i}, \frac{\alpha}{\rho_i^2}I\right) \] (12) The above discrete-time Bayesian update could be easily extended to continuous-time, with an accuracy scheduler defined as \( \beta(t) = \int_{t=0}^{t} \alpha(t') dt', t \in [0, 1] \). Given the accuracy additive property of \( p_U \) (proof given by Graves et al. [2023]), \[ E_{p_U(\theta_{i-1} | \theta_{i-2}, x; \alpha_a)}[p_U(\theta_i | \theta_{i-1}, x; \alpha_b)] = p_U(\theta_i | \theta_{i-2}, x; \alpha_a + \alpha_b), \] the Bayesian flow distribution could be obtained as: \[ p_F(\theta^x | x; t) = p_U(\theta^x | \theta_0, x; \beta(t)) \] (13) The key difference of atom coordinates \( x \) and charges \( h_c \) lies in the design of the output distribution. For continuous variable \( x \), the network module \( \Phi \) directly outputs an estimated \( \hat{x} = \Phi(\theta^g, t) \). Hence for timestep \( t \), the output distribution is \[ p_O(x' | \theta^g, t; \phi) = \delta(x - \Phi(\theta^g, t)) \] (14) While for discretized variable \( h_c \), the network module will output two variables, \( \mu_{h_c} \) and \( \ln \sigma_{h_c} \) with dimension equivalent to \( h_c \) which implies a distribution \( N(\mu_{h_c}, \sigma^2_{h_c}I) \). With a \( K \)-bins discretized variable, the support is split into \( K \) buckets with each bucket \( k \) centered as \( k_c = \frac{2k-1}{K} \) and left boundary as \( k_l = k_c - \frac{1}{K} \) and right boundary as \( k_r = k_c + \frac{1}{K} \). Then for each \( k \), the probability is the mass from \( k_l \) to \( k_r \), i.e., \( \int_{k_l}^{k_r} N(\mu_{h_c}, \sigma^2_{h_c}I) \). And the first and last bins are curated by making sure the sum of the probability mass is 1. Then the output distribution is: \[ p_O(h_c | \theta^g, t; \phi) = \prod_{d=1}^{D} p_O^{(d)}\left(k\left(h_c^{(d)}\right) | \theta^g, t; \phi\right), \] (15) where the function \( k(\cdot) \) maps the variable to the corresponding bucket. **Atom Types \( h_t \):** The atom types \( h_t \) are discrete variables with \( K \) categories, where the corresponding parameter space lies in probability simplex thus the procedure is slightly different from the others. The input distribution for \( h_t \) is \( p_I(h_t | \theta) = \prod_{d=1}^{D} \theta^{h_t(d)} \), where \( D \) is number if variables. And the input prior \( \theta^{h_t}_0 = \frac{1}{K} \), where \( \frac{1}{K} \) is the length \( KD \) vector whose entries are all \( \frac{1}{K} \). The sender distribution, could be derived with the central limit theorem, lies in the form of \[ p_S(y | h_t; \alpha) = N(y | \alpha(K e_{h_t} - 1), \alpha K I) \] (16) where \( 1 \) is a vector of ones, \( I \) is the identity matrix, and \( e_j \in \mathbb{R}^K \) is a vector defined as the projection from the class index \( j \) to a length \( K \) one-hot vector (proof given by Graves et al. [2023]). In other words, each element of \( e_j \) is defined as \( (e_j)_k = \delta_{jk} \), where \( \delta_{jk} \) is the Kronecker delta function. 
**Atom Types $h_t$.** The atom types $h_t$ are discrete variables with $K$ categories, whose parameter space lies in the probability simplex, so the procedure differs slightly from the above. The input distribution for $h_t$ is $p_I(h_t \mid \theta) = \prod_{d=1}^{D} \theta^{h_t^{(d)}}$, where $D$ is the number of variables, and the input prior is $\theta^{h_t}_0 = \frac{\mathbf{1}}{K}$, the length-$KD$ vector whose entries are all $\frac{1}{K}$. The sender distribution, derived via the central limit theorem, has the form

\[
p_S(y \mid h_t; \alpha) = N\big(y \mid \alpha(K e_{h_t} - \mathbf{1}), \alpha K I\big), \tag{16}
\]

where $\mathbf{1}$ is a vector of ones, $I$ is the identity matrix, and $e_j \in \mathbb{R}^K$ is the projection from the class index $j$ to a length-$K$ one-hot vector (proof given by Graves et al. (2023)). In other words, each element of $e_j$ is defined as $(e_j)_k = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta, and $e_{h_t} \overset{\text{def}}{=} (e_{h_t^{(1)}}, \ldots, e_{h_t^{(D)}}) \in \mathbb{R}^{KD}$. The Bayesian update function can be derived as $h(\theta_{i-1}, y, \alpha) = \frac{e^{y}\, \theta_{i-1}}{\sum_{k=1}^{K} e^{y_k}\, (\theta_{i-1})_k}$ (proof given by Graves et al. (2023)). Similar to Eq. (13), the Bayesian flow distribution for $h_t$ is:

\[
p_F(\theta^{h_t} \mid h_t; t) = \mathbb{E}_{N(y^{h_t} \mid \beta(t)(Ke_{h_t}-\mathbf{1}),\, \beta(t)K I)}\, \delta\big(\theta^{h_t} - \text{softmax}(y^{h_t})\big). \tag{17}
\]

With the network module $\Phi$, the output distribution is obtained as

\[
p_O^{(d)}(k \mid \theta^g; t) = \big(\text{softmax}(\Phi^{(d)}(\theta^g, t))\big)_k, \quad p_O(h_t \mid \theta; t) = \prod_{d=1}^{D} p_O^{(d)}(h_t^{(d)} \mid \theta^g; t). \tag{18}
\]

**Training Objective.** Combining the different variables, we obtain the unified continuous-time loss for GeoBFN, based on Eqs. 25–41 in Graves et al. (2023):

\[
L^\infty(g) = L^\infty(x, h_c, h_t) = \mathbb{E}_{t \sim U(0,1),\, p_F(\theta^g \mid g; t)} \left[ \frac{\alpha_g(t)}{2} \| g - \Phi(\theta^g, t) \|^2 \right]
= \mathbb{E}_{t \sim U(0,1),\, p_F(\theta^g \mid g; t)} \left[ \frac{\alpha_x(t)}{2} \| x - \Phi_x \|^2 + \frac{\alpha_{h_c}(t)}{2} \| h_c - \Phi_{h_c} \|^2 + \frac{\alpha_{h_t}(t)}{2} \| h_t - \Phi_{h_t} \|^2 \right], \tag{19}
\]

where $\Phi$ is short for $\Phi(\theta^g, t)$. The joint Bayesian flow distribution decomposes as:

\[
p_F(\theta^g \mid g; t) = p_F(\theta^x \mid x; t)\, p_F(\theta^{h_c} \mid h_c; t)\, p_F(\theta^{h_t} \mid h_t; t), \tag{20}
\]

with $\alpha_x$, $\alpha_{h_c}$, and $\alpha_{h_t}$ the corresponding accuracy schedulers (details provided by Graves et al. (2023)). $\Phi_x$ is defined as in Eq. (14), while $\Phi_{h_c}$ is defined as the weighted average of the bucket centers under the output distribution of Eq. (15), i.e., $\left( \sum_{k=1}^{K} p_O^{(1)}(k \mid \theta, t)\, k_c, \ldots, \sum_{k=1}^{K} p_O^{(D)}(k \mid \theta, t)\, k_c \right)$; and $\Phi_{h_t}$ is defined as $\sum_{k=1}^{K} p_O^{(d)}(k \mid \theta; t)\, e_k$ based on Eq. (18).

**Remark 3.3.** GeoBFN as formulated above satisfies the SE(3) invariance conditions of Theorem 3.1.

**Sampling.** GeoBFN generates samples following the graphical model in the recursive procedure illustrated in Fig. 2a, i.e., $g' \sim p_O(\cdot \mid \theta_{i-1}) \rightarrow y \sim p_S(\cdot \mid g', \alpha) \rightarrow \theta_i = h(\theta_{i-1}, y, \alpha)$.
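This recursive procedure can be sketched in a few lines for the coordinate variable; the stand-in network `phi` and the constant accuracy are illustrative assumptions, with a trained output network taking their place in practice.

```python
# A minimal NumPy sketch of the recursive sampling loop
# (output distribution -> sender -> Bayesian update) for the continuous
# variable; `phi` is a placeholder for the trained output network.
import numpy as np

rng = np.random.default_rng(0)

def phi(mu, t):
    # Placeholder: a trained GeoBFN predicts x_hat from the input parameters.
    return mu

def sample(n_steps=50, shape=(4, 3), alpha=0.1):
    mu, rho = np.zeros(shape), 1.0                # prior theta_0
    for i in range(1, n_steps + 1):
        x_hat = phi(mu, i / n_steps)                          # g' ~ p_O(.|theta_{i-1})
        y = x_hat + rng.normal(size=shape) / np.sqrt(alpha)   # y ~ p_S(.|g'; alpha)
        rho, mu = rho + alpha, (mu * rho + y * alpha) / (rho + alpha)  # theta_i = h(...)
    return phi(mu, 1.0)                            # final sample from p_O(.|theta_n)

print(sample().shape)                              # (4, 3) toy coordinates
```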
### 3.3 Overcoming Noise Sensitivity in Molecule Geometry

One key obstacle to applying diffusion models to 3D molecule generation is the noise sensitivity of molecular geometry. This property refers to the following fact: when noise is incorporated into the coordinates and displaces them significantly from their original positions, the bond distance between certain connected atoms may exceed the bond length range [1]. Under these circumstances, the point cloud can lose the critical chemical information inherently encoded in the bonded structures. Another perspective stems from the fact that when noise is added to the coordinates, the pairwise relationships (distances) between atoms change at a much faster pace, e.g., modifying the coordinates of one atom alters its distance to all other atoms. Thus, the intermediate structures in the generation procedure of diffusion models can be uninformative, with the majority of the information being acquired in the final few steps of generation (as depicted in Fig. 3).

A fundamental belief underpinning GeoBFN is that a smoother transformation during the generative process results in a more favorable inductive bias (Graves et al., 2023). This transformation occurs within the parameter space of GeoBFN and is regulated through the Bayesian update procedure: samples exhibiting higher degrees of noise are assigned less weight during the update (refer to Eq. 11). This leads to a significant reduction of variance in the parameter space (Graves et al., 2023), which in turn facilitates a smooth transformation of the molecular geometries. As illustrated in Fig. 3, this is evidenced by the gradual convergence of the intermediate structures towards the final structure, underscoring the effectiveness of the smoother transformation.

### 3.4 Optimized Discretised Variable Sampling

Previous research (Hoogeboom et al., 2022; Xu et al., 2023; Wu et al., 2022) utilizes both the atom types \( h_t \) and the charges \( h_c \) to represent atomic properties. The charge \( h_c \) usually serves only as an auxiliary loss for improving training and is not involved in determining the molecular graph during generation, due to insufficient modeling. However, there is redundant information between these two variables, since \( h_t \) and \( h_c \) have a one-to-one mapping, e.g., the charge value 6 uniquely determines the carbon atom. We found that, with advanced probabilistic modeling of discretised data, GeoBFN can conduct training and sampling with only \( x \) and \( h_c \).

However, there exists a failure case of the objective in Eq. 9 combined with the output distribution used during sampling in Eq. 15. As shown in Fig. 5, the boundary condition of clamping the cumulative probability mass into the outer buckets can cause a mismatch: the true density should be concentrated in the central bucket, while the output distribution instead puts most of its mass in the first and last buckets, causing the mode redundancy shown in the upper left of Fig. 5. Even though the weighted sum in Eq. 19 is optimized, the sampling procedure will rarely sample the central buckets. Such cases are non-negligible in our scenario, especially when the number of bins is small for low-dimensional data. To alleviate this issue, we instead update the output distribution in the sampling procedure to:

\[
\hat{k}_c(\theta, t) = \text{NEAREST\_CENTER}\left( \sum_{k=1}^{K} p_O^{(1)}(k \mid \theta, t)\, k_c, \ldots, \sum_{k=1}^{K} p_O^{(D)}(k \mid \theta, t)\, k_c \right)
\]

The function NEAREST_CENTER compares each input value to the \( K \) bucket centers \( k_c \), \( k = 1, \ldots, K \), and returns the nearest center. The updated distribution is unbiased with respect to the training objective and also reduces the variance during generation, as can be seen in the trajectory of Fig. 5.
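A minimal sketch of this snapping step, assuming, as in Graves et al. (2023), that the discretised support is \([-1, 1]\) with bucket centers \( k_c = \frac{2k-1}{K} - 1 \):

```python
import numpy as np

def nearest_center(expected, K):
    """NEAREST_CENTER: snap each expected value (the weighted average of
    bucket centers under the output distribution) to the closest of the
    K bucket centers."""
    centers = (2.0 * np.arange(1, K + 1) - 1.0) / K - 1.0
    idx = np.argmin(np.abs(np.asarray(expected)[..., None] - centers), axis=-1)
    return centers[idx]
```

Snapping removes the spurious mass that the clamped cumulative distribution places in the first and last buckets while keeping the procedure consistent with the expectation the training objective optimizes.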
### 4 Experiments

#### 4.1 Experiment Setup

**Task and Datasets** We focus on the 3D molecule generation task, following the setting of prior works (Gebauer et al., 2019; Luo & Ji, 2021; Satorras et al., 2021a; Hoogeboom et al., 2022; Wu et al., 2022). We consider both Unconditional Molecular Generation, which assesses the capability to learn the underlying molecular data distribution and generate chemically valid and structurally diverse molecules, and Conditional Molecule Generation, which evaluates the capacity to generate molecules with desired properties. For Conditional Molecule Generation, we implement a conditional version of GeoBFN, with details in the Appendix. The widely adopted QM9 (Ramakrishnan et al., 2014) and GEOM-DRUG (Gebauer et al., 2019; 2021), which contains large molecules, are used for the experiments, and the data configurations directly follow previous work (Anderson et al., 2019; Hoogeboom et al., 2022; Xu et al., 2023).

**Evaluation Metrics** The evaluation configuration follows prior works (Hoogeboom et al., 2022; Wu et al., 2022; Xu et al., 2023). For Unconditional Molecular Generation, the bond types (single, double, triple, or none) are first predicted based on pairwise atomic distances and atom types in the 10,000 generated molecular geometries (Hoogeboom et al., 2022). With the obtained molecular graphs, we evaluate quality by calculating both the atom stability and molecule stability metrics (a sketch of this computation is given below); the validity (based on RDKit) and uniqueness are also reported. Regarding Conditional Molecule Generation, we evaluate our conditional version of GeoBFN on QM9 with 6 properties: polarizability $\alpha$, orbital energies $\varepsilon_{\text{HOMO}}$ and $\varepsilon_{\text{LUMO}}$ and their gap $\Delta\varepsilon$, dipole moment $\mu$, and heat capacity $C_v$. Following previous work (Hoogeboom et al., 2022; Xu et al., 2023), the conditional GeoBFN is fed with a range of property values $s$ to generate samples, and the same pre-trained classifier $w$ is used to measure the property $\hat{s}$ of each generated molecule. The Mean Absolute Error (MAE) between $s$ and $\hat{s}$ is calculated to measure whether the generated molecules match the conditioned property.

**Baselines** GeoBFN is compared with several advanced baselines, including G-Schnet (Gebauer et al., 2019), Equivariant Normalizing Flows (ENF) (Satorras et al., 2021a), and Equivariant Graph Diffusion Models (EDM) with its non-equivariant variant (GDM) (Hoogeboom et al., 2022). Among recent advancements, EDM-Bridge (Wu et al., 2022), which improves upon EDM by incorporating well-designed informative prior bridges, and GeoLDM (Xu et al., 2023), which applies a latent-space diffusion model, are also included. To yield a fair comparison, all method-agnostic configurations are set the same. Implementation details can be found in Appendix B.

---
²The official implementation is at https://github.com/AlgoMole/GeoBFN
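The stability metrics above can be summarized in a short sketch; the valence table here is an illustrative subset, and the full evaluation code of Hoogeboom et al. (2022) handles more elements and multiple allowed valencies.

```python
import numpy as np

# Illustrative subset of allowed valencies for QM9 elements.
ALLOWED_VALENCE = {"H": 1, "C": 4, "N": 3, "O": 2, "F": 1}

def stability(atom_types, bond_orders):
    """An atom is stable when the sum of its predicted bond orders equals
    the allowed valence of its element; a molecule is stable when every
    atom is. `bond_orders` is a symmetric matrix with entries in
    {0, 1, 2, 3} obtained from pairwise distances and atom types."""
    valence = np.asarray(bond_orders).sum(axis=1)
    atom_stable = np.array(
        [v == ALLOWED_VALENCE[a] for a, v in zip(atom_types, valence)]
    )
    return atom_stable.mean(), bool(atom_stable.all())
```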
Table 1: Results of atom stability, molecule stability, validity, validity × uniqueness (V×U), and novelty on QM9 and GEOM-DRUG. A higher number indicates better generation quality. Results marked with an asterisk were obtained from our own tests, and GeoBFN$_k$ denotes sampling with $k$ steps.

| Method | QM9 Atom Sta (%) | QM9 Mol Sta (%) | QM9 Valid (%) | QM9 V×U (%) | QM9 Novelty (%) | DRUG Atom Sta (%) | DRUG Valid (%) |
|--------|------------------|-----------------|---------------|-------------|-----------------|--------------------|----------------|
| Data | 99.0 | 95.2 | 97.7 | 97.7 | - | 86.5 | 99.9 |
| ENF | 85.0 | 4.9 | 40.2 | 39.4 | - | - | - |
| G-Schnet | 95.7 | 68.1 | 85.5 | 80.3 | - | - | - |
| GDM-AUG | 97.6 | 71.6 | 90.4 | 89.5 | 74.6 | 77.7 | 91.8 |
| EDM | 98.7 | 82.0 | 91.9 | 90.7 | 58.0 | 81.3 | 92.6 |
| EDM-Bridge | 98.8 | 84.6 | 92.0 | 90.7 | - | 82.4 | 92.8 |
| GEOLDM | 98.9 ± 0.1 | 89.4 ± 0.5 | 93.8 ± 0.4 | 92.7 ± 0.5 | 57.0 | 84.4 | 99.3 |
| GeoBFN$_{50}$ | 98.28 ± 0.1 | 85.11 ± 0.5 | 92.27 ± 0.4 | 90.72 ± 0.3 | 72.9 | 75.11 | 91.66 |
| GeoBFN$_{100}$ | 98.64 ± 0.1 | 87.21 ± 0.3 | 93.03 ± 0.3 | 91.53 ± 0.3 | 70.3 | 78.89 | 93.05 |
| GeoBFN$_{500}$ | 98.78 ± 0.8 | 88.42 ± 0.2 | 93.35 ± 0.2 | 91.78 ± 0.2 | 67.7 | 81.39 | 93.47 |
| GeoBFN$_{1k}$ | 99.08 ± 0.06 | 90.87 ± 0.2 | 95.31 ± 0.1 | 92.96 ± 0.1 | 66.4 | 85.60 | 92.08 |
| GeoBFN$_{2k}$ | 99.31 ± 0.03 | 93.32 ± 0.1 | 96.88 ± 0.1 | 92.41 ± 0.1 | 65.3 | 86.17 | 91.66 |

Table 2: Mean Absolute Error for molecular property prediction with 500 sampling steps. A lower number indicates a better controllable generation result.

| Property | $\alpha$ | $\Delta\varepsilon$ | $\varepsilon_{\text{HOMO}}$ | $\varepsilon_{\text{LUMO}}$ | $\mu$ | $C_v$ |
|----------|----------|----------------------|-----------------------------|-----------------------------|------|-------|
| Units | Bohr$^3$ | meV | meV | meV | D | J/K |
| QM9* | 0.10 | 64 | 39 | 36 | 0.043 | 0.040 |
| Random* | 9.01 | 1470 | 645 | 1457 | 1.616 | 6.857 |
| N$_{\text{atoms}}$ | 3.86 | 866 | 426 | 813 | 1.053 | 1.971 |
| EDM | 2.76 | 655 | 356 | 584 | 1.111 | 1.101 |
| GEOLDM | 2.37 | 587 | 340 | 522 | 1.108 | 1.025 |
| GeoBFN | 2.34 | 577 | 328 | 516 | 0.998 | 0.949 |

Table 3: Ablation study on the charge-feature settings of GeoBFN; the number of sampling steps is set to 1,000.

| Charge Feature | Atom Stable (%) | Mol Stable (%) |
|----------------|-----------------|----------------|
| discretised_basis | 99.08 | 90.87 |
| continuous_basis | 98.97 | 89.94 |
| discrete | 98.93 | 88.93 |
| discrete + continuous | 98.96 | 89.33 |
| discrete + discretised | 98.91 | 88.65 |

Figure 4: QM9 molecule stability w.r.t. the number of sampling steps.

Figure 5: 2D synthetic case of the optimized discretised sampling. In the left columns, generated samples are in orange and data points are in blue.

#### 4.2 Main Results

The results of Unconditional Molecular Generation can be found in Tab. 1. On both the QM9 and GEOM-DRUG datasets, GeoBFN achieves a new state-of-the-art performance regarding both the quality and diversity of the generated molecules, demonstrating its strong potential for geometry generative modeling. Notably, GeoBFN does not tend to collapse onto a subset of the training data, which implies a probabilistic generalization ability that can be useful in several application scenarios. The Conditional Molecule Generation results can be found in Tab. 2. GeoBFN consistently outperforms the other baseline models by a clear margin on all conditional generation tasks, highlighting the effectiveness and generalization capability of the proposed method.

#### 4.3 Any-step Sampling

One notable property of GeoBFN is that, when trained with the continuous-time loss (e.g., Eq. 19), sampling can be conducted with an arbitrary number of steps without incurring additional training overhead (see the sketch below for how per-step accuracies are derived for any step count). As shown in Tab. 1, GeoBFN achieves performance superior to several advanced models with only 50 sampling steps, which brings a $20\times$ speed-up during sampling owing to the low-variance parameter space. As shown in Fig. 4, as the number of sampling steps increases from 50 to 4,600, the molecule stability is further boosted to approach the upper bound, e.g., 94.25% with 4,000 steps.
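The mechanism behind any-step sampling can be seen in a few lines: the continuous-time loss trains the network for all \( t \), so at sampling time we only need per-step accuracies obtained by differencing \( \beta(t) \) on an arbitrary grid. The schedule used below, \( \beta(t) = \sigma_1^{-2t} - 1 \), is the continuous-data choice from Graves et al. (2023) and is adopted here as an assumption; the value of \( \sigma_1 \) is illustrative.

```python
import numpy as np

def step_accuracies(n_steps, sigma_1=1e-3):
    """Per-step accuracies alpha_i = beta(i/n) - beta((i-1)/n) for an
    arbitrary number of steps n; no retraining is required because the
    model was trained against the continuous-time loss."""
    t = np.linspace(0.0, 1.0, n_steps + 1)
    beta = sigma_1 ** (-2.0 * t) - 1.0
    return np.diff(beta)
```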
#### 4.4 Ablation Studies

We conduct ablation studies on the effect of the input modalities in Tab. 3, trying different compositions and losses to represent the atom types. *discretised_basis* refers to the case where the charge feature is modeled as a discretised variable and the Gaussian basis, i.e., $\phi_j(x) = \exp\left(-\frac{(x-\mu_j)^2}{2\sigma^2}\right)$, is used as a functional embedding for the charge (a sketch of this embedding is given below); *continuous_basis* differs only in that the continuous loss is utilized. *discrete* refers to including the one-hot type representation; *discrete + continuous* refers to including both the one-hot type and the charge while using the continuous loss, and *discrete + discretised* is defined analogously with the discretised loss. With only the discretised variable utilized, the performance is superior to the variants that include the discrete variable, which indicates a powerful probabilistic modeling capacity and the benefit of keeping the input modalities homogeneous.
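The following is a sketch of the Gaussian basis embedding used for the charge feature; the grid range, number of bases, and width $\sigma$ are illustrative choices rather than the values used in the paper.

```python
import numpy as np

def gaussian_basis(x, n_basis=16, lo=-1.0, hi=1.0, sigma=0.1):
    """Expand each scalar charge into n_basis soft activations
    phi_j(x) = exp(-(x - mu_j)**2 / (2 * sigma**2)) over a uniform grid
    of centers mu_j, turning a single value into a smooth feature vector."""
    mu = np.linspace(lo, hi, n_basis)
    diff = np.asarray(x)[..., None] - mu
    return np.exp(-diff**2 / (2.0 * sigma**2))
```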
### 5 Related Work

Previous molecule generation studies have primarily focused on generating molecules as 2D graphs (Jin et al., 2018; Liu et al., 2018; Shi et al., 2020), but there has been increasing interest in 3D molecule generation. G-Schnet and G-SphereNet (Gebauer et al., 2019; Luo & Ji, 2021) employ autoregressive techniques to create molecules step by step, progressively connecting atoms or molecular fragments. These frameworks have also been extended to structure-based drug design (Li et al., 2021; Peng et al., 2022; Powers et al., 2022). Other approaches use atomic density grids and generate the entire molecule in a single step by producing a density over the voxelized 3D space (Masuda et al., 2020). Most recently, attention has shifted towards using diffusion models for 3D molecule generation (Hoogeboom et al., 2022; Wu et al., 2022; Peng et al., 2023; Xu et al., 2023), with successful applications in target drug generation (Lin et al., 2022), antibody design (Luo et al., 2022), and protein design (Anand & Achim, 2022; Trippe et al., 2022). Our method, in contrast, is based on the Bayesian Flow Network objective (Graves et al., 2023) and hence belongs to a different model family, fundamentally differing from this line of research in both training and generation.

### 6 Conclusion

We introduce GeoBFN, a new generative framework for molecular geometry. GeoBFN operates in a differentiable parameter space for variables from different modalities, and the low variance of this parameter space is naturally compatible with the noise sensitivity of molecular geometry. Owing to these properties, GeoBFN achieves state-of-the-art performance on several 3D molecule generation benchmarks. Moreover, GeoBFN can conduct sampling with an arbitrary number of steps to reach an optimal trade-off between efficiency and quality (e.g., a $20\times$ speed-up without sacrificing performance).

ACKNOWLEDGMENTS

The authors thank Yanru Qu for the helpful discussions and proofreading of the paper, as well as the anonymous reviewers for reviewing the draft. This work is supported by the National Science and Technology Major Project (2022ZD0117502), the Natural Science Foundation of China (62376133), and the Guoqiang Research Institute General Project, Tsinghua University (No. 2021GQG1012).

REFERENCES

Namrata Anand and Tudor Achim. Protein structure and sequence generation with equivariant denoising diffusion probabilistic models. *arXiv preprint arXiv:2205.15019*, 2022.

Brandon Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. *Advances in Neural Information Processing Systems*, 32, 2019.

Niklas Gebauer, Michael Gastegger, and Kristof Schütt. Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules. *Advances in Neural Information Processing Systems*, 32, 2019.

Niklas WA Gebauer, Michael Gastegger, Stefaan SP Hessmann, Klaus-Robert Müller, and Kristof T Schütt. Inverse design of 3d molecular structures with conditional generative neural networks. *arXiv preprint arXiv:2109.04824*, 2021.

Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, and Faustino Gomez. Bayesian flow networks. *arXiv preprint arXiv:2308.07037*, 2023.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *arXiv preprint arXiv:2006.11239*, 2020.

Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022.

Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In *International Conference on Machine Learning*, pp. 2323–2332. PMLR, 2018.

Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael John Lamarre Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. In *International Conference on Learning Representations*, 2021.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations*, 2014.

Jonas Köhler, Leon Klein, and Frank Noé. Equivariant flows: Exact likelihood generative learning for symmetric densities. In *Proceedings of the 37th International Conference on Machine Learning*, 2020.

Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. Diffusion-LM improves controllable text generation. In *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/forum?id=3s9IrEsjLyk.

Yibo Li, Jianfeng Pei, and Luhua Lai. Structure-based de novo drug design using 3d deep generative models. *Chemical Science*, 12(41):13664–13675, 2021.

Haitao Lin, Yufei Huang, Meng Liu, Xuanjing Li, Shuiwang Ji, and Stan Z Li. Diffbp: Generative diffusion of 3d molecules for target protein binding. *arXiv preprint arXiv:2211.11214*, 2022.

Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, and Alexander Gaunt. Constrained graph variational autoencoders for molecule design. In *Advances in Neural Information Processing Systems*, 2018.